text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
CSSF-CLIP-HSQMBC: measurement of heteronuclear coupling constants in severely crowded spectral regions
A new pulse program development, a chemical shift selective filtration clean in-phase HSQMBC (CSSF-CLIP-HSQMBC), is presented for the user-friendly measurement of long-range heteronuclear coupling constants in severely crowded spectral regions. The introduction of the chemical shift selective filter makes the experiment extremely efficient at resolving overlapped multiplets and produces a clean selective CLIP-HSQMBC spectrum, in which the desired coupling constants can easily be measured as an extra proton–carbon splitting in f2. The pulse sequence is also provided as a real-time homonuclear decoupled version in which the heteronuclear coupling constant can be directly measured as the peak splitting in f2. The same principle is readily applicable to IPAP and AP versions of the same sequence as well as the optional TOCSY transfer, or in principle to any other selective heteronuclear experiment that relies on a clean 1H multiplet.
Introduction
Long-range heteronuclear scalar couplings contain important information about molecular relative conguration, structure identity and structural conformation. [1][2][3][4] The size of these couplings are in the same range as proton-proton scalar couplings, 0-15 Hz, but are generally more complicated to measure accurately. There is a large number of different pulse sequences available for the measurement of long range heteronuclear scalar couplings, mainly divided into TOCSY basedand HMBC/HSQMBC-based methods. The pros and cons with these have been comprehensively reviewed. [5][6][7] In short, no technique has so far proven to be a generally applicable method to measure long range couplings, but it is always a matter of choosing the most appropriate technique with regards to the amount of sample, the sample complexity (overlap), whether the carbon is protonated or not, and how many couplings need to be measured simultaneously.
A user-friendly approach for measuring long range protoncarbon scalar couplings has been proposed by Saurí et al. 8 The CLIP-HSQMBC uses a selective pulse on a proton signal to remove interfering homonuclear couplings. This is crucial to assure pure absorptive lineshapes to allow the accurate measurement of coupling constants. The HSQMBC then produces carbon coupled in-phase multiplets in which the relevant proton-carbon coupling produces an extra splitting in f2 compared to the corresponding proton multiplet in an ordinary proton 1D spectrum.
Severely crowded spectral regions present a challenge for the measurement of coupling constants in general. Spectral overlap in both the proton and carbon dimensions of non-homonuclear coupled protons is not uncommon in complex molecules e.g. for (a) multiple residues of the same amino acid in modied peptides, (b) (deoxy)-ribose moieties in nucleic acids, (c) carbohydrates, (d) pseudo-symmetric parts of small molecules (see securidine A example below), or (e) in stretches of repeated atoms like for example in molecules containing (partly unsaturated) lipids. Another case is coincidentally overlapping proton signals coupling to the same carbon through long-range scalar coupling. This can arise for example in natural products containing many aliphatic protons like modied cyclic peptides, polyketides, macrolides, steroids, saponins, terpenoids, glycosides etc. Difficulties with spectral overlap can sometimes be circumvented by coupling the experiment to a TOCSY element, which situationally allows the selection of one out of several overlapping resonances. 8 A 3D HSQC-HSQMBC approach has also been proposed to address the problem of spectral overlap, 9 as well as a J-scaled CLIP-HSQMBC. 10 In order to overcome this limitation we here propose to apply gradient enhanced proton chemical shi selective ltration (CSSF) 11 as the selection element in the CLIP-HSQMBC method. The ability to very cleanly select an unresolved multiplet in the proton dimension also results in a reduced number of observed correlations in the carbon dimension, thereby reducing the risk of inconvenient overlaps, both direct and folded/aliased, within the sampled carbon spectral width. We have denoted the pulse sequence development: CSSF-CLIP-HSQMBC.
An important limitation to get straightforward coupling measurements directly from in-phase separated peaks in the parent CLIP-HSQMBC is the requirement that no other proton coupled with the proton of interest may be excited by the selective pulse, as this will add dispersive contributions to the lineshape. It should be noted that this limitation is not overcome by the chemical shi selective lter even though it visually appears entirely clean. It is only through the selectivity of the shaped pulse that contributions from J HH couplings can be avoided, whereas the CSSF cleans up any off-resonance chemical shis that are excited by the selective pulse.
It is highly attractive to simplify the multiplet pattern of the crosspeaks, which results from the homo-and heteronuclear couplings, to a simple doublet in f2 split by the heteronuclear coupling constant. In the case where there is no spectral overlap, a PSYCHE version of HSQMBC to achieve spectrum-wide homonuclear decoupling has been reported. 12 In order to achieve homodecoupling in the CSSF-CLIP-HSQMBC experiment, a version with real-time band-selective homodecoupling (bshd) during acquisition has been prepared. 13,14 We demonstrate that even though coupling constant measurement in the direct dimension of homodecoupled spectra can be treacherous, the heteronuclear coupling constants can be reliably measured directly as the splitting of the doublet in f2. This is possible as long as certain experiment conditions were scaling occurs are avoided.
Results and discussion
The new pulse sequence development, the CSSF-CLIP-HSQMBC, makes use of a chemical shi selective lter as the means to achieve a clean in-phase selection of the proton of interest. 11 CSSF is an iterative method that adds up the onresonance signal while off-resonance contributions are eliminated by destructive averaging because of differences in chemical shi evolution. The co-addition of FIDs makes this method extremely selective and allows the measurement of scalar couplings that may otherwise have been considered unmeasurable because of severe spectral overlap. Successful selection only requires a spectral separation of 1-2 Hz in the proton resonance frequency, and is thus able to resolve multiplets that appear to coincide.
The CSSF-CLIP-HSQMBC sequence shown in Fig. 1 contains two modules: the gradient-selected (gs)-CSSF 11 and a CLIP-HSQMBC 8 sequence, followed by 1 H detection in the absence of 13 C decoupling. The rst part of the pulse sequence is the gs-CSS lter, which allows the highly selective excitation of overlapping proton signals of interest with a reported resolution of up to 1.4 Hz. The selectivity of the CSSF is given by where t max is the maximum chemical shi evolution interval and Dn is the chemical shi difference between the overlapping protons. The second part is a CLIP-HSQMBC sequence that allows the clean observation of 13 C-isotopomer signals in which both the heteronuclear and homonuclear couplings exhibit pure in-phase character. This is achieved by the selective 180 proton pulses which eliminate homonuclear coupling modulations and hence avoid signal distortions. Further, heteronuclear antiphase components are eliminated by the application of a 90 carbon pulse before acquisition resulting in pure absorptive lineshapes. In order to speed up the acquisition, non-uniform sampling can readily be applied in the indirect dimension. The full pulse sequence is attached in the ESI. †
Quinine
As a proof of principle, the CSSF-CLIP-HSQMBC was compared to the parent CLIP-HSQMBC experiment using a sample of 50 mM quinine in DMSO-d 6 (Fig. 2). Even though the H3 0 and H6 0 protons are only separated by 1.8 Hz in a proton 1D spectrum acquired at 400 MHz proton frequency, the CSSF allows the selection of the near perfectly clean individual doublets using eight added FIDs (t d0 ¼ 8) with different chemical shi evolution periods, effectively removing all off-resonance contributions ( Fig. 2c and d). The clean doublets display the Gradients G 1 and G 2 are used for coherence selection using the echo-antiecho protocol, G 4 acts as a zz-filter, and G 1 , G 3 , G 5, G 6 and G 7 flank the selective refocusing proton pulses and hard 180 pulses, respectively. The following pulsed field gradients were used: For the CSSF-CLIP-HSQMBC_bshd experiment (b) homonuclear decoupling during the acquisition time (AQ) is performed using refocusing blocks including a pair of hard and selective 180 1H pulses applied at intervals of 2d ¼ AQ/n, where n is the number of loops. couplings 3 J H3 0 H2 0 ¼ 4.5 Hz and 4 J H6 0 H8 0 ¼ 2.8 Hz. The corresponding CSSF-CLIP-HSQMBC produces clean 2D spectra consisting only of correlations originating at the selected proton and the observed splitting patterns are in this case identical to the original CLIP-HSQMBC sequence (Fig. 2e) as none of the carbons simultaneously couple strongly enough with both of the two overlapping protons (H3 0 and H6 0 ).
The multiple-bonds proton-carbon couplings can easily be measured as an extra splitting in the respective HSQMBC crosspeak. Both reference peaks are doublets and the CLIP-HSQMBC peaks are resolved doublets of doublets.
The splittings due to homonuclear couplings in f2 in the CSSF-CLIP-HSQMBC spectrum can be eliminated by applying a real-time band-selective homonuclear decoupling scheme during acquisition (pulse sequence in Fig. 1b, and spectra in Fig. 2f). This is not as uncomplicated as it might rst appear as pulsing during windowed acquisition is known to be able to cause J-scaling, phase shis and chemical shi shis. [15][16][17][18][19] We do however show empirically that, as long as the length of the acquisition blocks are approximately twice as long as the selective pulse, no detectable scaling occurs and the method is robust for measuring long-range heteronuclear couplings in the direct dimension, under real-time band-selective homonuclear decoupling conditions (Fig. S1-S4 in the ESI †).
Securidine A
The recently characterized natural product securidine A contains a spin system that has a high degree of chemical "symmetric equivalence" of positions 11-14 in an arginine sidechain spin-system, and thus results in nearly overlapped resonances that are very challenging to access (Fig. 3a). 20 In this spin system, H11 and H14 are partially overlapping in the proton dimension (Dd ¼ 6 Hz, multiplet total width ¼ 18.1 Hz), and both protons have long range CH couplings to the same carbon atoms, C12 and C13. The attached protons, H12 and H13, are completely overlapping in the proton dimension (Dd < 1 Hz). This spin system was used to challenge the CSSF-CLIP-HSQMBC sequence to evaluate the possibility of measuring the proton-carbon couplings individually in a near perfectly overlapped spin system. The CSSF could successfully select clean 6.2 Hz quartets from the partially overlapping H11 and H14 signals (Fig. 3b).
The selection prole was used in a CSSF-CLIP-HSQMBC to produce the individual H11 to C12/C13 and H14 to C12/C13 correlations. In this example the experiment was acquired with very high resolution in F1 without any interference from aliased peaks. These CLIP-HSQMBC resonances would be indistinguishable in an ordinary selective CLIP-HSQMBC (Fig. 3c). 1D cross-sections through the CSSF-CLIP-HSQMBC peaks were extracted (Fig. 4c-f) and compared to the CSSF 1D reference peaks (Fig. 4a and b). The resulting multiplets were initially too complex to allow direct measurement of the extra CH splitting. The long range 2 J C12H11 , 3 J C13H11 , 2 J C13H14 and 3 J C12H14 coupling constants were therefore measured using two different approaches, both previously described. 5 The rst method uses the sum of couplings, measuring the separation of the two outermost maxima of the multiplets. The reference peak holds the sum of all proton couplings whereas the HSQMBC cross section holds the sum of all proton couplings plus the selected n J CH coupling. The difference between the two sums yields the heteronuclear coupling constant.
The second method ts two HSQMBC cross sections to the reference peak by offsetting them (Fig. 4c-f), and the offset corresponds to the heteronuclear coupling constant. This method is preferred as it can be difficult to determine the outermost maximums of a multiplet if the signal to noise is low. Furthermore, tting two multiplets by offsetting them is more robust in the presence of small phase distortions from weak J HH coupling or imperfect purging, or in the presence of marginal second order effects.
The long-range heteronuclear proton carbon coupling constants in securidine A were thus successfully determined as follows: 2
Experimentals
Quinine NMR experiments were recorded on a 50 mM sample of quinine in DMSO-d 6 . The instrument was a Bruker Avance Neo Nanobay spectrometer operating at 400 MHz for protons, equipped with a liquid nitrogen cooled broad-band observe cryoprobe (Prodigy BBO) with cryogenic enhancement for 1 H, 2 H and all tunable X nuclei ( 15 N-31 P). Experimental parameters of the CSSF-CLIP-HSQMBC experiment (see Fig. 1): the length of the CSSF increment, D, was 12.5 ms and a 15.8 ms 180 Gaussian pulse was used as a selective 180 pulse on protons. Adiabatic CHIRP shapes with a sweep width of 60 kHz were used for inversion (0.5 ms) and refocusing (2 ms) 180 carbon pulses. Smoothed square-shaped gradients of 1 ms duration were used, followed by a recovery delay of 200 ms. The gradient amplitude ratios for G 1 : G 2 : G 3 : G 4 : G 5 are 80 : 20.1 : 33 : 50 : 17. Gradient strengths are given as percentages of the absolute gradient strength of approximately 53.5 G cm À1 . The acquisition times t 2 and t 1 were 0.78 s (spectral width 5263 Hz, 8k complex data points) and 6.36 ms (spectral width 10 060 Hz, 128 real data points), respectively. The relaxation delay was 1.5 s and 2 scans were accumulated for each of the CSSF increments (n ¼ 8), resulting in 16 scans per t 1 increment. Zero lling to 512 points in F1, 8096 points in F2 and sine-squared window function in both F1 and F2 dimensions were applied before Fourier transformation of 2D data.
Securidine A
All NMR experiments were acquired on a sample of 2.0 mg isolated securidine A dissolved in 1 : 1 DMSO-d 6 -: chloroform-d 1 . The instrument was a Bruker Avance III HD spectrometer operating at 600 MHz for protons, equipped with an inverse detected TCI Helium Cryoprobe with cryogenic enhancement for 1 H, 2 H and 13 C. The experimental parameters of the CSSF-CLIP-HSQMBC were typically: ns ¼ 8, Hz wide Gaussian pulse was used for selective refocussing of the measured proton and the maximum gradient strength was 65.7 G cm À1 , otherwise identical settings were used as for quinine above. The data was zero lled to 8k complex points in the direct dimension and forward linear predicted to 1k complex points in the indirect dimension using 2 coefficients, and multiplied with a 45 degrees sine-squared function. The 1D cross section through each peak was then tted to two reference peaks taken from the CSSF 1D, where the offset determined the n J CH coupling constant.
Conclusions
In summary, we present a development that enables the measurement of long-range heteronuclear coupling constants in severely crowded spectral regions by using a chemical shi selective lter as means to eliminate any offresonance signals in the original CLIP-HSQMBC pulse sequence. We show that an offset of as little as 1-2 Hz is enough to allow for a clean ltration and accurate measurements of coupling constants. The CSSF selection is not dependent on being part of a spin system (like a selective TOCSY for instance), hence the method is generally applicable to any coupling of interest that is not perfectly isochronous with an interfering resonance, or are mutually coupled to each other. We further present a homonulcear decoupled version of the experiment and show that it is possible to reliably measure the heteronuclear coupling constants directly as the splitting in the direct dimension as long as the acquisition blocks are at least twice as long as the selective pulse used in the bshd element.
Conflicts of interest
There are no conicts to declare. | 3,645.6 | 2019-11-04T00:00:00.000 | [
"Physics",
"Chemistry"
] |
REGARDING LITERATURE ON ATMOSPHERIC CORROSION OF WEATHERING STEELS
Extensive research work has thrown light on the requisites for a protective rust layer to form on weathering steels (WS) in the atmosphere, one of the most important being the existence of wet/dry cycling. However, the abundant literature on WS behaviour in different atmospheres can sometimes be confusing and lacks clear criteria regarding certain aspects that are addressed in the present paper: What corrosion models best fit the obtained data? How long does it take for the rust layer to stabilize? What is the morphology and structure of the protective rust layer? What is an acceptable corrosion rate for unpainted WS? What are the guideline environmental conditions, time of wetness (TOW), SO2 and Cl, for unpainted WS? The paper makes a review of the bibliography on this issue.
INTRODUCTION
Weathering steels (WS), also known as low-alloy steels, are mild steels with a carbon content of less than 0.2% wt, to which mainly Cu, Cr, Ni, P, Si and Mn are added as alloying elements to a total of no more than 3.5% wt [1].The enhanced corrosion resistance of WS is due to the formation of a dense and well-adhering corrosion product layer known as patina.Besides possessing greater mechanical strength and corrosion resistance than mild steel, the patina is also valued for its attractive appearance and selfhealing abilities.The main applications for WS include civil structures such as bridges and other load-bearing structures, road installations, electricity posts, utility towers, guide rails, ornamental sculptures, façades, roofing, etc.
The recent introduction of high performance steel, a new high-strength WS that does not require painting, has dramatically increased the number of steel bridges being built throughout the world, which has approximately trebled in the last ten years and now accounts for more than 15% of the market [2].WS is an attractive material that reduces the life cycle cost of steel structures, which remain in service for long periods of time [3].
Extensive research work has thrown light on the requisites for the protective rust layer to form.It is now well accepted that wet/dry cycling is necessary to form a dense and adherent rust layer, with rainwater washing the steel surface well, accumulated moisture draining easily, and a fast drying action.Surfaces protected from the sun and rain (sheltered) tend to form loose and poorly compacted rust, while surfaces freely exposed to the sun and rain produce more compact and protective rust layers.The structures should be free of interstices, crevices, cavities and other places where water can collect, as corrosion would progress without the formation of a protective patina.It is also not advisable to use bare weathering steels in indoor atmospheres due to the lack of alternate wetting and drying cycles which are necessary to physically consolidate the rust film, or in marine atmospheres where the protective patina does not form.
However, the abundant literature can be sometimes confusing and lacks clear criteria regarding certain concepts that are addressed in the present paper, namely: − What laws best fit the atmospheric corrosion of WS?
− How long does it take to reach a steady state (stabilization of the rust layer) in which the corrosion rate remains practically constant?− What is the morphology and structure of the protective rust layer?− What is an acceptable corrosion rate for the use of unpainted WS? − What are the guideline environmental conditions (TOW, SO 2 and Cl -) for the use of unpainted WS?Each of these items is reviewed below.
CORROSION OF WEATHERING STEEL WITH EXPOSURE TIME
As the use of weathering steels in civil engineering became more common it became necessary to estimate in-service corrosion penetration.
Corrosion penetration data is usually fitted to a power model involving logarithmic transformation of the exposure time and corrosion penetration.This power function (also called the bilogarithmic law) is widely used to predict the atmospheric corrosion behaviour of metallic materials even after long exposure times, and its accuracy and reliability have been demonstrated by a great number of authors [4][5][6][7].
Legault et al. [5,8] noted that atmospheric corrosion of WS in industrial and marine environments may be described by the expression: or its logarithmic transformation: where C is the corrosion after time t, and A and n are constants.
Pourbaix [6] also stated that the bilogarithmic law is valid for different types of atmospheres and for a number of materials, and is helpful in extrapolating results of corrosion up to 20-30 years from four-years test results.
The n value can provide a criterion for gauging long term atmospheric corrosion susceptibility.It gives a measure of the resistance to transport processes within the corrosion product oxide once it has formed [8].When n is close to 0.5, it can result from an ideal diffusion-controlled mechanism when all the corrosion products remain on the metal surface.This situation seems to occur in slightly polluted inland atmospheres.On the other hand, n values of more than 0.5 arise due to acceleration of the diffusion process (e.g. as a result of rust detachment by erosion, dissolution, flaking, cracking, etc.).This situation is typical of marine atmospheres, even those with low chloride contents.Conversely, n values of less than 0.5 result from a decrease in the diffusion coefficient with time through recrystallisation, agglomeration, compaction, etc. of the rust layer [9].
In the special case when n = 1, the mean corrosion rate for one-year exposure is equal to A, the intersection of the line on the bilogarithmic plot with the abscissa t =1 year.
There is no physical sense in n > 1 as n = 1 is the limit for unimpeded diffusion (high permeable corrosion products or no layer at all).Values of n > 1 occur practically as exceptions, due, for instance, to outliers in the mass loss determinations.As a rule, n < 1.Therefore, n could be used as an indicator for the physico-chemical behaviour of the corrosion layer and hence for its interactions with the atmospheric environment.The value of n would thus depend both on the metal concerned, the local atmosphere, maximum exposure time and the exposure conditions.
On the other hand, the parameter A provides a criterion for gauging short term atmospheric corrosion susceptibility.It provides a measure of the inherent reactivity of a metal surface as reflected in the tendency for that surface to produce a corrosion product layer in a short term atmospheric exposure [8].
McCuen et al. [10] proposed to improve the power model setting numerically A and n coefficients with the nonlinear least-squares method directly to the actual values of the variables C and t, not the logarithms of the variables, since the logarithmic transformation give too much weight to the penetration data for shorter exposure.This eliminates the overall bias, and more accurately predicts penetration for longer exposure times.They call numerical power model to this new model which has the same functional form as the bilogarithmic model.
Nevertheless, they saw that WS corrosion penetration data revealed behaviour differences that could not all be explained by the parabolic model, and thus preferred a composite model (power-linear model) consisting of a power function for short exposure times, up to 3 to 5 years, followed by a linear function for longer exposure times.This model is similar to that used to develop standard ISO 9224 [11], which envisages two exposure periods with different corrosion kinetics.In the first period, covering the first ten years of exposure, the growth law is parabolic (average corrosion rate, r av ), while in second period, for times of more than 10 years, the behaviour is linear (steady state corrosion rate, r lin ).
ISO 9224 [11] offers information on guiding corrosion values for carbon steel and weathering steel (Table 1) in each time period according to the atmospheric corrosivity as defined in ISO 9223 [12].The guiding corrosion values are based on experience obtained with a large number of exposure sites and service performances.
The question of whether this law provides a better prediction of WS corrosion cannot be fully answered until a greater volume of data is available for analysis, with reference to exposure times of at least 20 years.McCuen and Albrecht compared both models (the power model and the power-linear model) using atmospheric corrosion data reported for WS in the United States and concluded that the experimental data fitted the powerlinear model better than the power model and thus provided more accurate predictions of long-term atmospheric corrosion.[13] Finally, Klinesmith et al. [14] mention that all variation related to environmental conditions appears as error variation in the time-dependent models for models that predict corrosion loss as a function of time only.Further, time-dependent models will yield inaccurate predictions when used to estimate corrosion loss in environments that are different from the environment where the model was calibrated.To overcome the problem noted, they propose a model that incorporates multiple environmental factors such as: TOW, SO 2 , Cl -and temperature (T): where A, B, D, E, F, G, H, I, J and T 0 are empirical coefficients.
The model was formulated for different metals and the results indicate that it was reliable for use in a broad range of conditions or locations.
In spite of that, recent studies on the long-term atmospheric corrosion of WS continue to use the power function because of its simplicity [6,15,16], although ignoring the linear part will introduce considerable error in thickness estimates for long exposure times.
RUST LAYER STABILISATION TIME
Information on this aspect is highly erratic and variable, going from claims that a protective patina can be seen to be forming after as little as 6 weeks exposure in environments with low pollution to reports of stabilisation times of "a few years" , one year , 2-3 years , 8 years, etc.
The time taken to reach a steady state of atmospheric corrosion will obviously depend on the environmental conditions of the atmosphere where the steel is exposed, and to address this issue it is important to have abundant information on the effect of climatic and atmospheric pollution variables on WS corrosion resistance.However, the fact is that unfortunately there is no solid grounding (supported by ample experimentation) upon which to base relationships between atmospheric exposure and WS corrosion variables.
Despite the scarcity of information, a review has been made of published field data on the atmospheric corrosion of WS in different atmospheres, especially for long exposure times.
Table 2 offers a compendium of data encountered in the literature which may be useful in this respect.One factor of uncertainty is that atmospheres are mostly classified in purely qualitative terms (rural, urban, industrial, or marine), based on a subjective appreciation of pollution factors and omitting the humidity variable.On the other hand, although there is a large amount of data on behaviour in the first years of exposure, there is an ostensible lack of information on exposure times of more than 20 years.
The plotting of corrosion versus exposure time has allowed to estimate the time necessary for stabilisation (steady state) of the rust layer, which would indicate the time necessary for the formation of protective layers (column 6).Indeed, a look at Table 2 allows observing that after very short exposure times (3-5 years) stabilised (protective) rust layers are commonly formed in rural and urban atmospheres.Longer exposure times (5-10 years) are usually required in industrial atmospheres.With regard to marine atmospheres, although little information is available, corrosion rates are seen to be higher and the time taken to reach the steady state, if it is reached, tends to be longer, in excess of 15 years.
Table 2 also offers important information on aspects such as the effect of the type of atmosphere on WS corrosion, specifically, about the corrosivity category (column 5) as a function of 1 st year corrosion, long term WS corrosion rates (column 7), and the relationship between carbon steel corrosion and WS corrosion (column 8), aspects on which reported data in the literature are highly variable.
In relation to the influence of the type of atmosphere in the corrosion of weathering steel, as in the case of carbon steel, it can clearly be seen how WS corrosion rises going from a practically pollution-free rural atmosphere (C2) to an urban atmosphere (C2-C3) and from there to industrial and marine atmospheres (C3-C4).
Particular mention should be made of the effect of atmospheric SO 2 pollution on WS corrosion, where the existing literature is rather confusing, from those who say that WS is less sensitive to SO 2 than carbon steel, especially for long exposure time, to those others who believe that "weathering steels need access to SO 2 or sulphate-containing aerosols to improve their corrosion resistance" [17], or "it is generally accepted that a low but finite concentration of SO 2 in the atmosphere can actually assist the formation of a protective layer on WS".In the first case, Satake and Morosishi [18] report a close relationship between corrosion losses and SO 2 contents in the air in the first year of exposure, but which disappears in the fifth year of exposure.On the contrary ISO 9223 [12] notes that "in atmospheres with SO 2 pollution a more protective rust layer is formed".The literature also reports that copper in WS can form small amounts of relatively insoluble copper hydroxy sulphates, such as [Cu 4 (SO 4 )(OH) 6 ] or [Cu 3 (SO 4 )(OH) 4 ].These compounds can precipitate in pores of the rust layer, thereby improving the barrier effect of the patina [17] .
Studies into the effect of SO 2 pollution in the atmosphere on WS corrosion are very scarce.Perhaps the most important information in this respect is that obtained in a 8year study performed at numerous locations within the framework of the UNECE International Cooperative Programme on effects on materials, including Historic and Cultural monuments [19].Figure 1 plots the WS corrosion rate after 8 years of exposure versus the SO 2 concentration in the atmosphere, and a clear effect of SO 2 in the atmospheric corrosion of WS is observed.According to Leygraf, it seems that a certain amount of deposited SO 2 is beneficial.However, large amounts result in intense acidification of the aqueous layer, triggering dissolution and hindering precipitation [17].
With regard to marine atmospheres, although little information is available (see Table 2), corrosion rates are seen to be higher and the time taken to reach the steady state, if it is reached, tends to be longer, in excess of 15 years as it was mentioned previously.The action of chloride ions in this type of atmosphere seems to hinder the formation of protective rust layers, thus impeding the use of conventional WS.
Another aspect about which bibliographic information is highly variable is the relationship between mild steel corrosion rate and WS corrosion rate for different exposure times and atmosphere types.It is common to find general statements in the literature such as: "WS has to 4, 5 to 8, 10, etc., times more corrosion resistance than carbon steel", or "in comparison to carbon steels, WS may have corrosion rates of more than one order of magnitude less, depending on the environmental conditions".That relationship obviously depends again on the environmental conditions where both materials are exposed and the time of exposure.For this reason, the last column in Table 2 shows the relationship R between carbon steel (CS) corrosion and conventional WS corrosion for the longest exposure time in each of the atmospheres for which information is available in the reviewed literature.The average values yield relationships of 2.3 (rural-urban atmospheres), 2.9 (industrial atmospheres) and 3.2 (marine atmosphere).The lack of precision is great, and a certain tendency is seen for the value of this relationship to rise with time, which means that the beneficial effects of WS are increasing.It is also noted that the reduction in WS corrosion seems to be greater in the most aggressive atmospheres (industrial or marine).
MORPHOLOGY OF THE PROTECTIVE RUST LAYER
The formation of the protective rust layer on weathering steels is not yet completely understood.In the late 1960's researchers suggested that the stratification of rust layers was an intrinsic property of protective rusts formed on WS.The work of Okada et al.
[20] is perhaps the most commonly cited in this respect, where the authors show that unlike carbon steel, upon which only one stratum of rust is formed, on WS two strata of rust may be observed after a certain exposure time, consisting of an internal layer with protective characteristics and an unprotective outer layer.Similarly, Graedel [17] and Zhang et al. [9] state that the structure of rust on WS is different from that on iron or carbon steels.It is characterised by a double-layer structure, with the inner phase providing a greater barrier to oxygen and water than the outer phase.The outer phase is flaky and poorly adherent, whereas the inner phase adheres well.
Microscopic observation performed by Yamashita et al. [21] and Okada et al. [20], found that the rust layer can be divided into two layers: one corresponds to the outer layer which is optically active, and the other is the inner layer which is optically isotropic (darkened).On the other hand, the surface rust formed on the mild steel consists of the mottled structure consisting of the optically active and isotropic corrosion products.The optically isotropic layer was mainly composed of amorphous spinel-type iron oxide and the optically active layer was γ-FeOOH.Raman et al. [22] and Suzuki et al. [23] described the outer layer of WS contains several different crystalline oxyhydroxides, including lepidocrocite (γ-FeOOH), goethite (α-FeOOH), akaganeite (β-FeOOH), feroxyhyte (δ-FeOOH), maghemite (γ-Fe 2 0 3 ), magnetite (Fe 3 O 4 ), and ferrihydrite (Fe 5 HO 8 .4H 2 O), and an inner region, consisting primarily of dense amorphous FeOOH with some crystalline Fe 3 O 4 (Figure 2).
Dillmann et al. [24] studied in detail that the major phases of the rust layers are magnetite, goethite and lepidocrocite.Lepidocrocite seems to be more present in the outer layer and goethite seems to be the major constituent of the inner layer.
Furthermore, Okada et al. [9] pointed out that the inner rust layer is enriched in some kind of alloying elements just like Cr, Cu, and other elements, whereas the outer layer is distributed with cracks and pores, could not inhibit the entrance of the corrosive electrolyte.
Now it is generally agreed that both CS and WS form rusts that tend to stratify with exposure time [25].Both common carbon steels and WS present a rust layer that is in turn composed of two sublayers, a reddish outer layer and a dark grey inner layer (polarised light).This stratification is independent of the degree of protection afforded by the rust.The composition and morphology of the protective patina formed on WS is very different to the coating formed on carbon steel.The difference between the rust layers formed on carbon steel and on WS is that the α-phase (goethite phase) on the latter forms a densely packed and uniform layer of nanometre-sized particles, which are closely attached to the underlying steel substrate.On carbon steel, however, the distribution of phases is more heterogeneous, resulting in a rust layer with a mottled structure.The corrosion protection ability of WS is mainly attributed to this dense αphase whose formation is stimulated by dry-wet-dry cycling [3].
According to Yamashita and Uchida [26], the protective rust layer on WS is usually formed spontaneously after a certain number of years of exposure.Until the protective ability of the rust layer emerges, the WS corrosion rate is not especially low.
Furthermore, the protective rust layer cannot form in coastal environments where the amount of airborne sea-salt particles is relatively high.The higher the chloride deposition rate in marine atmospheres, the greater the degree of flaking observed, with loosely adherent flaky rust favouring rust film breakdown (detachment, spalling) and the initiation of fresh attack.The morphologic characteristics of the protective patina will therefore depend on the type of environment (rural, urban, industrial or marine), WS composition, years of exposure, relative humidity, temperature and pollutants (SO 2 , Cl -, etc.) as the main factors governing the formation and transformation of the protective layer.
STEEL
In 1960 Larrabee and Coburn [27] suggested that an average WS corrosion loss of 2-3 mils (25-75 µm) in 15 years in a given atmosphere (i.e.1.7-5 µm/y) would be sufficiently low for this steel to be used in the atmosphere without being painted.
In Japan, conventional WS specified as Japan Industrial Standard G 3114 SMA (JIS-SMA Weathering Steel), which is almost the same as that first commercialised by US Steel Corporation in the 1930s, can be used for bridges in environments where less than 0.3 mm corrosion loss per side is expected in 50 years of exposure, i.e. 6 µm/y [28].It is important to note that 50 years does not mean bridge lifetime, and that 0.3 mm of corrosion loss does not define the criteria of structural stability.These values are used to define the corrosivity of the atmosphere, and in general steel structures can be said to be sufficiently durable to accommodate a great deal of corrosion before any risk of collapse.Conventional WS is currently considered in Japan to be appropriate for bridges in environments where corrosion loss on one side of a girder is less than 0.5 mm in 100 years of exposure, i.e. 5 µm/y [29].
In USA, according to Cook [30], the acceptable corrosion rate for weathering steel in medium corrosivity locations is 120 µm maximum for 20 years of exposure, i.e. 6 µm/y again.Due to the development of rust patina during the lifetime of a structure that incorporates unpainted WS, a corrosion allowance should be made on each exposed surface, representing a loss of thickness of the material used for structural purposes [31]: • For atmospheric conditions defined by ISO 9223 [12] as class C1, C2 or C3 ("mild" environments for WS) the corrosion allowance should be 1 mm per surface, while for class C4 on C5 ("Severe" environments for WS) the corrosion allowance shall be 1.5 mm per surface.• For the "interior" surface of box-sections the allowance shall be 0.5 mm.
Following the same criterion as Japan and USA for the use of unpainted WS allowing a maximum atmospheric corrosion rate for steel of 6 μm/y in long-term exposures, the seventh column in Table 2 presents information on this aspect.According to the information contained in Table 2, conventional WS may be used in rural and urban areas without excessive SO 2 pollution where the WS corrosion rate at the end of the exposure period is usually less than 6 μm/y, however, it should not be used in industrial or marine atmospheres because long term WS corrosion rate has higher values, usually in excess of 6 μm/y.It should nevertheless be noted that the available information corresponds to relatively short exposure times in which, as has been commented above, WS still presents high corrosion rates and where protective rust layers have not perhaps yet had time to form.
Japan
An application guideline for unpainted WS has been prepared based on the results of surveys and examinations, including the long-term exposure tests carried out from 1981 to 1993 by three organisations: the Public Works Research Institute of the Construction Ministry, the Japan Association of Steel Bridge Construction, and the Kozai Club [28].
Figure 3 shows the relationship between atmospheric salinity and WS corrosion (per exposed surface), differentiating between two zones according to the adherent or nonadherent nature of the rust formed.Salinity measurements were obtained by the gauze method (JIS-Z-2381) [32].Bearing in mind the maximum corrosion rate permitted for the use of WS, 6 μm/y, a NaCl critical level of 0.05 mg/(dm 2 d) or 3 mg/(m 2 d) of Cl -is established for airborne salt [33].At the present time the NaCl limit is set at 0.1 mg/(dm 2 d) or 6 mg/(m 2 d) of Cl -and there is even talks of a range of 0.1-0.2mg/(dm 2 d) of NaCl or 6-12 mg /(m 2 d) of Cl -, depending on the conditions of usage [34].
United Kingdom
The United Kingdom Department of Transport Standard BD 7 [31], "Use of WS for highway structures", suggested in 1981 that uncoated WS should not be used when: 1.The chloride level exceeds 0.1 mg/(dm 2 d) or 10 mg/(m 2 d).
2. The yearly average time of wetness exceeds 60%.should be avoided for use of uncoated WS.
USA
According to United States government guidelines [35], the following conditions should be avoided if WS is used: Chloride deposition > 50 mg/(m 2 d), sulphur dioxide deposition > 168 mg/(m 2 d), as United Kingdom (Standard BD 7 [31]), and average time of wetness > 60%.The time of wetness is here defined as the time during which the relative humidity is > 80% and the temperature is > 0º C. Environmental data has been obtained in accordance with ISO 9223 [12].Unlike in Japan, atmospheric salinity data has been obtained by the wet candle method (ISO 9223) [12].The relationship between the data obtained with these two techniques, the gauze method and the wet candle method [36], is displayed in Figure 4 for NaCl levels between 0.013 y 3.8 mg/(dm 2 d), obtained by the wet candle method [37].
It shows that the wet candle method is more sensitive to the presence of NaCl, capturing greater amount of aerosol that the gauze method for NaCl levels higher than 0.05 mg/(dm 2 d).
Unlike the effect of atmospheric salinity on WS corrosion, for which valuable information was obtained in 12-year exposure tests carried out to classify the severity of environments in Japan (Figure 3), no similar study concerning the effect of atmospheric SO 2 has been found in the literature.Nevertheless, the graph in Figure 1 draws on interesting results obtained by UNECE [19] for the case of weathering steel exposed for 8 years in numerous atmospheres in Europe and America.The graph excludes sites where the atmospheric salinity or time of wetness is excessively high for protective rust layers to form (≥ 6 mg/L of Cl -in rain water and TOW in excess of 5500 h/y).This graph allows a critical level of around 20 mg/(m 2 d) of SO 2 to be established, in accordance with the criterion for the use of unpainted WS in atmospheric exposure (6 µm/y).
In another study performed in the former Czechoslovakia by Knotkova et al. at different testing sites with high SO 2 pollution, a SO 2 critical level of 90 mg/(m 2 d) was established for the use of WS in this type of atmospheres [38], much higher than the obtained level from figure 1.
CONCLUSIONS
From the bibliographic review performed the following conclusions may be drawn: • Although the power function (C= At n ) seems to provide a good fit of the evolution of atmospheric corrosion of weathering steels with exposure time, the power-linear model provides better predictions for long-term exposure times.
• The time taken for the rust layer to stabilise obviously depends on the environmental conditions of exposure: 3-5 years for rural or urban atmospheres; 5-10 years for industrial atmospheres; and > 15 years for marine atmospheres, if a steady state is ever actually reached in this type of atmosphere.Atmospheres polluted with SO 2 , if not strongly polluted (> 90 mg/(m 2 d) of SO 2 ), promote earlier stabilisation of the rust layers, (≈ 3 years) possibly due to the sealing of internal porosity in the rust by corrosion products formed between SO 2 and copper in the WS.
• With regard to the morphology of the rust layers formed in the atmosphere, two sublayers are formed on both carbon steel and WS, the innermost being responsible for the protective properties of the rust in the case of WS.
• There seems to be general agreement on the allowable corrosion rate (6 µm/y) for the use of unpainted WS in the atmosphere.
• There is a lack of unified criteria on guideline environmental conditions (SO 2 , Cl -) for the use of unpainted WS.The chloride level allowed in Japan seems to be excessively low (6
Figure 3 .
Figure 3. Influence of air-born salts (atmospheric salinity) on the stability of the protective layer of corrosion products formed on WS.
Figure 4 .
Figure 4. Relationship between measurements obtained by the wet candle method and the gauze method mg/(m 2 d)) and excessively high (≥300 mg/(m 2 d)) in United Kingdom, while the SO 2 levels allowed in the United Kingdom and USA seem excessively high (200 and 168 mg/(m 2 d)),
Table 1
Guiding corrosion values for corrosion rates (r av, r lin ) of carbon steel and weathering steel in atmospheres of various corrosivity categories.
Table 2
Compendium of bibliographic information on evolution of atmospheric corrosion of conventional WS with time of exposure. | 6,672.4 | 2012-01-01T00:00:00.000 | [
"Materials Science"
] |
A Transgenic Prox1-Cre-tdTomato Reporter Mouse for Lymphatic Vessel Research
The lymphatic vascular system plays an active role in immune cell trafficking, inflammation and cancer spread. In order to provide an in vivo tool to improve our understanding of lymphatic vessel function in physiological and pathological conditions, we generated and characterized a tdTomato reporter mouse and crossed it with a mouse line expressing Cre recombinase under the control of the lymphatic specific promoter Prox1 in an inducible fashion. We found that the tdTomato fluorescent signal recapitulates the expression pattern of Prox1 in lymphatic vessels and other known Prox1-expressing organs. Importantly, tdTomato co-localized with the lymphatic markers Prox1, LYVE-1 and podoplanin as assessed by whole-mount immunofluorescence and FACS analysis. The tdTomato reporter was brighter than a previously established red fluorescent reporter line. We confirmed the applicability of this animal model to intravital microscopy of dendritic cell migration into and within lymphatic vessels, and to fluorescence-activated single cell analysis of lymphatic endothelial cells. Additionally, we were able to describe the early morphological changes of the lymphatic vasculature upon induction of skin inflammation. The Prox1-Cre-tdTomato reporter mouse thus shows great potential for lymphatic research.
Introduction
The lymphatic vascular system has an important physiological role in the maintenance of tissue fluid homeostasis, the transport of antigens and immune cells from the periphery to lymph nodes where the adaptive immune response occurs, and the intestinal absorption of dietary lipids [1]. Moreover, the lymphatic system contributes to a number of pathological processes such as primary and secondary lymphedema, cancer metastasis, inflammation and transplant rejection [2]. In some pathological conditions such as cancer dissemination and transplant rejection, the inhibition of lymphangiogenesis, the growth of new lymphatic vessels (LVs) from pre-existing ones, has been considered as a new therapeutic approach [3]. On the other hand, the activation of lymphangiogenesis might be beneficial for the treatment of lymphedema and chronic skin inflammation [4]. Given the importance of lymphangiogenesis as a therapeutic target and the need for further insights into the contribution of lymphangiogenesis to pathological conditions, substantial efforts have been invested in generating mouse models that allow the visualization of LVs in vivo and the isolation of lymphatic endothelial cells (LECs) for transcriptome analyses. To date, several transgenic mouse lines for fluorescent detection of LVs have been described. These lines are based on gene-targeted bacterial artificial chromosome (BAC) transgenic constructs for the expression of either GFP [5], mOrange [6] or tdTomato [7] under Prox1 transcriptional control. The expression of an EGFP-luciferase dual fluorescent-bioluminescent reporter under the control of Flt4 (vascular endothelial growth factor 3) regulatory elements has also been reported [8]. Additional LV detection techniques used in mice include positron emission tomography (PET) combined with radiolabeled anti-LYVE-1 antibodies [9], the injection of liposomal preparations of indocyanine green [10] and the use of PEG-conjugated near infrared dyes [11]. Here, we describe the generation of a tdTomato reporter mouse line and show the specific labeling of the LVs after crossing with a Prox1-Cre-ERT2 line [12]. For the first time, we show the applicability of this lymphatic-specific reporter mouse to intravital microscopy (IVM) of dendritic cell (DC) migration and studies of LV morphology during the early phases of cutaneous inflammation, as well as LEC single cell analysis.
Our findings indicate that this new mouse model has a great potential for studying the lymphangiogenic process and related functions in physiological and pathological conditions.
Cloning and in vitro testing of the tdTomato reporter construct
The tdTomato coding sequence was amplified by PCR (forward primer 5'-ATG GTG AGC AAG GGC GAG GA-3', reverse primer 5'-AAC AAA AGC TGG GTA CCG GGC-3') and cloned into a pCMVbASIRE construct [13] (kindly provided by Dr. Sabine Werner, ETH Zurich) to obtain the pCMVbASIRE-tdTomato plasmid. The floxed-STOP cassette was excised by transformation of MM294-Cre E. coli as previously described [14]. Efficient recombination of the STOP cassette was tested by restriction digestion analysis. HEK293 cells were transiently transfected with pCMVbASIRE-tdTomato or the Cre-recombined plasmid using the PEI (polyethylenimine) method and analyzed with an inverted fluorescent microscope (Zeiss) 48 hours after transfection.
Generation of the lox-STOP-lox (LSL)-tdTomato reporter mouse pCMVbASIRE-tdTomato was digested with AclI and the 4.8 Kbp fragment (LSL-tdTomato) was purified using the QIAquick gel extraction kit (QIAGEN) and was eluted in sterile water. LSL-tdTomato reporter mice were generated by pronuclear microinjection of the AclI DNA fragment into C57BL/6N-fertilized oocytes. Founders were identified by PCR of genomic DNA using the following primers: FOR 5'-GCG TTA CAT AAC TTA CGG TAA ATG GCC C-3', REV 5'-GGG CGT ACT TGG CAT ATG ATA CAC TTG ATG-3'. Relative transgene copy number was estimated by real-time PCR on genomic DNA using SYBR green and the following primer pair for the tdTomato transgene: FOR 5'-GCG TTA CAT AAC TTA CGG TAA ATG GCC C-3', REV 5'-GGG CGT ACT TGG CAT ATG ATA CAC TTG ATG-3'. The following primer pair was used to amplify a control endogenous gene (podoplanin): FOR 5'-AGG GTA TGA AAG CCC CAA GC-3', REV 5'-GAG ATA CCC AGG GCG AGG TT-3'. Both reactions were found to have the same efficiency; therefore, delta Ct (tdTomato Ct-podoplanin Ct) was used to estimate the relative copy number.
Mice, breedings and tamoxifen administration
LSL-tdTomato reporter mice were crossed with keratin 5 (K5)-Cre-ERT2 mice obtained from the MMRRC repository (University of Missouri) [15]. Double transgenic animals were identified by genotyping. The back skin of double transgenic animals and wild-type littermates (6-8 weeks old) was shaved and painted with 1 mg 4-hydroxytamoxifen (4-OHT, Sigma) dissolved in ethanol for 5 consecutive days. LSL-tdTomato reporter mice and tdRFP reporter mice [16] were crossed with Prox1-Cre-ERT2 mice [12]. Double transgenic animals were identified by genotyping. Double transgenic adult animals and wild-type control littermates were intraperitoneally injected with tamoxifen three times a week for two weeks (5 l μg/g body weight, 10 mg/ml tamoxifen in sunflower seed oil, Sigma). Prox1-mOrange2 mice were described previously [6].
Intravital microscopy of dendritic cell migration
Bone marrow derived DCs were generated from the bone marrow of CD11c-YFP mice as described [18]. 5 x 10 5 bone marrow DCs were injected into the ventral side of the ear pinna. After 4-6 hours, mice were anesthetized with medetomidine (1 mg/kg) and ketamine (75 mg/kg), and the ears were depilated with VEET cream. Intravital microscopy was performed as described by Nitschke et al. [18]. Briefly, mice were transferred to a custom-made microscopy stage and placed into a 37°C incubator chamber installed on the microscope platform. Imaging was performed on a Zeiss LSM 710 inverted confocal microscope (Carl Zeiss AG). A 1 hour time-lapse video including Z-Stacks every 30 seconds was acquired using a 20x 0.8 NA Plan Apochromat objective. An Argon laser (488 nm, for YFP excitation) and a solid-state laser (561 nm, for tdTomato excitation) were used for image acquisition. Videos were analyzed with IMARIS software (v7.1.1, Bitplane, Zurich, Switzerland).
Ethics statement
All mice used in this study were bred and housed in the animal facility of ETH Zurich. Experiments were performed in accordance with animal protocols 149/2008, 237/2013 and 190/2011 approved by the local veterinary authorities (Kantonales Veterinäramt Zürich).
Immunofluorescence analysis of tissue whole mounts
Mice were sacrificed, hair was removed with depilation cream and ears were harvested and split into two halves along the cartilage. Ear tissues or lymph nodes were fixed for two hours in 4% PFA at 4°C, washed for 1 hour in PBS and incubated in blocking solution (5% normal donkey serum, 1% BSA, 0.01% Triton-X 100 in PBS) for 4 hours at room temperature. Subsequently, samples were incubated overnight at room temperature with primary antibodies in blocking solution: rabbit anti-RFP (1:300, Rockland), goat anti-LYVE-1 (1:200, R&D Systems), goat anti-Prox1 (1:200, R&D Systems). After extensive washes in PBS, samples were incubated for 2 hours at room temperature with AlexaFluor 488, 594 or 647-conjugated secondary antibodies raised in donkey (1:200, Invitrogen). After at least 2 hours of washes in PBS, samples were mounted with Vectashield mounting media (Vector) on glass slides. Whole mount z-stacks were acquired using an LSM 710 FCS confocal microscope equipped with a 10x 0.3 NA EC Plan-Neofluar objective using ZEN software (Zeiss), and were processed with ImageJ software.
FACS analysis of ear single cell suspensions
Ears were digested with collagenase IV (Invitrogen) as described [19]. Ear single cell suspensions were stained with the following antibodies: rat anti-CD45-APC-Cy7
Ultramicroscopy analysis of lymph node whole mounts
Mice were sacrificed and lymph nodes were dissected and fixed in 4% PFA. After a wash in PBS, samples were permeabilized in 0.5% Triton-X 100 in PBS. After a wash in PBS, samples were blocked in blocking solution (1% BSA, 0.1% Tween-20 in PBS) and subsequently incubated with primary antibodies diluted in blocking solution. The antibodies used were rabbit anti-RFP (1:100, Rockland) and goat anti-LYVE-1 (1:100, R&D Systems). Samples were washed in PBS 0.1% Tween-20 (PBS-T) and incubated with AlexaFluor 488 and 647-conjugated secondary antibodies (1:200, Invitrogen) in blocking solution. After a wash in PBS-T, samples were embedded in 1% ultrapure low melting point agarose, dehydrated with methanol and cleared with BABB (benzyl alcohol/ benzyl benzoate 1:2) [20]. Briefly, samples were dehydrated in a series of 50%, 70%, 95% (at least 1 hour each) and 100% methanol (overnight) and cleared for 1 hour in 50% BABB in methanol and finally in BABB for at least 8 hours. Samples were stored in BABB at 4°C in the dark until image acquisition. Images were acquired using a LaVision Ultramicroscope (LaVision BioTec, Bielefeld). Stacks were captured using 2 μm step size and 2.5x magnification. Maximum projection of a 260 μm z-stack was obtained using the ImageJ software.
Generation of the tdTomato reporter mouse
The tdTomato coding sequence was cloned under the control of a CMV-enhancer, β-actin promoter (CAG) and downstream of a transcriptional/translational-floxed stop cassette (LSL) (Fig 1A), allowing a strong expression of the tdTomato transgene upon Cre recombination of the floxed STOP cassette. In order to test the construct in vitro, HEK293 cells were transiently transfected with the plasmid with or without the STOP cassette. TdTomato was expressed specifically upon recombination of the floxed STOP cassette (Fig 1B), confirming that the construct was efficient and not leaky. An AclI fragment was utilized for the generation of a transgenic mouse line by injection into the pronucleus of fertilized C57BL/6N oocytes. Five founders were identified by PCR of genomic DNA ( Fig 1C) and designated as C57BL/6N-Tg (CAG-tdTomato)581-585Biat. Three founders (number 2, 4 and 26) bred normally and transmitted the transgene to the progeny with Mendelian distribution. The relative copy number of the transgene was estimated by real-time PCR of genomic DNA in comparison with a control gene (podoplanin). Founder 4 carried the highest amount of copies, founder 2 the least and founder 26 an intermediate number of copies ( Fig 1D).
TdTomato is expressed in the skin upon crossing of the LSL-tdTomato reporter mice with a K5-Cre-ERT2 line To test the expression of tdTomato in vivo upon recombination of the STOP cassette, and to select the best founder for further experiments, we crossed the LSL-tdTomato reporter mice with a mouse line expressing Cre recombinase under control of the skin-specific keratin 5 promoter in an inducible fashion (K5-Cre-ERT2) [15]. Cre expression was induced in double transgenic and wild-type littermate adult mice by applying 4-hydroxytamoxifen (4-OHT) in ethanol on the shaved back skin for 5 consecutive days (Fig 2A). Before treatment and two days after the last application, mice were imaged using an IVIS spectrum. Imaging of representative animals derived from founder line 2 clearly demonstrated tdTomato expression in the treated back skin (Fig 2B, dashed line). We also observed systemic activation in other skin compartments (i.e. tail and paws, Fig 2B) since 4-OHT is absorbed through the skin and is readily active. These results confirmed that the LSL-tdTomato reporter mouse is suitable for imaging of tdTomato expression upon genetic recombination in vivo. Founder line 2 showed a strong and complete pattern of expression as compared to the other founders (data not shown), and was therefore used for all consecutive experiments.
TdTomato is expressed in Prox1 positive cells upon crossing of the LSL-tdTomato reporter mice with a Prox1-Cre-ERT2 line The transcription factor Prox1 regulates the development of the lymphatic system and maintains lymphatic identity throughout life [21,22]. We crossed the previously characterized Prox1-Cre-ERT2 mouse line [12] with the LSL-tdTomato reporter line. In order to induce Cre expression, adult mice were injected intraperitoneally with tamoxifen (50 μg/g body weight), three times a week for two weeks and analyzed as early as two days after the last application ( Fig 3A). Tamoxifen-treated mice were imaged using a stereomicroscope. Known Prox1 expressing organs, including the lens (Fig 3B), the heart ( Fig 3C) and the liver (Fig 3D), showed strong fluorescence. LVs were clearly identifiable in different anatomical locations due to their clear tdTomato expression. Fig 3 shows representative stereomicroscopic images of LVs in the mesentery (E), tongue (F), uterus (G), bladder (H), inguinal lymph node (K, L), auricular lymph node (J) and ear skin (I). In many organs, lymphatic valves were readily visible and characterized by a stronger tdTomato fluorescence (arrowheads in Fig 3). Freshly isolated and unfixed lymph node and ear skin samples were also analyzed by confocal microscopy. Fig 3M shows clearly visible tdTomato endogenous fluorescence in the lymph node subcapsular sinus. In the ear skin (Fig 3N), endogenous tdTomato signal was found in LV structures, with a stronger signal in valves (arrowheads in Fig 3N). Some single cells positive for tdTomato were also visible. Taken together, these data show that tdTomato expression faithfully recapitulated Prox1 expression.
TdTomato expression co-localizes with lymphatic markers in different organs
In order to confirm specific tdTomato expression in LVs, we performed whole mount immunofluorescent stainings of PFA fixed ear skin using an antibody against tdTomato (RFP), since the endogenous tdTomato signal could not be preserved after PFA fixation, and the established lymphatic markers Prox1 and LYVE-1. TdTomato co-localized with LYVE-1 positive lymphatic capillaries in the skin (Fig 4A). Moreover, lymphatic collectors, which are characterized by a lower expression of LYVE-1 but retain Prox1 expression in adult mice, were strongly positive for tdTomato (Fig 4A, dashed line). TdTomato positive lymphatic capillaries and collectors were also Prox1 positive (Fig 4B). In ear skin, some tdTomato single cells were also visible, but they did not co-localize with LYVE-1 or Prox1 (Fig 4A and 4B). These single cells were also negative for the major histocompatibility complex (MHC) II expressed by antigen presenting cells, and for the pan-leukocyte marker CD45 (not shown). FACS analysis of ear single cell suspension (Fig 4C) revealed that LECs (CD45-CD31+ podoplanin+) expressed tdTomato. In contrast, blood vascular endothelial cells (BEC, CD45-CD31+ podoplanin-) and leukocytes (CD45+) did not show any tdTomato expression. We additionally performed ultramicroscopic analysis of lymph nodes that were immunostained for LYVE-1 and tdTomato and then were optically cleared (Fig 4D). We found a clear co-localization of the lymphatic marker LYVE-1 with tdTomato ( Fig 4D). The pattern of LVs in the lymph node was prominent in the subcapsular sinus and between the lobes. Further confocal analysis of lymph node whole-mount preparations showed the presence of LYVE-1+RFP+ LVs together with LYVE-1+RFP-cells, presumably macrophages, in the subcapsular sinus ( Fig 4E). Collectively, these data confirm the lymphatic identity of the tdTomato-positive vessel-like structures.
Comparison of the Prox1-Cre-tdTomato mouse with other RFP reporters
In order to compare the tdTomato reporter to other red fluorescent reporters for the imaging of LVs, we directly compared Prox1-Cre-tdTomato mice with Prox1-Cre-ERT2 mice crossed with the previously established reporter line tdRFP [16]. Intravital confocal microscopic analysis of mouse ear skin performed with the same microscope settings and laser power showed considerably brighter LVs in the Prox1-Cre-tdTomato mouse than in the Prox1-Cre-tdRFP mouse (Fig 5A). Only the application of a higher laser power allowed visualization of LVs in the Prox1-Cre-tdRFP mouse (Fig 5B). Multiphoton microscopy was used to reach deeper into the tissue and to image collecting vessels. Also in this setting, the Prox1-Cre-tdTomato mouse performed better than the Prox1-Cre-tdRFP mouse ( Fig 5C).
Additionally, we imaged a previously characterized LV reporter line, namely the Prox1--mOrange mice which express the fluorescent protein mOrange under direct Prox1 transcriptional control [6]. Intravital confocal imaging of ear skin showed clear labeling of LVs (Fig 5D). However, the distribution of the fluorescent signal appeared more nuclear as compared to the Prox1-Cre-tdTomato mouse, as shown in the orthogonal view where the lumen of the LVs was better visualized (Fig 5D).
The Prox1-Cre-tdTomato mouse is a valid tool for intravital microscopy of DC migration One of the major functions of the lymphatic system is the trafficking of DCs from the periphery to the lymph nodes [23]. To test the use of the lymphatic reporter mouse for the analysis of DC migration, Prox1-Cre-tdTomato mice were injected with YFP-expressing DCs and imaged by intravital microscopy (IVM). Endogenous tdTomato-fluorescence was clearly visible in the LVs imaged with the IVM settings (Fig 6A). YFP-DCs were imaged for 1 hour and could be times a week for two weeks). TdTomato expression was analyzed with a stereomicroscope and was detected in the eye (B), heart (C) and liver (D). TdTomato was visible in lymphatic structures in the mesentery (E), tongue (F), uterus (G), bladder (H), ear skin (ripped in half, I), lymph nodes of the neck area (J) and the inguinal lymph node (K). Ex vivo imaging of an inguinal lymph node (L). M Confocal imaging of tdTomato autoflorescence showed lymphatic structures in a freshly isolated lymph node. Maximal intensity projection of a tile scan, z-stack of the lymph node is shown. N Confocal imaging (maximal intensity projection of a z-stack) of tdTomato autoflorescence showed lymphatic structures in a freshly isolated split ear sample. Arrowheads indicate lymphatic valves. Scale bars = 2000 μm (D-C), 1000 μm (J, K), 500 μm (E-I, L), 100 μm (M-N). Images shown are representative of 3 transgenic animals analyzed from 2 independent litters. doi:10.1371/journal.pone.0122976.g003 easily tracked inside the LVs or in the interstitium (S1 Video). Fig 6B shows the tracks of several DC crawling inside LVs. It was possible to image in vivo over time the entry of DC into the lymphatic capillaries. As shown in the representative orthotopic view (Fig 6C) and in the S2 Video, an YFP-positive DC makes first contact with a tdTomato-positive LV and finally enters it, squeezing through the cell-cell openings. These data show the applicability of the Prox1-Cre-tdTomato mouse to the analysis of immune cell trafficking into and within the LVs in vivo.
The Prox1-Cre-tdTomato mouse facilitates the FACS sorting of LECs
LECs finely regulate their gene expression in response to inflammatory stimuli and play an important role in modulating inflammatory responses, as shown by gene expression analyses of sorted LEC from inflamed and uninflamed tissues [4,24]. Until now, staining for multiple antigens is needed in order to sort pure populations of LECs from complex single cell suspensions [25]. We next investigated whether the Prox1-Cre-tdTomato mice might simplify LEC isolation from ear single cell suspensions. We found that about 60% of tdTomato-positive cells were CD45-negative, CD31-positive endothelial cells (Fig 7). An additional CD45-negative, CD31-negative population accounted for about 40% of tdTomato-positive events (Fig 7). Among the endothelial cells, 98% were identified as LEC (CD31-positive and podoplanin-positive, Fig 7), suggesting that a pure LEC population can be obtained from the ears of Prox1-Cre-tdTomato mice by sorting tdTomato-positive, CD31-positive cells, simplifying existing sorting protocols [25]. The identity of the additional tdTomato-positive population remains to be elucidated.
Progressive enlargement of the lymphatic vessel diameter during the early phases of skin inflammation
We next used the Prox1-Cre-tdTomato mouse model to investigate the early morphological changes that the lymphatic vasculature might undergo upon induction of inflammation. To this aim, we induced a contact hypersensitivity reaction to oxazolone in the ears and performed intravital confocal microscopy over time (Fig 8A). We observed a progressive enlargement of the LV diameter during the first 48 hours after induction of inflammation (Fig 8B). However, the total vessel length did not change during the time points analyzed (not shown). Together with the LV enlargement, the ear thickness progressively increased (Fig 8C). Collectively, these observations indicate that enlargement of the vessel diameter represents the earliest morphological change that occurs in LVs upon acute inflammation.
Discussion
In this report, we describe the generation of an inducible tdTomato reporter mouse and its applicability to lymphatic research. The induced Prox1-Cre-tdTomato mice showed bright red fluorescence in LVs and could be successfully applied to in vivo microscopy of DC migration and of LV morphology during acute inflammation, and to FACS analyses of LECs isolated from complex tissues.
TdTomato is the brightest among the red-fluorescent proteins, derived from directed evolution of DsRed [26], and it is therefore well-suited for in vivo imaging and multi-color FACS analysis. Accordingly, we found that the tdTomato reporter mouse had increased brightness of LVs and allowed an increased depth of imaging, as compared to a previously generated tdRFP reporter line [16]. Moreover, tdTomato is more photostable than tdRFP since we observed considerably less photobleaching when performing IVM of DC migration over one hour in Prox1-Cre-tdTomato mice as compared to our previous experience with VE-cadherin-Cre-tdRFP mice [18].
We cloned tdTomato under the control of a strong ubiquitous promoter and a transcriptional/translational-floxed STOP cassette, and generated a transgenic reporter line. In a preliminary study, we crossed the obtained founder lines to an inducible, skin specific, Cre recombinase expressing mouse line: K5-Cre-ERT2. Surprisingly, founder line 2, which was characterized by the least number of copies, had the strongest and most uniform tdTomato expression. Since the reporter was generated by pronuclear microinjection, which results in random transgene integration into the genome, we assume the impact of position effects on the expression pattern. It is possible that the integration sites of founders with higher copy numbers are in genomic regions that disturbed their transcription, or the high copy number itself was responsible for repeat-induced silencing of the transgene [27,28].
In contrast to the previously published BAC-based models for fluorescent imaging of lymphatic vessels [5][6][7], our approach involved the generation of a reporter mouse with a floxed STOP cassette followed by tdTomato, and its crossing with an inducible, lymphatic specific Cre expressing line, allowing not only tissue specificity, but also time specificity of induction. Several lymphatic specific Cre lines have been generated so far. Most of them, however, are characterized by extra-lymphatic Cre expression. A LYVE-1-Cre line expressed Cre recombinase in a subset of blood vascular endothelial cells and leukocytes [29]. A podoplanin-Cre line was also described [30], but the expression of this transmembrane protein is also prominent in stromal cells of lymph nodes and secondary lymphoid organs; therefore, it is not well suited for LV imaging and analysis in the lymph node. Thus, we chose to cross the reporter line to a previously described Prox1-Cre-ERT2 line [12]. This line shows some extra-lymphatic expression of Cre recombinase, in line with the known function of Prox1 as a transcription factor in specific tissues. We detected strong tdTomato expression in the liver, heart and lens, where Prox1 is normally expressed [5,7]. Importantly, in anatomical locations where LVs are usually analyzed, such as the skin and the lymph nodes, tdTomato expression was strong on LVs, as evidenced by co-staining for the lymphatic markers LYVE-1, Prox1 and podoplanin in wholemount preparations and FACS analyses. It is of interest that a yet unknown subpopulation of single cells positive for tdTomato was identified in the skin. As shown by whole-mount stains and FACS analyses, this population was not positive for Prox1, MHC-II or CD45. Characterization of the previously described Prox1-driven lymphatic reporters, namely the ProxTom [7] and the Prox1-GFP mouse lines [5], did not indicate the presence of this single cell population in the skin. The identity of this population is presently not clear and merits further study. Nevertheless, since this cell population appeared to be immobile, its presence did not interfere with the IVM analysis of DC migration.
The direct fluorescent visualization of the lymphatic vasculature in distinct transgenic and knockout mouse models could be advantageous. Since the Prox1-Cre-tdTomato mouse features two alleles transmitted independently (the Prox1-driven Cre recombinase and the inducible tdTomato reporter), this application will require the generation of triple-transgenic mice.
Tissue fixation with PFA required the use of an anti-RFP antibody to detect tdTomato, since the endogenous fluorescence could not be preserved. Other fixation methods such as periodate-lysine-paraformaldehyde might preserve endogenous tissue fluorescence, as described by Truman et al [7]. Moreover, whole mount staining for some antigens might also work omitting the fixation step. However, in many applications, such as IVM, FACS analysis and stereomicroscopic analysis, tissue fixation is not required and the endogenous fluorescence could be easily detected.
Application of the Prox1-Cre-tdTomato mouse for the analysis of DC migration in the ear skin enabled the observation of DCs interacting with LECs and their entry into the LVs. Once inside the vessels, DCs actively migrated and crawled within the LVs. Until recently, DCs were thought to be passively transported by flow from the periphery to the draining lymph nodes once they had entered the LV. Recent studies, however, have shown that after transmigration, DCs actively crawl within the LV [18,31]. DC migration has been shown to be integrin-independent [32], whereas the CCR7-CCL21 axis plays a key function [31]. In these reports, the visualization of the lymphatic vasculature was obtained either by antibody staining [31,32] or by the use of a pan-vascular reporter mouse, the VE-cadherin-Cre mouse line, in which both blood and lymphatic vessels can be visualized [18]. Our model has some advantages compared to the latter ones, since it can specifically visualize LVs and does not require antibody staining. Moreover, the LVs in our model are red-fluorescent and enable the combined visualization of GFP and YFP fluorescent leukocytes. This model could therefore be applied to further IVM studies, aiming to unravel additional cellular players and molecular interactions involved in the complex mechanisms of DC migration.
We also applied our animal model to the simplification of existent LEC FACS sorting protocols [25]. Single-cell sorting is a powerful tool to isolate a pure cell population from a complex tissue digest and to analyze its protein or gene expression. This technique allowed not only the discovery of novel molecules differentially expressed by LECs and BECs [33], but also the investigation of the changes in gene expression that occur in LECs upon different inflammatory stimuli [34]. Our animal model allowed the separation of a pure LEC population from ear single cell suspensions by use of the intrinsic tdTomato fluorescence and staining for the pan-endothelial marker CD31, thereby simplifying the 4-colour staining currently used. Our data are in agreement with a previous report that took advantage of the ProxTom mouse [35].
Finally, the Prox1-Cre-tdTomato mouse allowed to directly visualize and monitor over time by IVM the first morphological changes of LVs that occur during acute inflammation, namely the progressive enlargement of LVs diameter, using a cutaneous contact hypersensitivity model. Until now, most related studies have focused on later time points of inflammation and have analyzed morphological changes in the lymphatic vasculature by histology [17]. The possibility to image the same mouse at different time points also provides the advantage to reduce the number of animals needed for analysis.
Collectively, our data show that the Prox1-Cre-tdTomato mouse model shows bright redfluorescent LVs and has important applications to IVM of leukocyte migration into and within LVs, in vivo studies of LV morphology and ex vivo FACS analyses of LEC. It is therefore a useful novel tool for the study of LVs in physiological and pathological conditions and it will be certainly of great use to lymphatic research.
Supporting Information S1 Video. Intravital microscopy of dendritic cell migration. YFP-DCs were injected into the ear and a frame was acquired every thirty seconds for 1 hour in an inverted confocal microscope. The video shows DCs (green) moving in the interstitium and in lymphatic vessels (red). Video speed: 5 frames per second. (AVI) S2 Video. Intravital microscopy of dendritic cell migration. YFP-DCs were injected into the ear and a frame was acquired every thirty seconds for 1 hour in an inverted confocal microscope. The video shows a DC (green) entering a lymphatic vessel (red). An orthogonal view is provided to enhance visibility of the DC location inside the lymphatic vessel. Video speed: 5 frames per second. (AVI) | 6,574.4 | 2015-04-07T00:00:00.000 | [
"Biology"
] |
6q25.1 (TAB2) microdeletion is a risk factor for hypoplastic left heart: a case report that expands the phenotype
Introduction Hypoplastic left heart syndrome (HLHS) is a rare but devastating congenital heart defect (CHD) accounting for 25% of all infant deaths due to a CHD. The etiology of HLHS remains elusive, but there is increasing evidence to support a genetic cause for HLHS; in particular, this syndrome is associated with abnormalities in genes involved in cardiac development. Consistent with the involvement of heritable genes in structural heart abnormalities, family members of HLHS patients have a higher incidence of both left- and right-sided valve abnormalities, including bicuspid aortic valve (BAV). Case presentation We previously described (Am J Med Genet A 173:1848–1857, 2017) a 4-generation family with a 6q25.1 microdeletion encompassing TAB2, a gene known to play an important role in outflow tract and cardiac valve formation during embryonic development. Affected adult family members have short stature, dysmorphic facial features, and multiple valve dysplasia, including BAV. This follow-up report includes previously unpublished details of the cardiac phenotype of affected family members. It also describes a baby recently born into this family who was diagnosed prenatally with short long bones, intrauterine growth restriction (IUGR), and HLHS. He was the second family member to have HLHS; the first died several decades ago. Postnatal genetic testing confirmed the baby had inherited the familial TAB2 deletion. Conclusions Our findings suggest TAB2 haploinsufficiency is a risk factor for HLHS and expands the phenotypic spectrum of this microdeletion syndrome. Chromosomal single nucleotide polymorphism (SNP) microarray analysis and molecular testing for a TAB2 loss of function variant should be considered for individuals with HLHS, particularly in those with additional non-cardiac findings such as IUGR, short stature, and/or dysmorphic facial features.
Introduction
Hypoplastic left heart syndrome (HLHS) is a severe, complex congenital heart defect (CHD) characterized by hypoplasia of the left ventricle and ascending aorta, an atrial septal defect (either large or restrictive), and a patent ductus arteriosus, which provides the only blood flow to the body. It commonly involves atresia or stenosis of the mitral and aortic valves. The prevalence of HLHS is 1.6 per 10,000 live births, and it accounts for 4-8% of all CHD [1]. HLHS is the most severe abnormality in the spectrum of left-sided obstructive CHDs, though it can also be associated with malformation of the tricuspid and pulmonary valves [2]. Although HLHS can be present in a liveborn child, outcomes are universally fatal during infancy without early surgical intervention. Surgical intervention was first implemented in the 1980s, and now involves multiple staged procedures. The end result is that deoxygenated blood is passively directed to the pulmonary circulation via intraatrial lateral tunnel palliations or more commonly via an extracardiac Fontan circuit (where deoxygenated blood is diverted from the right heart altogether); the right ventricle becomes the systemic ventricle, pumping oxygenated blood through a neo-aorta to the rest of the body (Fig. 1) [1]. Despite surgical advances, HLHS still accounts for 25% of CHD death in infancy, and only 50-70% of affected children live past 5 years of age [2].
The pathogenesis of HLHS is unclear, but there is growing literature supporting a genetic etiology. HLHS is highly heritable, with a 500-fold increased incidence among siblings and a 1000-fold increase if a parent has any form of CHD [3]. Approximately 30% of fetuses with HLHS have genetic syndromes or other extra-cardiac abnormalities [4]. Several syndromes caused by chromosomal abnormalities have been associated with HLHS, including Turner syndrome (monosomy X), Edwards syndrome (trisomy 18), DiGeorge syndrome (deletion of 22q11.21), and Jacobsen syndrome (deletion of 11q) [4][5][6]. Isolated variants in genes involved in cardiac development have been associated with HLHS (Table 1).
Haploinsufficiency or loss of function of TAB2 alone has been shown to be responsible for a multi-system disorder including CHDs. We previously described a 4generation family (the largest reported to date) with a 6q25.1 microdeletion encompassing TAB2 (TGF-beta activated kinase 1/MAP3K7 binding protein 2) [19]. All affected family members were born with cardiac Encodes a transcription factor involved in a variety of developmental processes including the cardiovascular system abnormalities, several with aortic valve malformations, including bicuspid aortic valve (BAV). We now update this family description to include details of the cardiac abnormalities in affected members. We also report the confirmed presence of the TAB2 deletion in a second child in the family to die in infancy from HLHS. These findings suggest that haploinsufficiency of TAB2 is a risk factor for HLHS, expanding the phenotype of the previously reported 6q25.1 microdeletion syndrome [19].
Case presentation
This report focuses on the second member of the family to die during infancy from complications related to HLHS (VI.3; Fig. 2). Except for this newborn baby (VI.3, Fig. 2) this family's syndromic features, including their extra-cardiac findings, have been previously described [19]. In this report we highlight their echocardiographic findings. For the cardiovascular manifestations in the family, the proband (II.3) was born with BAV and developed progressive aortic dilation, and he ultimately required aortic valve replacement and aortic root repair (Fig. 3). The proband's father (I.1) also had BAV, along with mitral valve prolapse and a redundant tricuspid valve; he ultimately died from heart failure. The proband had 4 children (III.2-III.5), all born with congenital valve malformations. The first child (III.2) died within a week of birth. He was the first family member to have HLHS, characterized by a hypoplastic/diminutive left ventricle with a large atrial septal defect, dysplastic aortic valve with aortic stenosis, hypoplastic aorta and aortic arch, aortic coarctation, and a large patent ductus arteriosus. He also had redundant atrioventricular valves. The second child (III.3) has BAV, bileaflet mitral valve prolapse, and a myxomatous tricuspid valve (Fig. 3). Likewise, the third child (III.4) has BAV and substantial mitral valve thickening (Fig. 3). III.4 has a daughter (IV.1), who was also born with BAV, an atrial septal defect, and a mildly dysplastic pulmonic valve (Fig. 3). Both the fourth child (III.5) and the proband's sister (II.2) have normal aortic valves but thickened/redundant mitral valve leaflets with mild-to-moderate mitral regurgitation (Fig. 3).
Given the proband's (II.3) enlarged aorta and bicuspid aortic valve, genetic testing initially focused on genes Testing of the remaining living members of the family showed that the deletion segregates with CHD (Fig. 2). Genetic testing was not possible for III.2, who had died years earlier in the newborn period from complications of HLHS. Thus, while highly probable, it was not definitive that III.2 with HLHS had the familial microdeletion. However, III.5 recently had a son (IV.3) also born with HLHS. In addition to HLHS, fetal ultrasound revealed short long bones, intrauterine growth restriction, and a horseshoe kidney. The baby was born at 39 weeks. Birth length was 44.5 cm (< 1%), and birth weight was 3.21 kg (39%). Physical exam on day one of life revealed a sacral dimple and syndromic facies with low-set, posteriorly rotated ears. Despite being born at term, the baby had lung hypoplasia and developed severe respiratory distress. Postnatal echocardiogram showed a diminutive, hypoplastic left ventricle with a parachute mitral valve and BAV. He had a hypoplastic aortic arch with a discrete coarctation. He also had an unrestricted atrial septal defect and a large patent duct arteriosus providing systemic blood flow (Fig. 4). His respiratory distress worsened, and he was too unstable for surgical palliation. The baby died 15 days after birth. Postnatal CMA performed on umbilical cord blood, using the Agilent GGXChip + SNP v1.0 4x180K array platform described previously [19], detected the same microdeletion encompassing TAB2 as seen in the rest of the affected family members.
Discussion and conclusions
This family's 6q24.3-25.1 deletion is 1.76 Mb and spans 21 genes (Supplemental Figure 1) [19]. There are multiple lines of evidence implicating TAB2 as the causal gene for structural CHD in this region [19][20][21][22], though we cannot definitely exclude involvement of the 20 other genes in this family's structural heart disease. TAB2 is heavily expressed in the endocardial cushion and plays an important role in outflow tract and valvular formation during human embryonic development. Titrated knockdown of TAB2 in embryonic zebrafish showed dose-sensitive defects in cardiac development [20]. TAB2 was shown to be the only gene within the smallest overlapping region among patients with a 6q25.1 microdeletion and CHD [19], and a balanced translocation that disrupted TAB2 was shown to segregate with familial CHD [20]. Ackerman et al. recently reported a child born with a similar CHD presentation with a sporadic TAB2 nonsense variant (c.1491 T > A; p.Y497X) [21]. TAB2 microdeletions have also been associated with more complex CHD, including tetralogy of Fallot [22]. Our report is the first associating TAB2 haploinsufficiency with HLHS.
Hitz [23] and Carey [24] hypothesize that up to 10% of HLHS is related to chromosomal microdeletions or duplications. In this family with a known chromosomal deletion, two members in differing generations died of HLHS, one of which was verified to have the TAB2 microdeletion. It is unlikely that this is coincidental and unrelated to the deleted gene known to affect cardiac development. Generational skips in phenotype could be related to an autosomal recessive inheritance pattern, but given the rarity of HLHS, the odds of autosomal recessive inheritance are extremely low. The family's phenotypic and genotypic findings suggest that haploinsufficiency of TAB2 is a risk factor for HLHS. As we collect genetic data on cohorts of individuals with HLHS, it will be worthwhile to see if a 6q25.1 deletion/TAB2 abnormality is more pervasive in this population.
In our 4-generation family, BAV is widely prevalent. A genetic relationship between HLHS and BAV has long been speculated [25]. Approximately 10% of relatives of infants with HLHS have BAV, whereas BAV is present in only 1-2% of the general population [26,27] . Hinton et al. reported a set of monozygotic twins, one with BAV and the other with HLHS [3]. Pathogenic variants in other genes, such as NOTCH1, cause a spectrum of aortic valve abnormalities, including both BAV [13] and HLHS [28]. Based on observation alone, we cannot definitively prove that BAV and HLHS co-segregate within the family through a common genetic defect. However, like NOTCH1, TAB2 is important in embryonic cardiac development [20]. TAB2 deletions and loss-of-function This case report also underscores the complexity of genotype-phenotype predictions. Even within a single family with the identical 6q25.1 microdeletion, there is great variability in the spectrum of CHD, from simple valvular defects to HLHS. Variable expressivity, genetic heterogeneity, and reduced penetrance have been proposed as possible factors contributing to genotypephenotype differences in CHD [1]. TAB2 appears to be a risk factor for HLHS, but there may be other genetic modifiers and environmental factors critical to the development of this congenital abnormality. Recently, a study using 8 mouse lines with HLHS highlights the genetic heterogeneity of HLHS. Exome sequencing revealed 330 coding or splicing mutations, none which were shared among the mouse lines. In addition, 5 mouse lines had pathogenic variants in 2 or more genes in analogous human chromosomal regions previously associated with HLHS or LV outflow tract obstruction [29]. This discovery favors a multigenic etiology for HLHS. Perhaps the two family members affected with HLHS (III.2 and IV.3) had additional genetic variants predisposing them to more complex CHD.
Although we cannot yet predict who with a TAB2 deletion will have HLHS, our observation that TAB2 haploinsufficiency is associated with HLHS is an important step in further elucidating the genetic underpinnings of this complex congenital heart disease. Thus far, only a handful of gene abnormalities have been linked with this CHD. Given the universal mortality associated with HLHS without early palliation, and the newly recognized association with the 6q25.1 microdeletion, we recommend a fetal echocardiogram in all women carrying an at-risk fetus. Pre-conception genetic counseling is recommended for affected individuals, even those with only a mild phenotype. Furthermore, testing for abnormalities in TAB2 should be considered in patients with HLHS with any non-cardiac abnormalities, including prenatal growth restriction, short stature, and/or dysmorphic facial features. Given the likely genetic heterogeneity of HLHS, chromosomal microarray analysis to evaluate for microdeletions, reflexing to molecular testing for TAB2 loss of function variants should be standard in the genetic work-up of these patients. | 2,835 | 2020-03-17T00:00:00.000 | [
"Medicine",
"Biology"
] |
Novel prognostic prediction model constructed through machine learning on the basis of methylation-driven genes in kidney renal clear cell carcinoma
Abstract Kidney renal clear cell carcinoma (KIRC) is a common tumor with poor prognosis and is closely related to many aberrant gene expressions. DNA methylation is an important epigenetic modification mechanism and a novel research target. Thus, exploring the relationship between methylation-driven genes and KIRC prognosis is important. The methylation profile, methylation-driven genes, and methylation characteristics in KIRC was revealed through the integration of KIRC methylation, RNA-seq, and clinical information data from The Cancer Genome Atlas. The Lasso regression was used to establish a prognosis model on the basis of methylation-driven genes. Then, a trans-omics prognostic nomogram was constructed and evaluated by combining clinical information and methylated prognosis model. A total of 242 methylation-driven genes were identified. The Gene Ontology terms of these methylation-driven genes mainly clustered in the activation, adhesion, and proliferation of immune cells. The methylation prognosis prediction model that was established using the Lasso regression included four genes in the methylation data, namely, FOXI2, USP44, EVI2A, and TRIP13. The areas under the receiver operating characteristic curve of 1-, 3-, and 5-year survival rates were 0.810, 0.824, and 0.799, respectively, in the training group and 0.794, 0.752, and 0.731, respectively, in the testing group. An easy trans-omics nomogram was successfully established. The C-indices of the nomogram in the training and the testing groups were 0.8015 and 0.8389, respectively. The present study revealed the overall perspective of methylation-driven genes in KIRC and can help in the evaluation of the prognosis of KIRC patients and provide new clues for further study.
Introduction
Kidney renal clear cell carcinoma (KIRC) is the most common type of renal carcinoma and accounts for approximately 80-90% of all kidney tumors [1]. KIRC is characterized by a high risk of metastasis and the main cause of death from kidney tumors [2,3]. With the application of medical imaging, such as computed tomography and ultrasound, the early discovery of tumor is improved. However, despite substantial advances in its diagnosis and treatment, the mechanisms underlying KIRC are not fully understood, and the prognosis is still poor [4].
DNA methylation is an important epigenetic modification to regulate gene expression. In some promoter-related cytosine-phosphate-guanine (CpG) sites, DNA methylation often contributes to the silencing of downstream genomic regions [1]. Studies show that tumor suppressor genes can be inhibited by hypermethylation, whereas oncogenes can be activated by hypomethylation [5,6]. Many studies have explored the abnormal DNA methylation in cervical [6] and breast [7] cancers, which may be possible markers for diagnoses, therapeutic targets, and prognosis [5,8]. The DNA methylation in KIRC has been studied for several years [9]. The suppression of Wnt antagonists through DNA methylation plays an important role in the proliferation of many RCC cell lines and patient tumor samples [10]. The DNA methylation of several genes, such as PARVG, PLCB2, and RAC2, are correlated with RCC prognosis [11]. The hypermethylation of cystatin 6 and LAD1 appears to be associated with poor prognosis and poor response to some antiangiogenic therapies in RCC [12].
With the rapid development of computer technology, an increasing number of biomarkers of various tumors are identified using bioinformatics analysis, and many tumor-associated databases are established [13]. The Cancer Genome Atlas (TCGA) is a public database that includes 33 cancer types and matched clinical data [14]. Studies reveal many genes, such as CEP55, ACAA1, ACADSB, ALDH6A1, and AUH, that may serve as potential diagnostic and prognostic biomarkers for KIRC [13,15]. The methylation-driven gene is differentially expressed in the control and the disease groups, and the methylation status is negatively correlated with corresponding gene expression value. At present, systemic analysis on methylation-driven genes in KIRC remains limited. In addition, no multivariate prognostic model based on methylation-driven genes especially a clinical prediction model combining DNA methylation data and clinical data exists.
In the present study, we combined the TCGA-KIRC clinical, methylation, and expression data to identify the DNA methylation profiling in KIRC, discover the prognosis-related methylation genes and position, establish methylation prognostic prediction model, and construct an easy trans-omics prognostic nomogram. The findings of the present study can provide valuable data resources for KIRC in clinical and molecular study in the future.
Preparation of dataset
The KIRC methylation data, RNA-seq expression counts, and clinical information were downloaded from the TCGA website (https://cancergenome.nih.gov/). After combining the methylation data, the methylation positions and gene names were matched, and the average methylation level of each gene was calculated (normal samples = 160, tumor samples = 325). The R package DEseq2 (version 1.20.0) was used to standardize the RNA-seq expression counts and obtain the gene expression value matrix. The follow-up time, survival state, age, gender, and tumor stage were extracted from clinical information and used for subsequent analysis. A total of 317 tumor samples were obtained by matching the methylation data, gene expression value, and clinical information matrices in accordance with the sample number. The samples with a survival period of less than 30 days were removed, and a total of 294 tumor samples were obtained for survival analysis.
Differential methylation analysis
The R package limma (version 3.36.5) was used to calculate the differential methylation between the tumor and the normal groups. The fold change value of methylation levels between the tumor and the normal groups was calculated and then standardized through logarithm. Wilcox test was used for the statistical analysis of methylation data and adjusted. P value < 0.05 was used as a cutoff standard. The R package pheatmap (version 1.0.10) was used to visualize differential methylation.
Calculation of methylation-driven genes
The matched gene expression values matrix, the DNA methylation values of the tumor group, and the DNA methylation values of the normal group were input into the R package MethylMix (version 2.10.2) to obtain the methylation-driven gene. The Wilcox test was used for statistical analysis, and P value was adjusted using the Bonferroni method. Adjusted P value < 0.05 was used as cutoff standard.
Gene Ontology (GO) analysis
The GO analysis for all methylation-driven genes was performed using the R package cluster profiler (version 3.8.1) based on org.Hs.eg.dbdatabase (version 3.6.0). The parameters of enrichGO analysis function were: ont = "BP," pAd-justMethod = "BH," pvalueCutoff = 0.05. The GO analysis results were visualized using the enrichplot (version 1.2.0) and the GOplot package (version 1.0.2).
Construction of the methylation prognostic prediction model by using the Lasso regression
The clinical survival information and methylation data of driven genes were integrated in accordance with the sample name. Survival analysis and result visualization were performed using the R package survival (version 2.43-1) and the SurfMiner package (version 0.4.3). The "Handout" method was used for model construction and testing. Briefly, all samples were divided into the training (70%) and the testing (30%) groups by using the hierarchical random method through the "createDataPartition" function in the machine learning R package "caret" (version 6.0-81) to keep each categorical variable of the data in the subset consistent with the original proportion of the overall data, thereby ensuring that the data distribution of the training and the testing groups is consistent. The methylation prediction model in the training group was established as follows. The cv.glmnet function of the machine learning package glmnet (version 2.0-16) was used to calculate the Lasso rank. The glmnet function was used to calculate the Lasso by using the Cox multivariate regression model. The model with the best performance and the least number of independent variables was selected using the best lambda value. Samples were divided into high-and low-risk groups, and the survival rate was further compared using this methylation prognostic prediction model. P<0.05 was regarded as the cutoff standard. The predictive efficiency was evaluated using the area under the receiver operating characteristic curve (AUC). This methylation prognostic prediction model was also validated in the testing group.
Correlation analysis of methylation-driven genes and methylation positions
In accordance with the name of selected methylation-driven genes in the previous methylation prognostic prediction model, the multiple methylation position details of these genes were extracted from the TCGA methylation dataset. The Rcorr function in the Hmisc package (version 4.1-1) was used to calculate the correlation between the expression value of methylation-driven genes and methylation positions. The significant correlations were screened using the absolute value of correlation coefficient > 0.4 and the P value of correlation test < 0.05. These correlations were visualized using the Cytoscape software (version 3.7.1).
Construction of trans-omics prognostic nomogram
The clinical information, such as age, gender, and tumor stage, combined with the score of methylation prognostic model was used to construct the trans-omics prognostic prediction model. The machine learning package caret (version 6.0-81) was used to divide all samples through random stratification. Approximately 70% and 30% of the samples were used as the training and the testing groups, respectively. The rms package (version 5.1-2) was used to build the Cox proportional hazards model, draw the nomogram, and calculate the c-index to evaluate the efficiency of the trans-omics prediction model in the training and the testing groups. In both groups, the machine learning calibration method was used to calculate and draw the calibration curve to evaluate the model's accuracy. Figure 1A shows the flow diagram of the present study. We obtained the related RNA-seq counts (n=602), DNA methylation β-values (n=485, normal = 160, tumor = 325), and clinical information (n=537) from TCGA. The methylation of some samples was not tested, and the methylation and the expression data did not have one-to-one correspondence. After matching the methylation data with the expression data in accordance with the sample number, we obtained DNA methylation samples (n=477, normal = 160, tumor = 317) to screen methylation-driven genes. Then, we constructed and verified the methylation prognostic model by using machine learning.
Differential methylation gene analysis
A total of 134 differential methylation genes with |log 2 FC| >1 were adjusted. P values < 0.05 are shown in the hierarchical clustering heat map ( Figure 1B). The methylation levels of the normal and the tumor groups were significantly different, and the hypomethylation and the hypermethylation genes were distinguished evidently by hierarchical clustering. Results also showed that the hypomethylation and the hypermethylation genes contained protein coding genes, such as CECR6, RNF180, CXCL1P, HIST3H2A, and ZNF492, and noncoding genes, such as HOXB13-AS1 2, RP11-343J3.2, DGUOK-AS1, AC009506.1, and AL021918.2, which indicated that the extent of methylation was universal for genes.
Methylation-driven gene discovery and GO analysis
A total of 242 methylation-driven genes were screened using the MethylMix package. The detailed information of all methylation-driven genes is listed in Supplementary Table S1. The top 10 genes with the highest absolute correlation coefficient between methylation state and gene expression value are shown in Figure 2A. These genes were XIST, CCDC8, KRTCAP3, SMIM3, DCAF4L2, ZNF471, ALDOC, LGALS12, VEGFA, and AQP1. The correlation coefficients were between −0.681 and −0.875 and showed a significant difference (P<0.05).
The GO analysis of the 242 methylation-driven genes showed that in biological process, 242 methylation-driven genes was mainly clustered in the activation of immune cells ( Figure 2B). These biological processes were closely related to tumorigenesis and development. Figure 2C-J shows the typical gene methylation pattern of the top 10 genes, and Figure 2A,C shows the methylation density distribution of the LGALS12 gene, which was hypomethylated in the tumor group with simple methylation component. Figure 2D shows the correlation coefficient (r = −0.706, P=3.295e-49) between the expression value of LGALS12 and its methylation level. AQP1 also had low methylation in the tumor group but had two components with different distributions. The correlation coefficient between the expression value and the methylation state was −0.681 (P value = 1.724e-44; Figure 2E,F). Figure 2G shows that CCDC8 was a hypermethylation gene with a single methylation component in the tumor group. Figure 2H shows the correlation coefficient (r = −0.804, P=5.893e-73) between the expression value of CCDC8 and the methylation level. The KRTCAP3 gene was also hypermethylated in the tumor group with two components in different distribution. The correlation coefficient between the expression value and the methylation level was −0.765 (P=2.95e-62, Figure 2I,J).
Construction and evaluation of the methylation prognostic prediction model by using machine learning
Samples with survival period less than 30 days were removed, and the remaining samples were stratified to obtain the training (n=206) and the testing (n=88) groups. First, the Lasso method was used to screen the variables and establish the survival prognosis model in the training group. As shown in Figure 3A,B, the Lasso logistic regression was performed in 242 methylation-driven genes. Certain coefficients were accurately reduced to 0 by forcing the total absolute value of the regression coefficients to be less than the constant value, and the most powerful prognostic predictors were selected. The survival prognosis model included four gene methylation data, namely, FOXI2, USP44, EVI2A, and TRIP13 with Lasso coefficients of 1.7373362, 0.4491624, −1.7746901, and −3.2954915, respectively.
We used the present model to score and divide the training group into high-and low-risk groups in accordance with the best segmentation point. Compared with the survival rate of the two groups, the Kaplan-Meier (K-M) curve showed that the survival rate of the high-risk group was significantly lower than that of the low-risk group (P<0.001, Figure 3C). The ROC curve was used to evaluate the prediction efficiency. The areas of 1-, 3-, and 5-year survival rates were 0.810, 0.824, and 0.799, respectively. Results indicated that this model had a good prediction ability for the training group ( Figure 3D).
Similarly, we used the same model to score the testing group and compared the survival rate of the high-and the low-risk groups. The K-M curve also showed that the survival rate of the high-risk group was significantly lower than that of the low-risk group (P=0.001, Figure 3E). The ROC curve indicated that the 1-, 3-, and 5-year survival rates were 0.794, 0.752, and 0.731, respectively. The prediction ability of the model was also desirable for the testing group.
Analysis of methylation positions of the key genes in the prediction model
We further investigated the methylation and the main methylation positions of the four key genes in the prediction model. Results showed that USP44 was a hypermethylation gene in the tumor group ( Figure 4A), and the most related methylation positions were cg22538054, cg23982858, cg22802813, and cg17368254 ( Figure 4B). FOXI2 was also a hypermethylation gene ( Figure 4C), and the most related methylation positions were cg19509778, cg24718722, cg08829841, and cg26115633 ( Figure 4D). For the low methylation gene ( Figure 4E,G), the most relevant methylation positions of EVI2A were cg2332595 and cg22473770 ( Figure 4F), and the most relevant methylation positions of TRIP13 were cg03637066, cg11421768, and cg12705693 ( Figure 4H). These selected methylation positions will contribute to further identify the role of these genes in KIRC development.
Easy trans-omics prognostic nomogram combined with methylation prediction model and clinical information
Tumor prognosis was closely related to clinical stage and other indicators. We combined several key clinical indicators, such as age, gender, tumor stage, and methylation model score, to create a simple prognostic nomogram and establish a comprehensive and easy prediction system. Figure 5A shows that the training group had the largest weight of the tumor stage, same extent of age and methylation prognosis score, and small gender effect. Through the score of these indicators, the 1-, 3-, and 5-year survival rates were easily obtained by calculating the total scores and the corresponding survival rate. The C-indices of the training and the testing groups were 0.8015 and 0.8389, respectively, which indicated that the prediction system was effective and would be verified in other cases. The nomogram displayed high levels of accuracy to predict the 1-, 3-, and 5-year overall survival of patients with KIRC in the training and the testing groups, as shown in the calibration curves ( Figure 5B-G). Therefore, the prognostic nomogram works accurately for patients with KIRC.
Discussion
In our study, we have found many differential methylation genes and positions. Methylation occurs in coding and noncoding genes. For coding genes, DNA methylation has been observed in the suppression of specific tumor suppressor genes [16]. The methylation loss of CDKN2A and CDKN2B is demonstrated in myeloma patients [16]. The hypomethylation of SALL4, a member of the zinc-finger transcription factor gene family, is responsible for the aberrant expression of SALL4 in B-ALL [17]. Another level of methylation is the influence of noncoding genes on protein expression regulation. microRNAs and ncRNAs may have important roles [13]. The methylation-mediated loss of miR-34b/c can up-regulate its oncogenic target genes [18,19]. The hypermethylation of mir-127 and mir-125b-1 in breast cancer is closely associated with tumor metastasis [20]. The methylation of noncoding genes has an important prospect, and their direct or indirect regulation with gene expression needs to be studied. Our study also indicates that the methylation of coding and noncoding genes extensively exists in KIRC.
We have used the negative correlation between the gene methylation level and expression value to select methylation-driven genes via the R package MethylMix, a generally accepted method [21,22]. Otherwise, establishing the corresponding relationship between methylation and expression value is difficult. In accordance with this criterion, more than 200 methylation-driven genes are identified. Their functions are concentrated on immunoregulation, such as the activation, proliferation, and adhesion of immune cells, which are closely related to the mechanism of tumorigenesis and development. KIRC has a high level of tumor-infiltrating immune cells [23]. The activated CD8+T cells are associated with the prognosis of many cancers, including RCC, and the infiltrating CD4+ T cells can regulate RCC cell proliferation by modulating [24]. Our GO results suggest that these methylation-driven genes can be significant in immune infiltration in KIRC.
In addition, we have found that the methylation of some genes has different distribution patterns in different samples, indicating that different subtypes may be present in KIRC. Previous studies have reported that several genes, such as NTHL1, ZCCHC24, and SNX1b, have different methylation status in different subtypes of breast cancer [25]. These features need further study, and these genes may become potential clinical biomarkers for KIRC subtyping. We have calculated the average methylation level of the genes in the present study. However, the methylation The Lasso regression, a kind of machine learning method, is suitable for the multivariate selection. This method shrinks the regression coefficients, thereby effectively selecting important predictors and the established model that can compress the number of variables as much as possible [26,27]. In our study, we have used the selected methylation-driven genes as variables and established a multivariate prognostic model by using the machine learning method. The model only contains four methylation genes and is relatively simple. These genes and their methylation have not been reported in KIRC. For the functions of these genes, USP44 is related to proliferation, migration and invasion, induced apoptosis, and arrested cell cycle in the G2/M phase in the established glioma cell lines [28]. FOXI2 methylation may be associated with increased risk of oral and colorectal cancers [29,30]. EVI2A hypomethylation may be associated with head and neck squamous cell carcinomas [31]. TRIP13 plays an indispensable role in cell progression, contributing to tumorigenesis and drug resistance [32]. The functions of these genes are closely related to tumors, further indicating that they may play important roles in KIRC. By using these gene methylation statuses as score criterion, we can easily distinguish high-and low-risk patients through this model. The model's predictive efficiency (i.e. AUC) is also satisfactory, which is verified in the testing group. The change in the DNA methylation pattern may be a good indicator of tumorigenesis and development. Several studies have mentioned the clinical application of multivariate predictive model through DNA methylation. Studies have reported the methylation profiles of HCC tumor DNA and successfully constructed a diagnostic prediction model to predict prognosis and survival rate [33]. Another study has also constructed a survival prognosis model by using the DNA methylation signature in ovarian serous cystadenocarcinoma, which shows high sensitivity and specificity to predict the prognostic survival of patients [34]. Our results demonstrate that the model shows a superior performance for prediction and has potential clinical application.
Tumor prognosis cannot be determined using a single factor. We combined the methylation prognostic prediction model with the clinical information of patients and constructed an easy and effective trans-omics prognostic nomogram to further study the clinical prediction model. Trans-omics integrates clinical and molecular multiomics [35]. Many research studies combine biomarkers and clinical indicators to construct the clinical model. In lung cancer, serum biomarkers ProGRP, CEA, SCC, and CYFRA21-1 are combined with clinical information to construct patient and nodule risk models [36]. A prognostic prediction model reveals that C1QTNF3 is a promising biomarker for prostate cancer [37]. Our trans-omics prognostic nomogram is accurate, simple, and easy to apply. We have constructed this novel nomogram in KIRC, which has great value for the prediction of survival rate.
Recently, studies report a few prognostic models related to methylation in KIRC. The model reported by Xu et al. is based on five methylated CpG sites without trans-omics data [38]. In Guang Chen's study, the top 15% of the nodes in the enriched pathway networks are selected to screen the methylated genes related to prognosis. However, in the process of variable screening, some important information may be missed. Moreover, the model's performance has not been evaluated [39]. Evelönn et al. has built a model from local patients' CpG methylation data combined with CNV data and only used TCGA data for validation. This model has many complicated parameters. which may lead to limitation for clinical application [1]. The above models are completely different from the model established in our study. The model built by Hu et al. is similar to ours [40]. However, methylation-driven genes are not used for variable screening. Seven genes are incorporated in their model, whereas only four genes are incorporated in our model. Moreover, nonsilent mutations in VHL are incorporated in their model, which may lead to increased difficulty of practice. The factor of gender is not incorporated in their model. The AUC values of their model in the testing dataset are only 0.677, 0.66, and 0.71 (1-, 3-, and 5-year survival rates, respectively), whereas the AUC values of our model are 0.794, 0.752, and 0.731 (1-, 3-, and 5-year survival rates, respectively). Therefore, the model discovered in the present study is simple and efficient.
Conclusion
Several limitations were reported in the present study. First, the external data were not available for further verification. Second, the analysis results should be supported and verified by clinical and experimental tests. In general, the present study was the first to reveal the overall perspective of methylation-driven genes in KIRC. We used the machine learning method to establish a multivariate methylation prognostic prediction model and combined with clinical information to build the trans-omics prognostic nomogram. The model discovered in the present study is novel, simple, and efficient. These results can help in the accurate evaluation of the prognosis of KIRC patients and provide new clues and data resources for the further study of the pathogenesis and the development of the disease.
Data Availability
All data generated or analyzed during this study are available from the corresponding author on reasonable request. | 5,380.2 | 2020-07-07T00:00:00.000 | [
"Biology"
] |
Numerical Investigation of the Combined Slot Effect on the Erosion Pattern Around a Combination of Spur Dikes in Series
Modifying the river course for flood control, prevention of bed erosion, bank protection, and the regulation of river width are among the goals of spur dikes incorporation. The common spur dikes have simple (I), L and T geometrical shapes. The present research has been conducted to reduce the scour depth in front of the spurs dikes and improve the sedimentation conditions for the LTT combination of spur dikes in series by investigating different combinations of slots in the body of the spur dike; using numerical methods. The slot dimension was taken equal to 10% of the effective area of the spur dike body. Finally, the (LS-W-Wi, TS-W, TS-W-Wi) combination contained the slots in the web and wing of the first and third spur dike also the slot at the web of the middle spur dike was found as the best combination of slots. This combination conducted to reduce the scour depth about 6.8% and increase the deposition about 52% comparing by the spurs dikes without slots. Reducing the scour depth and increasing the sedimentation rate of materials between the spurs dikes. Also, the maximum scour depth decreases up to 20%. The results revealed that the presence of slots in spur dike structures and their different positions have complicated and considerable influences on the form and morphology of the erodible bed which could be the topic for further researches.
Introduction
Rivers bank erosion and bed changes have always been of interesting for engineers. Various methods and structures such as spur dikes to control bank erosion and bed river changes. Spur dikes could be implemented with simple, L -shaped, T-shaped, triangular and other forms have different angles with respect to the flow direction. They decrease the flow velocity between each other which conducts the reduction in flow intensity and increases the sedimentation. Control of scouring around hydraulic structures is one of the important stability issues in design to prevent the damage of structures. Scouring around spur dikes is produced by down-flow and initial vortices at the upstream corner of the spur dike and also secondary vortices and wake at the middle and its downstream corner (Barbhuiya & Dey, 2004;Coleman et al., 2003). Therefore, different methods are proposed to reduce scouring and prevent undesirable effects against the stability of the structure. One of these controlling methods is the change in the flow pattern and reduce its strength. Using collars, vanes, a combination of spur dikes in series, and slots are among the main solutions for changing the flow pattern (Chiew, 1992;Nayyer et al., 2019). Slots reduce the strength of down-flow and horseshoe notices by diverting the downflow at the upstream face of the spur dike and the side flows around it. Slots induce horizontal flow jet in the vicinity of the bed which conducts the reduction in pressure gradients and downflow transfer away from the structure. All these effects lead to a reduction in scouring around spur dikes (Kumar, 1996). Chen and Ikeda (1997) performed many studies regarding the flow pattern around a single spur dike in the straight reach, experimentally. They studied the formation, development, and transfer of horizontal eddies around the spur dike nose and concluded that the shedding eddies are separated from the tip of the spur dike and periodically transferred downstream. Scour depth reduction around bridge pier; using the slots under different conditions is investigated by various researchers such as Kumar et al. (1999), Babar et al. (2000), Heydarpour et al. (2007), Moncada-M et al. (2009), Heidarnejad et al. (2010), Tafarojnoruz et al. (2012). In all of these researches, the efficiency of slots is approved, but fewer researches have been done on the erosion around the spur dikes which are reviewed here. Ho et al. (2007), using Yeo et al. (2005) results done on the flow pattern around a spur dike to validate the flow-3D numerical model. Finally, they found the best position of the spur dike in a rectangular canal. Zhang and Nakagawa (2008), performed some experiments for permeable and impermeable single spur dikes on the erodible beds. They showed that the maximum scour depth around the permeable spur dike is 50% lower than the impermeable spur dike. In the research which performed by Keshavarz et al. (2009) through numerical modeling of the flow around the spur dike which was positioned both perpendicular and oblique to the bank, they demonstrated that the K-ε turbulence model provides the best estimation of scour depth in comparison with the experimental results. Acharya and Duan (2011) conducted a 3D numerical study of the turbulent flow pattern around a series of sharp-edged spur dikes along a straight channel with constant and mobile beds using FLOW-3D software. They used the K-ε turbulence model for simulation of the flow and compared the results of the simulation with the experimental ones. Hasanpour et al. (2012) investigated the slot effect on the temporal development of scouring around the spur dike. Their research revealed that the slots reduce the scour depth around spur dikes. Maximum scour depth is reached for the slot height equals to the flow depth. Scour depth reduction has been reported by about 28%. Influence of flow rate on the flow pattern around a simple spur dike was investigated by Vaghefi et Al. (2016). They used the VOF model to detect the water surface and K-ε model for turbulence for turbulence model. They found that the numerical model could simulate the flow pattern around a spur dike. 3D simulation of flow around different spur dikes was performed by Kumar and Malik (2016) using the Ansys Fluent software. They investigated various types of spur dikes and stated that the effect of Froude number on the protection length or rotational region is negligible. Also, they found that the shape of spur dike was more efficient on the bed layer than the other layers. Lee and Jang (2016) investigated the effect of distance between two adjacent spur dikes in a series on the bed scouring and flow pattern. They stated that the scour depth was increased by increasing the distance between the spur dikes. Gu et al. (2016) studied flow around spur dike in series for finding appropriate CFD models. The comparisons between the CFD yields and experimental data appear that all turbulence models (K-ɛ, RNG and LES) are able to simulate the three-dimensional flow around spur dike in series with proper conformity. Monjezi et al. (2017) studied the heights, widths and distance effects of the slot in the spur dike body on rip-rap stability at the bend for flow rate equals to 27 l/s. They proposed a dimensionless parameter named slot ratio (x/l) which described the ratio of the distance between the slot edge and the spur dike tips (x) and spur dike length (l). They showed that for the slot ratio equals to 0.25 was more stable than for the slot ratio equals 0.75. Dorosti et al. (2018) studied the effect of distance between the slot and spur dike tip on erosion and sedimentation pattern. They showed that the presence of the slot in the body of spur dike near the bed conducted better performances in providing the balance between sedimentation height and local scouring in comparison with the conditions in which the slot situated near water surface or a spur dike without slot. The former improved the performance of spur dike up to 60%. Masjedi and Jafari (2018) used a slot in the body of the spur dike to control scouring around the spur dike located within a 180-degree bend. The experiments were performed for a single spur dike without slot (control state) and the spur dike with slot situated in four positions and two heights for four flow rates. They showed that the amounts of reduction in scouring related to the location and height of the slot and minimum scour depth occurred for the slot in the vicinity of the spur dike tip. Also, the scouring is increased by increasing the distance between the slot and the spur dike tip. Nayyer et al. (2018) investigated the scouring around a simple (I shape) spur dike under the influences of the other shapes of spur dikes in its neighborhood. They showed that the best combination for minimizing the scour depth around a series of spur dikes is I shape, T shape, and L shape, respectively (ITL) from upstream to downstream. Kumar et al. (2018) investigated scouring around the simple and T-shaped spur dikes. Regarding the results, they concluded that the T-shaped spur dikes were more effective in the protection of the bed and also the reduction in the scour depth and damage to the hydraulic structures. Therefore, where the protection of hydraulic structures is essential, using a T-shaped spur dike is recommended and the simple spur dike should be implemented where diversion and displacement of the flow are of high importance. Vaghefi et al. (2018) studied flow conditions around the T-shaped spur dike in the presence of a protective structure situated oblique or perpendicular to flow, by the numerical model. This study revealed that the maximum shear stress in the vicinity of bed for the protective structure in the attracting and repelling states increased by 23.5% and 17.6%, respectively, in comparison with the vertical state. This study showed for the angles less than 15 degrees in the oblique protective structure, the strength of secondary flow increased in the vicinity of the main spur dike is 24% for attractive spur dike and 15% for repellent spur dike in comparison with the perpendicular one. By increasing the degrees to 20 and 30 degrees, the strength of secondary flow decreased about 14.6% and 15.5%, respectively for attractive and repellent spur dike in comparison with the perpendicular one. The morphological effects around a triangular spur dike positioned a slot in its web in a 90degree bend was the topic of the research done by Meymani et al. (2019). They investigated bed changes for a rectangular slot with an opening equals to 10% of the effective area of the spur dike in plane for angles and hydraulic conditions in the problem. Their studies revealed that in the presence of the slot augmented the distance between the scour pits at the outer bank and reduced the maximum scour depth. Farzin et al. (2019) have used GMDH and GEP models for analyzing different parameters of protective spur dike for reduction of bed scour depth. They reported that the ratio of protective spur dike's length to the main spur dike length is the most impressive parameters for decrease of scour depth. Scouring around a spur dike with the mixture of gravel and sand performed by Pandey et al. (2019) revealed that it was mainly affected by the sediment mixture properties. Thus by reducing the heterogeneity of the sediment mixture, the scouring rate increased. Yang et al. (2019) investigated the maximum water depth upstream of the permeable spur dike within a bend. They stated that the change in the depth is a function of the pattern used for spur dikes installation within the river bend. They showed that the maximum depth occurs where the spur dikes are installed at the half of the bend with 75-degree angle. Furthermore, the maximum water depth occurs at the section where the spur dike encounters the outer bank of the bend. Nayyer et al. (2019) investigated the flow parameters around the spur dikes in a combination with different usual shapes (I LT) situated in series, experimentally and numerically. They stated that the (LTT) combination had the highest effect on reducing the velocity, shear stress and turbulence intensity around the spur dikes. So, it seems that a combination of different geometries could have a considerable effect on reducing the scouring and increasing the sedimentation between spur dikes. Temporal scour depth around spur dike were investigated by pandey et al. (2020). They used vertical wall spur dike and measured scour depth according to time variation. They concluded that with increasing the threshold velocity ratio, the Froude number and the flow depth-particle size ratio, scour depth increased. Zamani et al. (2021) have compared the Flow-3D and experimental output of the effect of spur dikes position on the hydraulic characteristic and scouring conditions of lateral intakes. They reported that the location of the spur dike in front of the intake has achieved the best scour and bifurcation ratio. As reviewed here, widespread studies have done on the performances of spur dikes in different conditions, which indicate the importance of this structure in the river engineering including the control of river bed and banks protection from erosion and also control of flow conditions in the river. So, the stability control of the structure from scouring is of great importance. Different methods such as the use of protective spur dikes, collars, rip-rap, changes in the geometry of spur dikes and creating slots in the body of spur dikes are among the methods used for this purpose. Use of the combination of spur dikes in series with different geometries is the approach which has reduced scouring around the spur dikes. The utilize of numerical models to anticipate and evaluate erosion around hydraulic structures is essential in various water resource management issues, such as scouring and erosion control, hydraulic characteristic and streamline analyzing and sedimentation. Therefore, in this research, after verification tests of the numerical model, a combination of slots in the body of the combined series of spur dikes are done, and the changes in the scour depth around them are investigated, using a numerical model.
Numerical model and the governing equations
Conservation of mass and momentum in differential forms are the governing equations for fluid and solid phases. These. Equations are solved by numerical methods. Soft-wares such as Flow -3D developed to model different phenomena. This software profits some special techniques which permit to model various physical and numerical conditions for real or experimental models. The general forms of mass and momentum conservation equations in Flow-3D software are given by expressions (1) and (2), respectively: where u, v, w, are velocity components and Ax, Ay and Az are the fractional area opened to flow respectively, in x, y, z directions, VF is the fractional volume opened to flow, ρ is fluid density, G is body acceleration, f is a term of viscous acceleration and b is flow losses in across porous baffle plate or porous media (Flow Science, Inc., 2008).
In Flow-3D software, FAVOR (Fractional Area Volume Obstacle Representation) and VOF (Volume of Fluid) methods are used in simulating. FAVOR method is used for modeling of the solid surfaces, geometries, and volumes. VOF method is used to trace the water surface in a water-air two-phase flow. Expression (3) is proposed for defining the free surface profile.
where F function is the index of volume percentage of the water phase in a cell; it ranges between zero ) for the case in which the cell is full of air ( and one (for the case in which the cell is full of water) (Hirt & Nichols, 1981). Sediment transport is formed by two mechanisms; bed load and suspended load. For modeling the bed load different methods such as Meyer Peter and Müller, Von Rijn are proposed. The suspended load is modeled, using ADE (Advection-diffusion equation) given by expression (4): In this equation, c is the concentration of sediment, U average Reynolds velocity of flow, Ws denotes fall velocity of sediment particles, x, and z represent the dimension along the main direction and vertical direction, respectively. Also is the coefficient of dispersion defined as the ratio of turbulent viscosity to Schmidt number (Flow Science, Inc., 2008).
Turbulent models
Studying the characteristics of turbulent flow is very complex and time-consuming, as in this type of flow; currents with different momentums encounter each other and reduce the fluid kinetic energy. This dissipated energy is converted to heat in a one-way process. All the abovementioned issues should be taken to account when investigating the turbulent flow. Therefore, numerical models are capable of presenting valuable information to solve turbulent problems.
K-Turbulence model
K-ε Model is a two equations model that means it includes two equations to represent the turbulent properties of the flow. K is the variable that represents the turbulent kinetic energy and ε (m2/s3) is the variable that represents the turbulent dissipation. The transport equation,ε for turbulent dissipation, is defined as expression (5): In expression (5)
LES turbulence model
This turbulence model is time dependent and three-dimensional. Also an initial value should be given to the fluctuations or at the inflow boundaries. Applying these is costly but this model presents more accurate results with respect to RNG turbulence model (Hirt & Nichols, 1981).
Evaluation and comparison criteria
The evaluation and comparison of numerical and experimental values were made using three criteria, mean absolute error (MAE) and root mean square error (RMSE) and coefficient of determination ( 2 ) and defined by Equation (6): where O, P and n are the experimental values, values obtained in the numerical model and total number of data.
Experimental model
In order to determine the convenient turbulence model and the mesh sizes for a particular phenomenon, verification tests are usually proposed. For this purpose, the results obtained by the numerical model are compared with those measured in laboratory tests or field data. The experimental set up in this study is a rectangular canal with 14m length, 1.5 m width and walls made of Plexy-glass for performing the experiment on the combination of spur dikes in series with L-shaped, T-shaped, and I-shaped geometries. The canal bed is covered with a layer of homogeneous sediment with a mean diameter of 1mm (D50=1mm) and geometric standard deviation of 1.21 (σ = 1.41) and height of 30cm done by Nayyer et al. (2019). In their experiment the flow rate (Q) and flow depth (y) corresponding to the threshold of sediment motion (U/Ucr=0.95) were 28.5 l/s and 6cm, respectively. Also, the geometric characteristics of the used spur dikes in the research by Nayyer et al. (2019) are shown in Fig.1 in which a=3L, L=Lt, and L/B=0.23. Experimental conditions and results of scouring around different combinations are given in Table 1. In this table ds1, ds2, ds3, and ds,ave are the scour depth in the first, second, and third spur dikes and their average, respectively. As shown in this table, the LTT combination of spur dikes in series exhibits the minimum scour depth in the first, second, and third spur dikes. Therefore, this combination of spur dikes is used for simulation and investigation of creating slots in the web and wing of the spur dikes in the numerical model.
Numerical model
As stated before, Flow-3D software was incorporated for simulation of the experimental model. In Fig.2 a view of the geometry and boundary conditions used in the modeling are shown. The boundary condition in the model includes wall, symmetry and pressure which were used for the bottom (Zmin) and canal walls (Ymax and Ymin), free surface (zmax), inflow boundary (xmin), and output boundary(xmax), respectively. The sediment bed is defined at the bottom of the canal according to the experimental canal with the mean sediment diameter equal to 1mm. Modeling was performed in the software using two flow rates values of 28.5 and 25.65 l/s with 6 cm depth and velocities of 0.32 and 0.29 m/s. For investigating the numerical model and obtaining the appropriate result RNG, K-Ɛ, and LES turbulence models were used. statistical comparison of scour depth for the experimental and numerical results for LTT combination are shown in Table 2. According to the FAVOR method, and in order to increases the simulation accuracy of the numerical model, in the vicinity of sensitive geometries, as shown in Fig. 2 (B), several mesh planes were used. The total number of mesh cells in the longitudinal, lateral and vertical directions is equal to 222, 84 and 30, respectively, so the cell size is variable for each direction. Also, the maximum aspect ratio in all directions is equal to 3. Therefore, for the continuation of simulation and considering the dimensions of the intended meshing, use was made of the K-Ɛ turbulence model. A comparison of bed change between the eroded bed in the experimental model by Nayyer et al. (2019) and the numerical model in this research is shown in Fig. 3.
3.3.Slots dimensions and shapes
As stated before, the goal of this research is the investigation the effect of the slot on the scour depth of spur dikes in series. Therefore, the LTT series of spur dikes was introduced as the optimum combination obtained by Nayyer et al. (2019) and a slot introduced in the first the designed slots also were considered for this combination. The slot shape was considered as a horizontal rectangle in the body of spur dikes with the ratio of as/bs=4 (as is the length, bs is the width and t is the thickness of the slot) and the opening area of 10% of the structure effective area (Chiew, 1992). Also, the position of the horizontal slots was taken close to the bed level (Dorosti et al., 2018). Fig.4 shows the defined geometry and position of the slot in the spur dike body. In cases where the slot is defined in both the web and wing of the spur dike, the opening area is 10% of the structure effective area, too. The intended combinations in the present research were designed considering the slots in the web and wing of the spur dikes, which are given in Table 3. In this table, the position of the slot (s), in the web (W) and wing (Wi) are written at the side of each spur dike and its situation. The change in the position of the pits also was determined according to the obtained results.
Results and Discussion 4.1. A Comparison Between Numerical and Experimental Results
According to the results of section 3.2, all the models in table 1 were simulated for the purpose of comparison. The results of maximum scour depth for the first, second and third spur dikes in comparison to the experimental results are shown in Fig.5. As seen, this numerical model has presented acceptable results with good accuracy. The statistical indices values for these models are equal to R 2 =0.97, RMSE=0.28 and MAE=0.24. The LTT combination of spur dikes in series was modeled during 700 seconds and it was observed that after 500 seconds of simulation, the scour depth reached the equilibrium condition as shown in Fig. 6. As is observed at about 30% of simulation duration the scour depth reached 85% of the equilibrium scour depth.
Slot Effect On Erosion
Due to the complexity associated with investigating the effect of the slot on the bed and also changes in the scour depth around the spur dikes, this effect should be investigated with respect to different aspects. Therefore, the maximum scour depth, sedimentation, and overall erosion of the bed are among the parameters which are analyzed in this research. The first spur dike in all the combinations of Table 1 had the highest scour depth. Therefore, in order to investigate the effect of the slot in the first spur dike, models No. 1 to 8 were simulated to be examined for two different flow rates. Models No. 1 and 5 were without slot, models No. 2 and 6 had a slot in the web of the first spur dike, models No. 3 and 7 had a slot in the wing of the first spur dike and models No. 4 and 8 had slots in the web and wing of the first spur dike. The results showed that the scour depth in the first spur dike is reduced in any slot position. The minimum reduction corresponds to models No. 3 and 7 with the slot in the wing which is about 5% with respect to the state without slot. The maximum reduction corresponds to models No. 2 and 6 with a slot on the web which is about 55% with respect to the state without slot. Models No. 4 and 8 also had about 25% reduction in the scour depth at the position of the first spur dike. The noteworthy point in all these models is that although the scour depth is reduced at the position of the first spur dike, it has increased at the positions of the second and third spur dikes. Therefore, changes in the erosion and sedimentation should be investigated over the entire bed length so that the effect of slot in the first spur dike is determined along the entire bed length. Fig. 7 shows changes in the scour depth and in the ratio of the sedimentation to erosion in models No. 1 to 8. As is seen, in models No. 2 and 6 the mean scour depth of the spur dikes is reduced but the ratio of sedimentation to erosion is reduced, too. The other point is related to models No. 2 and 6 where the maximum scour depth has occurred at the position of the second spur dike which is different with respect to other models. Models No. 3 and 7 also exhibit an increase in the mean scour depth at the position of spur dikes which is due to an increase of the scour depth at the positions of the second and third spur dikes in the combination of the spur dikes. In these models also the ratio of bed sedimentation to erosion is reduced. Ultimately in models No. 4 and 8 it is seen that the mean scour depth has a significant reduction and on the other hand the ratio of sedimentation to erosion had a considerable increase. In fact, models No. 4 and 8 which have slots in the web and wing of the first spur dike exhibit acceptable performance in terms of both reductions in the mean scour depth and sedimentation. Fig. 8 shows the bed level for both of these models. As stated before, the presence of the slot in the first spur dike, in any case, causes a reduction of scour depth in the first spur dike and increase of scouring in the second and third spur dikes. Therefore, in models No. 9 to 14, the slot in the web and wing of the first spur dike is constant and the slot in the spur dikes at the second and third positions is investigated. Considering that scouring increases at the second and third spur dikes, that model would yield the best result. It shows a minimum increase in the scour depth at the spur dikes of the second and third positions, the maximum rate of sedimentation, and minimum rate of erosion over the entire bed length. Table 4 depicts a summary of the obtained results from simulations of models no.9 to 14. As is observed, changes in the bed in all the models are associated with increased sedimentation and reduced erosion of the entire bed length. The maximum sedimentation corresponds to model No. 11 and the minimum one corresponds to model No. 15. In model No. 11, the slot is created in the web and wing of the first and third spur dikes and concerning the middle (second) spur dike it is created on the web. Also in model No. 15, the slot is created in the web and wing of the first spur dike and in the web of the second and third spur dikes. Change in the bed erosion over its entire length is not significant in all the models and there is not much difference between them, but considering the ratio of sedimentation to erosion it is seen that the best ratio belongs to model No. 11. In fact, the height of sedimentation in model No. 11 has a higher value with respect to the scour depth which indicates good sedimentation in this model alongside similar erosion in other models. Considering the maximum scour depth also it is seen that all the models No. 9 to 14 had been associated with a perceptible reduction in the scour depth at the position of the first spur dike. Also in all the cases, the first spur dike had maximum scour depth and no longer performed similar to models No. 2 and 6. The maximum reduction in scour depth corresponds to model No. 12 although model no.11 had a considerable reduction in the maximum scour depth, too. Considering the condition of sedimentation and also decrease in the scour depth, it could be stated that model No. 11 had acceptable and appropriate performance. The bed elevation and flow lines around the series of spur dikes in models No. 1 and 11 are shown in Fig.9. As is seen, creating slots in the body of spur dikes totally changes the flow lines. In model No. 1, the vortex flow is formed between the spur dikes (between the first and second spur dikes, and between the second and third spur dikes) and is completely surrounded. Also, the diverted flow lines at the position of the first spur dike have greater flow interference which is due to high diversion of flow at this location. In this model, the inflow between two spur dikes ultimately exits the entrance section. But in model No. 11 the conditions are different. The vortex flow between the spur dikes in model No. 11 is smaller and is formed in between the spur dike web and downstream wall. Also, the inflows between the two spur dikes travel a different path. Two groups of flows enter the field between the two spur dikes; the flows which enter from the slots and those which enter from the section between the two consequent spur dikes. The outflow also takes place from slots in the body of the spur dike at downstream and the section between the two consequent spur dikes. The other point seen in the flow lines of model No. 11 is that some flows enter the space between the first and second spur dikes and finally exit the third spur dike body, whereas in model No. 1 this type of flow does not exist.
Conclusion
Using spur dikes for controlling the erosion near the river bed, banks and paths of culverts has always been under the focus of attention. Using this structure due to the high rate of erosion around is associated with many problems and continuous research has been done for reducing the erosion and optimizing the structure performance. Using a combination of spur dikes in series is an approach that Nayyer et al. (2019) have investigated and analyzed. They ultimately introduced the LTT combination as the optimum combination of a series of 3 spur dikes to reduce the scour depth. As the utilize of numerical models to anticipate and evaluate erosion around hydraulic structures is essential in various water resource management issues, in the present research, by employing CFD model, applying this optimum combination, the effect of the presence of combined slots within the web and wing of a series of spur dikes was investigated and ultimately the (LS-W-Wi TS-W TS-W-Wi) combination with slots in the web and wing of the first and third spur dike and also slot in the web of the middle (second) spur dike was selected as the best combination for reducing the scour depth and increase of sedimentation. In continuation, the other results are presented.
The Flow-3D numerical model has been successful in modeling and analysis of the flow condition around the spur dike and erodible river. It has been capable of simulating changes in the scour depth in various combinations with high accuracy. The statistical indices values for making a comparison between the experimental and numerical results in models of this research were R 2 =0.97, RMSE=0.28 and MAE=0.24 Use of the slot only in the body of the first spur dike could be significantly effective in reducing the maximum scour depth. With an equal ratio of opening area to the effective area of the structure, where the slot is created in the web of the spur dike, it could reduce the scour depth at the first spur dike up to 55%. This value where the slot is created both in the web and wing of the first spur dike is 25%. In a condition where only the slot exists in the web and wing of the first spur dike, the ratio of sedimentation height to the total bed erosion is about 6.5% higher with respect to the case where there is no slot.
Presence of the slot in the body of the second and third spur dikes also causes a reduction in the scour depth. In the case where the slot is in the body of the second spur dike and in the web and wing of the third spur dike, sedimentation is reduced up to 52% and the ratio of the sedimentation height to the total bed erosion is increased up to 6.8%, also the maximum scour depth reduces up to 20%. The flow condition around the spur dikes with slots is significantly different from the state that there is no slot. The inflow and outflow in the field between the spur dikes in these two states travel different paths. This issue has a considerable effect on the bed morphology.
Presence of the slot in the spur dike structure and its various positions have a complex and significant effect on the form and morphology of the erodible bed. As providing a slot in the spur dikes has positive impacts, therefore there is a need for further investigation and applying different other combinations that could be considered by other researchers. Table 3 Characteristics of the used models in the present study | 7,869.4 | 2021-12-20T00:00:00.000 | [
"Geology"
] |
A Neural Network Controller for Variable-Speed Variable-Pitch Wind Energy Conversion Systems Using Generalized Minimum Entropy Criterion
This paper considers the neural network controller design problem for variable pitch wind energy conversion systems (WECS) with non-Gaussian wind speed disturbances in the stochastic distribution control framework. The approach here is used to directly model the unknown control law based on a fixed neural network (the number of layers and nodes in a neural network is fixed) without the need to construct a separate model for the WECS. In order to characterize the randomness of the WECS, a generalized minimum entropy criterion is established to train connection weights of the neural network. For the train purpose, both kernel density estimation method and sliding window technique are adopted to estimate the PDF of tracking error and entropies. Due to the unknown process dynamics, the gradient of the objective function in a gradient-descent-type algorithm is estimated using an incremental perturbation method. The proposed approach is illustrated on a simulated WECS with non-Gaussian wind speed.
Introduction
With the rapid growth of the global wind industry, wind energy has become one of the most important renewable energy sources [1].Wind energy conversion system (WECS) technology has undergone rapid development in response to the demands for increasing use of renewable energy [2].WECSs present two operating modes according to how the wind turbine is connected to the grid.In the fixed-speed mode, the turbine is directly connected to the grid, fixing the rotational speed to the grid frequency.In the variablespeed mode, an electronic converter is inserted between the generator and the grid, or a doubly fed generator (DFIG) controlled by the rotor circuit is used.Thus, the rotational speed can change independently of the grid frequency.In this paper a variable-speed variable-pitch wind energy conversion system is considered.This combination aims to compensate the limitation of each strategy working independently and may improve the transient response and the overall performance.
Control of WECSs is essentially important in terms of energy generation efficiency, power quality, and installation's life.Nevertheless, due to the nonlinearity, uncertainty, and various disturbances that exist in WECSs, it is a challenging problem for controller design.Various control syntheses such as PI regulator [3,4], optimal control in LQ [5], and LQG form [6] have been developed.These control strategies which use the pitch angle as a control input give acceptable results for rotor speed regulation but showed poor performances in power regulation.In [7], it was shown that the generator torque alone is able to regulate the electrical power in an acceptable way.However, it generates large variations of the rotor speed that are not desirable for the wind turbine structure.Most of the work reported ignores the multivariable nature of WECSs.
Recently, the control of variable-speed variable-pitch WECSs' operation has attracted a lot of attention.A PI controller in the power loop and a self-tuning regulator in the speed loop are proposed in [8].Considering the nonlinear and time-varying characteristics of the WECS, advanced control theory and intelligent control scheme have been developed.In [9], sliding mode control is used to cope with system uncertainty and reduce mechanical efforts and chattering.Model predictive control (MPC) is discussed in [10,11], where the constraints on pitch angle and performance specifications can be handled.A gain scheduled H-infinity controller is proposed in [12].Fuzzy logic based control is an effective approach to address the problem of parameter uncertainties [13].Based on adaptive subspace predictive control (SPC) method [14,15], investigated the wind turbines control problem in the data-driven framework.However, all these control methods have not fully considered the possible random noises involved in the WECS.
The LQG strategy has been shown to be effective in accommodating plant uncertainties and random disturbance in a systematic and straightforward way in energy conversion control for wind generating systems [16][17][18].LQG synthesis design method is based on linear model, and the noises and disturbance are assumed to be Gaussian.However, there are two main problems for WECSs: (1) since it is subjected to nonlinearity and random noises, the accurate model is very complex and even cannot be built; (2) the practical random signal from measurement device and disturbances from wind speed are often of non-Gaussian nature.In this case, the LQG control method may not achieve satisfactory performance.As such, in this study, the entropy of tracking error is employed to characterize the randomness of non-Gaussian WECSs.
Based on the minimum error entropy principle, the stochastic systems with non-Gaussian disturbances can be well controlled [19,20] in the data-driven framework.Since equations governing the system dynamics are unknown, it is very difficult or even impossible to obtain the gradient of the proposed performance function, which is one of the important steps in this method.Therefore, the neural network model is proposed to approximate the unknown nonlinear dynamics firstly, and then the gradient-descenttype control law can be obtained [21].Without assuming or constructing a separate model for the unknown process dynamics [22,23], use a neural network controller to directly regulate stochastic systems and the gradient of the objective function is estimated by using the simultaneous perturbation stochastic approximation approach.Motivated by this idea, a neural network controller is proposed for WECSs, where the non-Gaussianity of wind speed disturbances is taken into full account and the control problem is solved in the stochastic distribution control framework.Since the accurate model of WECS is difficult to be established, the neural network controller is designed based on the measurable input and output data according to the generalized minimum entropy principle.An incremental perturbation method is adopted to estimate the corresponding gradient in a gradientdescent-type algorithm.Simulation results show that the proposed minimum error entropy (MEE) control strategy can effectively reduce the influence of the non-Gaussian disturbances from the wind speed.
System Description and Modeling.
A model of the entire WECS can be structured as several interconnected subsystems as shown in Figure 1.The aerodynamic subsystem describes the transformation of kinetic energy stored in the wind into mechanical power via the wind turbine rotor.The drive train subsystem represents the mechanical parts that transfer the aerodynamic torque on the blades to the generator shaft.The pitch actuator subsystem models the pitch control system that controls the pitch angle of the wind turbine's blades.Finally, the electrical subsystem describes the electric generator, the power electronic converters, and the generator control system.
In general, the generator control system is based on a field-oriented vector control strategy where the machine variables are expressed in a synchronously rotating reference frame.Vector control stems from decoupled flux-current and torque-current control in AC drives.
In Figure 1, the input signals coming from the turbine control system are the generator torque set point * and the desired pitch angle .The measured outputs are assumed to be the generator speed and the generator power .The wind speed V is the disturbance signal affecting the WECS.
Based on the aforementioned works, the state-space representation of the WECS concerned in this study can be written as [24] (1)
Control Problem Description.
Designing an effective control system for the WECS is not an easy task.The system variables must be regulated in the presence of severe fluctuations in the input turbine power caused by erratic variations in the wind speed.Fluctuations in can lead to harmful effects on the system [8].Large variations in the drive train torsional torque can occur, thus reducing the life time of the mechanical parts of the system.Input power fluctuations can result in electric power fluctuations supplied to the grid.This, in turn, can cause voltage flicker problems and a reduction in the power quality.
Based on the analysis above, the main control objectives in the full load regime are to regulate both the generator power and the generator speed at their rated values ,rat and ,rat , respectively.These objectives can be achieved by manipulating the desired pitch angle and/or the generator torque set point * .This can be inferred from (2), where the aerodynamic power extracted from the wind is determined by the power coefficient, (, ).This coefficient can be interpreted as a variable gain controlled by and : where ≜ /V is the tip speed ratio and is the rotor speed of the wind turbine.Thus, in the full load regime, to regulate the power at its rated value, the power coefficient should be reduced by increasing , decreasing , or changing both variables.Consequently, manipulating the pitch angle results in deviations in the power extracted by the wind turbine and, indirectly, induces deviations in the turbine speed via the drive train dynamics.Similarly, the generator torque can affect the turbine speed through the drive train dynamics and can be used for controlling the power extracted by the wind turbine by controlling .
Different from the decentralized control strategy in [25,26], the multivariable stochastic control approach, shown in Figure 2, is adopted here to establish the WECS control purpose.
The control problem here is solved in the stochastic distribution control framework, and the WECS in (1) is firstly discretized as where The control problem for WECS can be expressed as follows: find proper generator torque set point * and the desired pitch angle such that the generator speed and the generator power can track their set points ref and ref
𝑔
as closely as possible in the presence of non-Gaussian wind speed disturbance V.
From the above presentation, even though the model of WECS is established under a lot of assumptions and simplifications, the obtained model equations ( 1)-( 4) are still very complex.However, with the development of technology, the input and output data can be easily measured.Therefore, the controller for WECS is designed in the data-driven framework in this paper.
Neural Network Controller Design
3.1.Formulation of Objective Function.Since the wind speed V is non-Gaussian, the tracking error in ( 4) is probably non-Gaussian.And it is well known that all the randomness information can be characterized by the whole PDF.The main purposes of controller design are twofold: to make the PDF of tracking error follow a narrow and sharp Gaussian-like distribution and to drive the tracking error approaching to zero.Therefore, the chosen objective functions should obey the above two principles.
It is noted that entropy is a general measure of randomness.Minimum entropy of tracking error corresponds to a sharp and narrow PDF; this means that the randomness of the tracking error is minimized.Thus the objective function is where ∈ [ , ] ( = 1, 2) ( and are lower and upper bounds of the tracking error ( ) is the quadratic Renyi's entropy [27] of each tracking error ( = 1, 2), and is the joint entropy of tracking errors 1 and 2 , where (⋅) stands for the PDF of corresponding random variable.
On the other hand, mean value can reflect the magnitude of the tracking error, which calls for another performance function where ( To design the optimal controller for WECS, two objective functions ( 5) and ( 6) should be minimized simultaneously.
And the constrained control energy also should be considered.In this paper, the weighting method is used by forming a linear combination of the objectives: where ( = 1, 2, . . ., 6) are corresponding weights.
Remark 1. Weights in performance index (7) denote the different relative importance of different objectives.The value of the weights usually could be only decided by try-and-error method, based on engineering experiences, repeating simulations, and other information.By parametrically varying the weights in the combined single objective function (7), we could get different optimal control inputs, which are called Pareto optimal solutions.In this paper, repeating simulation approach is used to decide weight values to obtain tradeoff optimal control inputs.
Nonparametric Estimation of Objective Function.
In this section, two nonparametric estimation approaches are proposed to estimate the objective function (7).
According to definitions, the estimation of quadratic Renyi's entropy and mean value can be formulated as Substituting (9a)-(9c) into ( 7), the single objective function then can be obtained.
Remark 2. Kernel density estimation method is verified to be an effective PDF estimation approach.However, samples are generated at each instant , which requires mass memory and results in heavy computing burden.
(2) Sliding Window Technique.To enhance the computing efficiency, an alternative method called sliding window technique is proposed here.
At instant , quadratic Renyi's entropy can be rewritten as Drop the expectation and use the most current sample of tracking error in the PDF to obtain the following stochastic estimate for entropies: where ( = 1, 2) denote the most recent samples of tracking error at instant .Next, the "sliding window" technology is employed to estimate the (joint) PDF of tracking error over the most recent samples { ,− , ,−+1 , . . ., ,−1 }.When < , the needed data can be complemented according to the history data of the system.Then, the estimation of (joint) PDF is given by Thus the stochastic estimate of the (joint) entropy of tracking error at instant becomes Here, the output of the NN will correspond to the value of the control .Associated with the NN producing will be a vector of connection weights ∈ that should be trained.Therefore, the control problem in this paper is equivalent to finding to minimize the performance index (7).And after the optimal has been found, the optimal control would be the output of neural network.
As a result, in theory, the training of can be obtained by minimizing to give However, the gradient-descent-type algorithms are not generally feasible in the model-free setting here.As such, the above algorithm can only be regarded as a guideline for the theoretical inside into the training scheme.Motivated by the method in [21], the following steps should be employed.
(1) Set the current sample time as − 1, and fix (2) According to the fixed neural network used in this paper, the control input −1 can be calculated.
(3) Calculate the performance function as ( 0 −1 ) according to (7).(6) Formulate the required gradient vector in (14): (7) Calculate based on (14).Note that every time when the dataset is generated, the vectors { } need to be updated as well.
Based on the obtained , the optimal control input can be calculated easily.
Remark 3. The estimation of gradient vector ( / )| = −1 in ( 14) can be obtained by using the simultaneous perturbation stochastic approximation approach in [22].And the convergence of this method also can be found in [23].
Simulation Results
In this section, the proposed control method is applied to a certain variable-speed variable-pitch WECS.The model in (1)-( 4) is used for producing the measurements.Model parameters are shown in Table 1.
The simulation is carried out on the basis of the working condition with = 150 rad/s and = 1.5 MW.The PDF of wind speed is given in Figure 3.The sampling period is = 1 s, sliding window width is = 100, and forgetting factor is = 0.0095.The weights in (7) are 1 = 0.1, 2 = 0.1, .The controller is modeled by using a NN with two layers, one of 20 nodes and one of 10 nodes [23].The inputs to the controller include the current and most recent output, the most recent control, and the current set point, yielding a total of eight input nodes.Therefore, the total number of weights to be trained is 412.
The advantage of the proposed method is shown by comparing with PI controllers whose optimal PI parameters are tuned using the MATLAB NCD toolbox.Transfer functions of rotary speed controller and power controller are 1 () = From Figures 4 and 5, it can be seen that, compared with the conventional PI control strategy, the proposed control method can make generator speed and power have smaller fluctuations.Small oscillation of the generator speed can reduce the mechanical load of wind turbines, corresponding to a reliable operation of the wind power system.On the other hand, small fluctuation of the generator power can guarantee more stable supply power and consumers' required power quality.The variation of control inputs is presented in Figure 6.It is clear that the changes of electromagnetic torque and pitch angle are smooth.In Figure 7, the objective function ( 7) is decreasing with time and finally approaching a small value, and this means that the WECS can achieve a satisfying performance under the proposed control.
The shapes of PDFs of generator speed and power in (8a) and (8b) become narrower and sharper along with sampling time using the proposed control method; this indicates that the WECS has a small uncertainty in its closed-loop operation.It also can be verified from Figure 9, in which the PDFs of generator speed and power at several typical instants are shown.
The above simulation results illustrate that the proposed control approach can obtain better performance over the PI controller.
Conclusions
In this paper, a neural network controller design approach in the data-driven framework is proposed for the wind energy conversion system (WECS).The proposed method differs from previous results in minimum entropy control: this method avoids the construction of a system model and focuses directly on regulating the WECS via the construction of a closed-loop control algorithm based on a neural network with fixed structure.Since there are no assumption equations describing the WECS, it is not possible to calculate the gradient of the objective function for use in standard gradientdescent-type search algorithms.Therefore, an incremental perturbation approximation method is proposed to estimate the gradient.The proposed approach is applied to a certain WECS where the control objective is to track the target values for both generator speed and output power as closely as possible.Simulation results show that the proposed control method can achieve good performance.Blade length of the wind turbine : Time constant of the generator system :
Nomenclature
Harvested mechanical power of the wind turbine : Time constant of the pitch system.
Mathematical Problems in EngineeringDenote the set point vector of generator speed and generator power by = [ 1 2 ] = [ ref ref ] 5, = [ 1 2 ] ∈ 2 are system state and output vectors, respectively.=[ 1 2 ] ∈ 2 is the control signal applied to the plant.(⋅)andℎ(⋅) are the system state and output dynamics, respectively.V ∈ is a non-Gaussian bounded random variable with known PDF V ., and then the tracking error 13b) 3.3.Neural Network Controller.Based on the known information = { , −1 , . . ., − ; −1 , −2 , . . ., − ; } ( and are numbers of previous measurements and controls), a neural network with fixed number of layers and nodes is used in this paper to directly model the resulting unknown control law without the need to construct a separate model for the unknown WECS dynamics. | 4,306.4 | 2014-08-12T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Computer Science"
] |
A Study of Stirling Engine Efficiency Combined with Solar Energy
A R T I C L E I N F O A B S T R A C T Article history: Received: 25 December, 2020 Accepted: 27 March, 2021 Online: 04 April, 2021 Fossil fuel can no longer supply the constantly spiking demands of energy around the world, hence the increasing research on renewable energies as an alternative. The Stirling Engine is an external combustion engine, giving us a wide range of heat sources: solar, nuclear. The Stirling engine makes best of use of solar sources in an environmentally friendly way. It has no emissions and live longer as compared to Photovoltaic cells. The Stirling engine can operate at Low Temperature difference, which makes it prominent. In order to study the efficiency of a conversion from thermal energy to work, we need to take into account the energy efficiency, which is a key parameter in Low Temperature Difference Stirling Engine, even if its efficiency is lower than those of high temperature Stirling engine. In this article, we are studying the efficiency of the Stirling engine as a first step using a parabolic mirror to focus the sun's radiation onto the engine. In this article, we are studying the efficiency of the Stirling engine as a first step, by making isothermal and adiabatic analysis of the engine to detail the operation throughout its process, and be able to act on the various input parameters that impact the value of the final yield, and in a second step, using a parabolic mirror to focus the sun's radiation onto the engine.
Introduction
Solar energy is an energy that falls into the category of renewable energies, because it is considered inexhaustible. Technologically, two ways are practiced in the use of direct solar energy; solar thermal energy and Photovoltaics. Regarding the solar thermal, it's a system that uses solar energy to produce heat by heating a fluid at more or less high temperature. We can therefore produce energy, like the case of classical thermal power stations. In this case, we are talking about thermodynamical central power plants. As to the Photovoltaics, it is a system which is composed of photovoltaic cells. It directly converts a part of solar rays to electricity with photovoltaic effect. A solar powered Stirling engine is a type of external combustion engine, which uses the energy from the solar radiation to convert solar energy to mechanical energy. The resulting mechanical power is then used to run a generator or alternator to produce electricity. Initially, Stirling engine was invented by Robert Stirling in the year 1816 [1]. Solar power generation could be accomplished using various methods, such as linear Fresnel systems, Parabolic through Solar tower systems, and most importantly Solar dish systems (Figure 2), which happen to be one of the most intuitive and efficient ways of concentrating solar heat on the receiver that drives the Stirling engine-generator unit. It is applied in several situations. Due to the available sizes of Stirling engines, this method is most useful in small capacity cases that do not exceed tens of kW. Furthermore, we are studying the efficiency of the Stirling engine and comparing it with existent internal combustion engines to see if it is worth using it as an alternative. Furthermore, we are going to study the possibility to combine the Stirling engine with solar energy for a more environmental-friendly solution.
Engine operation
A Stirling engine is a piston engine operating on the general principle of the Stirling cycle. The Stirling cycle and engine were defined in 1989 by the international scientific community as [3]: "A Stirling cycle is defined as a process that occurs in any closed space containing a working fluid in which changes in volume induce cyclical changes in pressure of the fluid and its displacement in the closed space induce changes in cyclic temperatures in the fluid." The Stirling engine offers the possibility of having one of the best efficiencies with less emissions unlike internal combustion engines. Its older models are less efficient and huge, but the current models are more developed, which improves efficiency, as well as the use of any external heat source for very high temperatures [4]. The theoretical Stirling cycle is similar to the Carnot cycle, except that the in th Stirling cycle the isochoric processes replace the adiabatic heating and cooling processes of the Carnot cycle. The Stirling cycle then involves four successive evolutions of an ideal gas between two heat sources that has constant temperatures Tc and Te, which in turn are separated by a perfect exchanger which has an isochoric process. When applying the first principle of thermodynamics, we get the same efficiency as the Carnot cycle [4].
The thermodynamic cycle can be plotted on a PV diagram that represents the variation of the pressure versus the volume. ( Figure 3).
In a theoretical case, this thermodynamic cycle can be split into four reversible processes ( Figure 3).
2→3: (heat transfer from an external source to the working fluid). The cold cylinder piston (the working piston) is at the top of its downstroke, while the hot cylinder piston (the expansion piston) is in the middle of its upstroke; the expansion piston moves down, while the working piston remains stationary. This is the engine time; the hot source supplies the gas with thermal energy, and the descent of the expansion piston drives the crankshaft. On the theoretical indicator diagram, this cycle time corresponds to curve 2-3. As the volume of the gas increases and its temperature is constant, the pressure of the gas in the hot cylinder decreases.
3→4: (heat transfer from the working fluid to the regenerator). The last stroke being completed, the cycle is returned to its initial state, the mechanical coupling between the two pistons is such that the working piston begins to rise, while the expansion piston goes down; during this double movement, the gas being hot, it gives up its heat to the regenerator and the gas cools as it passes from one cylinder to another. As its volume remains constant, its pressure decreases; which is represented by segment 4-3 of the theoretical diagram. The engine has returned to the starting point, the regenerator is ready to absorb heat again, and a new cycle can begin again. 4→1: (heat transfer from the working fluid to the cold source). the ingenious coupling between the pistons allows the expansion piston to be stationary while the working piston descends. The gas is compressed, but its temperature does not increase, because the compression takes place in the cylinder connected to the cold source. Energy is rejected to the cold source and the compression is isothermal; this time is represented by curve 4-1 on the theoretical indicator diagram.
1→2: (heat transfer from regenerator to working fluid) The expansion piston goes up and the working piston goes down, which allows the movement of the gas from the hot side, without changing the volume; segment 1-2 of the theoretical diagram is therefore vertical. Passing through the regenerator, the gas recovers the heat that was stored there and, at the same time, returns this element to its initial temperature. [1] During this cycle, the system releases an amount of energy, which is needed later to heat the fluid, and restart the cycle as a loop.
Robert had the idea to use a regenerator to recover the transferred energy and then use it for heating. Ideally, the curve is elliptic, hence all amounts of energy are recovered ( Figure 4).
We mostly find Stirling Engines in one of the three common configurations which are α, β and γ.
Alpha-type Stirling Engine
The α-type engine is composed of two separated cylinders ( Figure 5). The two cylinders are exposed, respectively, to a hot and a cold temperature source. And to each cylinder, is sealed a "hot" piston and a "cold" piston. contains two separate power pistons in separate cylinders, a "hot" piston and a "cold" piston. It also has a pipe that connects the two cylinders. This pipe is usually filled by a regenerative material in order to enhance the thermal efficiency.
Beta-type Stirling Engine
In contrast with the α-type, β-type engine is composed of one unique cylinder with one piston sealed to it, and a displacer as shown in Figure 6. At the top of the cylinder, is placed a heat source, and a cold source at the bottom. The gas flow through the small clearance between the cylinder wall and the displacer; when it flows towards the hot end of the cylinder, then the expansion process occurs, and when it flows towards the cold end, then the compression process occurs . It is the displacer that allows the gas to move between the cold zone and the hot zone. The system is linked to a flywheel.
Gamma-type Stirling Engine
The γ-type engine is similar to a β-type engine, the main difference is that the cooling chamber is mounted in a separate cylinder as demonstrated in Figure 7, but it is still connected to the same flywheel.
Comparison between the three types
To make a comparison between the 3 types of architectures, the compression rate must be defined: The compression rate is defined here as the ratio of the maximum volume VM by the minimum volume Vm that it will occupy during this same cycle: For given hot and cold source temperatures, and for identical displacements, Alpha engines have higher compression ratios than those of type Beta, which in turn are slightly higher than those of Gamma engines. This has the consequence of being able to extract more power from an Alpha engine because it will run more quickly. The downside is that they require more design and manufacturing rigor.
Advantages
• Quiet operation: In contrast with internal combustion engines, there is no relaxation in the atmosphere. With that absence of gas that will eventually escape, plus the absence of the open-close valves, this engine is quiet and has a reduced mechanical stress.
• High efficiency: Stirling engines have the best efficiency compared to an internal combustion engine, and it could even exceed 40 % as efficiency.
• The multitude of possible "hot springs" and ecological aptitude: Due to its heat supply method this engine can operate from any heat source.
• Reliability and easy maintenance: The technological simplicity of this engine allows engines to be very reliable and require little maintenance.
• The long service life: Due to its simplicity, the life of this engine is, in theory, longer than that of conventional engines. Indeed, it requires less maintenance and its replacement is much faster and less dangerous.
• Reversible operation: The Stirling cycle is reversible, when a Stirling engine is driven by another engine; it becomes a heat pump capable of working in cooling and heating mode.
Disadvantages
• The price: The main drawback of this engine is its manufacturing cost, which is about twice that of a diesel engine. Stirling engines require inlet and outlet heat exchangers, which contain the high temperature working fluid, and must withstand the corrosive effects of the heat source and the atmosphere.
• Lack of flexibility: Quick and efficient variations in power are difficult to achieve with a Stirling engine. This is more suitable for running at constant nominal power. This point is a big handicap for the automotive industry.
• Height and weight: External combustion, which requires heat exchangers at both hot and cold spots, makes the Stirling engine generally bigger in size and heavier than a generic internal combustion engine with the same power output.
Advantages of Stirling engine compared to an internal combustion engine
In comparison with a combustion engine, Stirling engine overtakes it on many levels, for example we can look at fuel flexibility; A Stirling engine does not require a highly refined liquid to operate, it can use a variety of liquid and gases, which makes it more flexible than a Diesel engine that required refined Diesel fuel. And because of the external-combustion process of the Stirling engine, it also burns any given fuel cleaner than an internal-combustion engine. In addition, Stirling engines can be balanced mechanically, making them less noisy by eliminating the mechanical vibration problems. On the other hand, internalcombustion engines have severe noise because of the periodic nature of their combustion and mechanical motion processes. We can enhance them to be more silent, by using mechanical isolation and acoustic design, but this would increase the cost of the engine, and would make it unpractical in some situations. [6].
Applications of the Stirling engine
Besides the academical use of the Stirling Engine, we can find this technology in various daily useful applications. The American Stirling Company offers one of these applications which is the wood stove Stirling fan (Figure 8): A silent fan that does not need electricity to move the heat from a wood stove, and have more heated area in the house instead of having only a restricted heated area near to the wood stove.
Another interesting application of the Stirling engine is the Combined Heat and Power systems (CHP) that can be very useful in businesses such as a commercial laundromat, since it generates electricity and utilizes the waste energy produce heat. There is also a smaller version of CHP systems, called micro CHP that have residential use [7]. The SAAB company [9], which is a Swedish company specialized in building submarines, uses the Stirling engine in their Gotland and Södermanland submarines classes, essentially because the Stirling engine is silent compared to Diesel engines. SAAB claims that the secret to their world's most silent submarine is Stirling engine based submarines do not need to surface and recharge the batteries, using the air-independent propulsion [9].
Enhancement of the Stirling Engine performance
To act on the Stirling engine performance, we are led to optimize the temperature of the cold and hot sources, to obtain an optimal temerature difference. We can also modify the geometry of the engine to keep the losses to an absolute minimum.
The role of the regenerator is to recover the heat from the cooling of the gas to heat it again. It therefore plays a key role in the operation of the engine.
Thus, it seems legitimate to seek to optimize the operation of the regenerator to improve that of the engine.
The MOD II automobile engine [10] that was produced in the 1980's [10] was among the most efficient Stirling engines, it reaches a maximum efficiency of 38.5% [10] , compared to a petrol engine which has a yield of (20-25%).
It was abandoned due to high development costs and fears of not being able to compete with internal combustion engines in terms of reactivity.
To reach a high efficiency in a Stirling engine we are led to use a regenerator; imperfect heat transfer that occurs between the engine and the source may lead to external losses of energy as well as internal losses.
The efficiency of an engine varies with the operating speed due to the different losses interactions 1→2 : In the isochoric heating phase, both the compression piston and the expansion piston moving respectively towards the regenerator and away from the regenerator, simultaneously, keeping volume between the two pistons constant. The working fluid is flows from the compression area to the expansion area, making its temperature gradually increasing from Tmin to Tmax. This gradual increase of the fluid temeprature while it pass through the regenerator creates a gradual increase of pressure. There is no work done and there is an increase in the entropy and the internal energy of the working fluid.
The volume remains constant throughout this process. With referring to the energy ratio defined by [11].
The provided heat Q will then be equal to: Since there is no work done.
Change of entropy = ( 2 − 1 ) = ln ( 1 ) 2→3: The temperature is constant during the isothermal relaxation, while the volume increases as well as the entropy. There is no change of energy. The provided heat will then be equal to 2 Ln ( ) ) and the released heat will be equal to 4 Ln( ).
We can write the output as follows: η = n( 2 Ln( ) − 4 Ln( )) n 2 Ln( ) (17) We then find the ideal Carnot efficiency that corresponds to the possible maximum theoretical efficiency in a two heat sources engine.
Isothermal analysis
According [12], we can obtain an efficient heat transfer by assuming that the total mass of the working gas inside the engine remains constant: And considering that the pression in all of the engine is constant, and: Then we can numerically define the assumption that the temperature profile is linear, by the equation of a straight line The sum of mass of the gas is: With ρ the density, = Where dx is the derived volume for a constant free flow area, and By integrating (25), we obtain: The definition of the regenerator effective average temperature (T), in terms of ideal gas equation: Then, by comparing the equations (26) and (27) we obtain: Thus, the pression of the cycle can be written as: The total work of the cycle is the sum of the work of compression and expansion.
Isothermal modeling
To study the heat transfer, it is primordial to consider the energetic equation of ideal gas.
In [14] the author's has modeled a generalized workspace cell, and as shown in Figure 12, it can be reduced to a workspace cell or a heat exchanger cell.
The enthalpy transfer out the cell (resp. into the cell) occurs with a mass flow rate 0 at temperature 0 (resp. a mass flow rate at temperetaure ). The derivative operator is noted D and D m refers to the mass derivative (d m / d t ).
Adiabatic modeling
The principle of this method uses a numerical resolution approach, it divides the volume of the engine into a certain number of control volumes, then it applies conversion equations to the momentum of the gas associated with its equation of state. The properties of the gas are considered uniform in each control volume. The interaction between volumes is taken into account when solving differential equations which is done simultaneously.
The complexity of the system to be solved and the calculation time depend on the assumptions used.
The adiabatic model is a model developed by [12] who used the work of [11]. In this model, the Stirling machine is divided into five control volumes, including two variables compression and expansion. The volumes of the cold and hot exchangers and of the regenerator are considered constant. The assumptions used are as follows: • Thermodynamic compression and expansion transformations are adiabatic, • The gas pressure is uniform throughout the machine, • He movement of the piston and of the displacer is sinusoidal, • The working fluid follows the ideal gas law, • The machine rotation speed is constant The energy equation is applied to a generalized whole can be written as: The state equation is given by PV= nRT Therefore: By taking the logarithm on both sides of the equation and differentiating them, we get a differential form of the equation of state: Since the volume and the temperature are constant, the differential equation is shortened to: = + + ( + + ℎ )= 0 (41) We apply the energy equation the equation that we obtain: However, the compression space is adiabatic, =0, plus the realised D = pDVc, for continuity reasons, the accumulation rate of gaz is equal to the mass difference of the gas given by . We consider the continuity equation, given by: We apply successively the equation above to each of the cells as shown in the figure: The total work done by the engine is the algebraic sum of the work done by the compression and expansion areas: Dans les espaces d'échangeur de chaleur, aucun travail n'est effectué, car les volumes respectifs sont constants.
The actual Stirling cycle engine is subject to heat transfer, internal heat losses and mechanical friction losses, to estimate hese losses [15] defined certain engine temperature ratios.
The ratio of the lower operating temperature to the upper operating temperature of the engine is defined by: £ = ; The ratio of the cooler temperature to the heater temperature is defined as: € = ℎ ; The ratio of the expansion area temperature to the heater temperature is noted as: ₴ = ℎ . with α= and β= β= , heat transfert coefficients.
Therefore, the average energy of the cycle is expressed by: The thermal efficiency must not exceed the efficiency of the Carnot cycle.
Simulation and results
Now once we have characterized the engine geometrically, were all set to implement the isothermal model. The flow diagram in Figure 13 illustrates the steps of the model; we first use as an entry parameter the total mass of the initial engine m and an effective average pressure p. The algorithm then, for each value of θ, calculates the total average pressure as a function of the initialized mass m, and compares it with the reference pressure we defined at first. The algorithm iterate this comparison until we have convergence, i.e. the difference between the average pressure and the reference pressure is lower than the error.
The indicated work of the cycle is the obtained. W = 1.9822 J In Figure 14 we observe that the curve is closed and cyclical, and in each cycle, the work is obtained by calculating the area inside the curve, while the area below the curve is the heat absorbed during that cycle. The thermodynamic efficiency is the ratio of the work on the amount of heat absorbed
Conclusion
The article studies the efficiency and the uses of the Stirling engine, and relies on the process of transforming renewable thermal energy into mechanical work. Renewable thermal energy is available at low cost for the long term and enviroment friendly. Such engine relying on this process is certainly interesting, even if it has a low thermal efficiency.
We have seen how the Stirling engine's perks can be used in different situations; it can be more advantageous than internalcombustion engine. Especially, at low maximum temperature and low temperature difference, the Stirling engine has virtually no substitute.
We can enhance the thermal efficiency by manipulating the shape of the displacer, the heat exchanger, the crank angle or use lf other working fluids. Stirling engine research and enhancements stimulate green education and can help reduce global warming and emissions.
This article is a first part of an modeling and simulation with MATLAB SIMULINK which will have as objective the improvement of the efficiency of the engine based on the difference of temperature, all while working on a concrete model. | 5,315.8 | 2021-04-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Covert Symmetry Breaking
Reduction from a higher-dimensional to a lower-dimensional field theory can display special features when the zero-level ground state has nontrivial dependence on the reduction coordinates. In particular, a delayed `covert' form of spontaneous symmetry breaking can occur, revealing itself only at fourth order in the lower-dimensional effective field theory action. This phenomenon is explored in a simple model of $(d+1)$-dimensional scalar QED with one dimension restricted to an interval with Dirichlet/Robin boundary conditions on opposing ends. This produces an effective $d$-dimensional theory with Maxwellian dynamics at the free theory level, but with unusual symmetry breaking appearing in the quartic vector-scalar interaction terms. This simple model is chosen to illuminate the mechanism of effects which are also noted in gravitational braneworld scenarios.
Introduction
This paper is about a surreptitious kind of local symmetry breaking in a lower dimensional effective field theory developed from an initial variational principle formulation of a gauge-invariant theory in a higher dimension. Surreptitious, because the symmetry breaking waits two orders in an expansion of the action in fields before it reveals itself. This phenomenon derives from a groundstate solution with nontrivial dependence on the spacetime coordinates transverse to the lower dimensions, unprotected by Killing symmetries. Given the hidden onset of such breaking at higher order in an expansion, we choose to call this 'covert' symmetry breaking.
The analysis of theories with local gauge symmetries via the constraints required for consistent coupling to conserved currents has a long history in classical and quantum field theory. This has been a persistent topic in the study of gravitational theories when studied from the viewpoint of local gauge theories, with frequent comparison to the structure of Yang-Mills theories and gauge-theory couplings to symmetric matter systems. Viewing gravity as a self-coupled spin-two gauge theory with an expansion in powers of the square root of Newton's constant dates back at least to the classic ADM papers [1], Feynman's 1962-63 lectures on gravitation [2] and in particular to papers by Weinberg [3] and Deser [4]. This approach has also been central to the derivation of supergravity theories [5,6]. The general lesson that one might wish to draw from such investigations is that once a massless field of spin one or higher is coupled consistently to symmetry currents formed from other fields, or from itself, the coupling process must thereafter continue on in lock-step fashion order-by-order in an expansion in the corresponding coupling constant. Of course, exceptions to this general pattern can certainly exist if one includes also higher-order or higher-derivative seeds of new invariants such as tr(F ρσ ∇ µ ∇ µ F ρσ ) in Yang-Mills theory, and so on.
A related question is the nature of the effective theory obtained in a lower dimension in a Kaluza-Klein reduction scenario, in which modes of a higher-dimensional theory are expanded into modes of a lower dimensional theory, forming mode-towers of increasing masses. In an expansion permitting a consistent truncation, the field equations of the higher modes may be satisfied when those modes are set to zero, yielding a dimensionally reduced theory of the lowest "zero-level" modes alone. However, consistent-truncation reductions involve very particular structurese.g.
based upon truncation to the invariant sector under some symmetry, or more general structures such as the S 7 reduction of D = 11 supergravity [7]. Indeed, the S 7 reduction of D = 11 supergravity falls into a somewhat different category, since retention of the full zero-level N = 8, D = 4 gauged supergravity supermultiplet involves a reduction ansatz in which some dependence on the transverse-space coordinates is retained (angular coordinates on S 7 in that case). The question of consistency of that reduction has an involved history [8][9][10][11][12], but one important aspect of it is the existence of SO (8) Killing vectors in the reduction space, coupled with unbroken gauged N = 8 supersymmetry.
Some reductions which do not correspond to consistent truncations to lower dimensional theories are of considerable physical importance, notably reductions on compact Calabi-Yau spaces, which have no Killing symmetries. Such reductions are still in a sense "trivial", however, in that they involve reductions in which all dependence on the transverse-space coordinates is suppressed.
Nonetheless, such Kaluza-Klein reductions are in fact technically inconsistent: the equations of motion of the non-zero-level modes can be sourced by the zero-level modes, leading to an inconsistency in setting those higher modes to zero. A proper procedure in such cases is to integrate out the higher modes instead of truncating them, and to incorporate the resulting corrections into the lower dimensional effective theory of zero-level modes. An intermediate level of consistency in some such effective theory derivations can be identified, however: one where the effects of integrating out the heavy non-zero-level modes produce only higher-derivative corrections to the effective theory of the zero-level modes. In such a case, the structure of the effective theory when approximated by retaining only a maximum of two spacetime derivatives (with higher-derivative terms suppressed by appropriate powers of the compactification-space volume) can in some cases prove to remain unchanged with respect to a standard Kaluza-Klein reduction which simply suppresses the transverse-space coordinate dependence. Examples of such intermediate consistency to at most second-order in derivatives are the Calabi-Yau reductions of N = 2, D = 10 supergravity theories [13].
In this paper, we consider a situation without any of the above handholds of full or second-orderin-derivatives consistency. The question we address here is motivated by an observation that one can make in the massless effective theory of supergravity localised on a braneworld submanifold in D = 11 supergravity [14], where the transverse space has an H(2, 2) hyperbolic noncompact structure [15]. This hyperbolic transverse-space structure can be used for dimensional reduction in a standard Kaluza-Klein fashion with fields independent of the transverse coordinates, but, owing to the the noncompact transverse structure, the resulting lower dimensional Newton constant vanishes.
There is, however, an alternate zero-eigenvalue normalisable transverse wavefunction which can be used successfully to localise the theory in the lower dimension. Localisation to the lower dimension in that case arises because there is a mass gap between the zero-level massless fields and the massive fields which, owing to the transverse space's noncompactness, form a continuum in mass starting at the edge of the gap. The transverse-space structure of Reference [14] has the additional advantage that the corresponding Sturm-Liouville problem is integrable when considered as a Schrödinger equation, with a potential of Pöschl-Teller type. This opens the way to analysis of the lower-dimensional effective braneworld theory's field equations beyond linearised order, since integrals over products of the zero-mode transverse wavefunction can be done explicitly. At the quadratic order in the action, such integrals give finite normalisation factors. At the trilinear order they give a value to the effective theory's expansion constant (i.e. the square root of Newton's constant)finite in that case owing to convergence of the relevant integrals.
The kind of puzzle which we wish to explore here arises at the very next order: cubic in the field equations, or quartic in the action. At this order, the interaction coefficient expected from the two preceding orders turns out not to have the value expected from the square of the trilinear-order expansion constant, although it is explicitly calculable and finite. This poses our key question: what happened to the gauge and diffeomorphism symmetries expected from the linearised theory's massless character and the anticipated lock-step nature of the expansion? Such problems have not heretofore been widely studied, perhaps owing to the general technical inconsistency of the reduction problem. 1 In order to confront this phenomenon in a simpler case than the hyperbolic transverse-space braneworld supergravity setting, we work here with a simpler setup: just Maxwell theory coupled to a complex scalar field and a one-dimensional transverse space which is a z ∈ I = [0, 1] line element. In order to provoke a covert symmetry-breaking structure in the effective theory one dimension lower, we impose, however, a non-standard set of boundary conditions on the fields.
For the Maxwell vector field, we pick standard Dirichlet boundary conditions at the z = 0 end of the interval I, but Robin boundary conditions (∂ z − 1)A µ = 0 at the z = 1 end. This causes the zero-mode transverse wavefunction to have non-trivial dependence on the transverse coordinate z, similarly to the dependence of the braneworld system of Reference [14] on a transverse radial coordinate.
The paper is organised as follows. We work in a general higher spacetime of dimension d + 1.
In Section 2 we accordingly first consider pure Maxwell theory in (d + 1)-dimensional spacetime, but with one 'transverse' dimension restricted to an interval with mixed Dirichlet/Robin boundary conditions at the two ends. When expanded in terms of d-dimensional fields, these boundary conditions give rise to a zero-level effective theory with a transverse wavefunction linear in the d+first dimension. In this free theory with linear field equations, however, the dynamics of the zerolevel theory remains identical to that of Maxwell theory, just with a preselection of Lorenz gauge. In Section 3, the discussion is then extended to an interacting (d+1)-dimensional model of scalar QED 1 That the key problem starts at fourth order in expansion of the action and is unlikely to be resolved by field redefinitions has recently been highlighted in [16]. The integrals of general products of the hyperbolic transverse-space wavefunction were given in Reference [14], and the unanticipated values of the resulting effective-theory expansion coefficients starting at fourth order were commented upon in [17].
with the same interval and boundary conditions. The model allows for explicit evaluation of all the relevant integrals over the transverse dimension in evaluating the zero-level d-dimensional effective theory. It is here that we encounter the phenomenon of covert symmetry breaking. At bilinear and trilinear orders in the action, nothing untoward happens -the trilinear level determines the effective coupling constant e eff for vector-scalar interactions. The symmetry breaking occurs at the fourth order, however: the anticipated e 2 eff coefficient for vector-scalar interactions does not occur with the right coefficient. The explanation of this phenomenon lies in the surreptitious behaviour of a nonlinearly-transforming Stueckelberg field which makes its first impact only at this level.
The paper ends with a Conclusion and Outlook section in which extensions of the study of this phenomenon are considered. In the Appendices, we present some details of the calculations.
Maxwell on an Interval
In this section, we shall study dimensional reduction of Maxwell theory on an interval, where the worldvolume components of the gauge field have a non-constant zero mode. Such a system arises from choosing non-standard boundary conditions. For these conditions to be incorporated into Maxwell theory consistently, the usual action needs to be augmented by a boundary term to render the variational problem well-posed. Interestingly, the variational problem only requires boundary information on the worldvolume components of the gauge field. This leads to a bifurcation of the behaviour of the worldvolume and transverse components of the gauge field on the boundary.
To obtain a lower-dimensional theory on the Minkowski worldvolume, we substitute the generalised Fourier expansions for the components of the gauge field into both the higher-dimensional equations of motion and the higher-dimensional action. For standard S 1 reductions, it is known that both procedures yield the same theory. In our case, we find the same happens, but the commutativity of these procedures depends, crucially, on the addition of the boundary term in the higher-dimensional action. In other words, given that the higher-dimensional action principle is well-posed, we obtain the following commutativity diagram for higher and lower dimensional presentations: Since Maxwell theory is a free theory, the truncation of the lower-dimensional theory to the zero mode sector is consistent. Going back to the standard S 1 reductions, we recall that the zero mode sector of Maxwell theory describes a free, massless gauge field together with a massless scalar which is decoupled from the gauge field, whereas the higher modes describe massive gauge fields with masses arising from coupling to corresponding Stueckelberg scalars. In our case, we find that the theory describing the higher modes agrees with the usual S 1 results, but that the zero-level sector is markedly different. We will show that this sector describes a massless gauge field with an accompanying Stueckelberg scalar, which does not, however, give rise to a mass, as well as with another scalar that acts as a Lagrange multiplier imposing a Lorenz gauge condition. Onshell, this noninteracting lower-dimensional theory describes a massless photon, but it possesses one propagating degree of freedom fewer than the zero-level sector of a standard S 1 -reduced Maxwell theory: neither the Stueckelberg scalar nor the second scalar contribute a physical degree of freedom.
The appearance of the Stueckelberg field in the zero-level sector is a direct consequence of the nonconstant transverse space zero mode chosen for the worldvolume components of the gauge field. Its presence also indicates that the U(1) symmetry associated to the zero-level sector of the theory has become non-linearly realised.
Higher-Dimensional Equations and Boundary Conditions
Consider Maxwell theory on a background M d+1 = M 1,d−1 × I, where I = [0, 1]. The metric on M d+1 will be taken to be where x µ are the coordinates on M 1,d−1 , and z is the coordinate on the interval I. Consider the following modification of the usual Maxwell theory given by the action for any Λ = Λ(x, z).
The variation of (2.1.2) after integrating by parts on the Minkowski boundary at infinity, where the fields A µ and A z and their associated derivatives are assumed to vanish, is given by From this, we see that the action is extremised given imposition of the Maxwell equations of motion It is precisely due to the boundary term in (2.1.2) that the Robin condition for the field A µ can be incorporated into a well-posed variational problem. Gauge invariance of this system requires the boundary conditions on A µ to be gauge invariant. This requirement leads to the following restrictions on the form of valid gauge parameters: where c 1 and c 2 are constants. Our main interest will lie in the case where c 1 = c 2 = 0.
Considering only field configurations A µ that obey the Dirichlet/Robin boundary conditions (2.1.7), the action (2.1.2) is also invariant under the following transformation where d Γ = 0 and ∂ 2 z Γ = 0. This is separate from the U(1) transformations, and will be called the harmonic symmetry. The boundary conditions on A µ are only invariant under this transformation Again, we will mostly be interested in the case c 3 = c 4 = 0.
Given that A µ satisfies the Dirichlet/Robin boundary conditions, it can be expressed as a linear combination of a complete set of functions satisfying the same boundary conditions. Such a set of functions can be obtained by solving a Sturm-Liouville (SL) eigenvalue problem. From (2.1.5), the natural choice for the self-adjoint SL operator is ∂ 2 z , and the corresponding SL eigenvalue problem is where the primes indicate z derivatives. The solutions to this are where tan ω i = ω i for ω i > 0, and n i = √ 2 csc ω i are normalisation factors. These eigenfunctions are orthonormal with respect to the L 2 (I) inner product.
With these eigenfunctions, we can write Unlike A µ , the behaviour of A z on the boundaries must be learned from the equations of motion, as the only term containing δA z on the boundary in the variation of the action vanishes when the equations of motion are satisfied. By substituting (2.1.13) into (2.1.5), we have where ω 0 = 0. This suggests that ∂ z A z lies within the span of {ξ i (z)}, so . Integrating this expression, and noting that for i > 0 the antiderivative of ξ i (z) is proportional to its derivative, we have where ζ(z) = √ 3z 2 /2 is such that ζ ′ (z) = ξ 0 (z), and g (0) (x) takes the role of an integration constant for the transverse wave equation.
The set of functions {ζ(z), ξ ′ i (z)} is linearly independent but not L 2 (I) orthonormal. The second claim is easily seen by performing the requisite integrals, and to prove the first, consider the expression for constants c and f i . Taking the ∂ z derivative of this, we find that
Lower-Dimensional Equations and Gauge Invariance
To obtain the equations of motion for the component fields a , and g (i) (x) given the equations of motion for A µ and A z , we substitute their previously derived expansions into (2.1.5) and (2.1.6). This gives By linear independence of {ξ i≥0 (z)}, (2.2.1) implies the following set of lower-dimensional equations from which we observe that a So far, we have been working at the level of the equations of motion, but we can ask whether the same lower-dimensional equations can equivalently be obtained by inserting the expansions of A µ and A z directly into the action. Being careful to include both S Max and S BT , the lower-dimensional action is given by µ . This yields the same equations of motion, (2.2.3)-(2.2.6), as those obtained via the higher-dimensional equations of motion, and so the dimensional reduction square diagram Figure 1 commutes. This commutativity depends crucially on the inclusion of the boundary term in the original action. From the higher-dimensional perspective, it is this term that ensures that the variational principle is well-posed. From the lower-dimensional perspective, it is this term that ensures the decoupling of the massive sectors from the massless sector.
At this point, it is useful to consider the gauge transformations of the lower-dimensional component fields. Recall that the U(1) gauge parameter Λ must obey the same boundary conditions as A µ , and so it can be written as a linear combination of {ξ i (z)} with The harmonic symmetry parameter Γ also obeys the same boundary conditions as A µ with the added requirement that ∂ 2 z Γ = 0, so where d γ (0) = 0. The U(1) transformations of A µ and A z in terms of the component fields are Similarly, only a (0) µ participates in the harmonic symmetry transformation of A µ , with
2.11)
From these U(1) transformations, we observe that g (i) (x) is a Stueckelberg field associated to The appearance of Stueckelberg fields is not new in dimensional reductions, but what is rather non-standard here is that there is also a Stueckelberg field accompanying the massless vector a (0) µ . To understand this more, we need to analyse the lower-dimensional equations of motion. Since the massive sectors decouple from the massless sector, the analysis will be done in two parts.
Massive Sectors:
The massive sectors are decoupled from each other in the noninteracting theory, and each sector is described by an action Here, a (i) µ is a massive spin-1 field with mass ω 2 i , and g (i) is its associated Stueckelberg field. The number of physical degrees of freedom is d − 1.
Massless Sector:
The zero-level massless sector is described by the action where for brevity, the superscript (0) has been removed. To diagonalise the scalar kinetic terms, consider the field redefinition The positive sign in the kinetic term of ϕ 2 appears to suggests that it is a ghost. It seems odd that the lower-dimensional theory could contain a ghost, since the higher-dimensional Maxwell theory is ghost-free. However, (2.2.15) tells us that one of ϕ 1 and ϕ 2 is pure gauge under the U (1) symmetry, so we can always choose the gauge where ϕ 2 = 0, meaning that the theory is ghost-free.
To see this more clearly, consider a further field redefinition These transform under U(1) as Choosing the gauge Ψ 2 = 0 and integrating by parts, the action becomes The scalar Ψ 1 is non-dynamical and acts as a Lagrange multiplier imposing the Lorenz gauge condition ∂ µ a µ = 0. Although there is no residual U (1)
Orthonormality and Interactions
Up to this point, our work has been centred around two expansion bases: {ξ i (z)} and {ζ(z), ξ ′ i (z)}. The first basis is L 2 (I) orthonormal, as guaranteed by the Sturm-Liouville theorem, but the second is not. The lack of orthonormality in the second basis did not present a problem so far because the lower-dimensional equations were obtained from the higher-dimensional ones via linear independence alone. However, when interactions are added, the higher-dimensional equations are no longer linear. In this case, we are required to expand such terms into our chosen bases.
In anticipation of interactions, consider using an L 2 (I) orthonormal basis {ψ α (z)} instead of {ζ(z), ξ ′ i (z)} for our noninteracting Maxwell example. For brevity, summations over the basis labels will be suppressed. The functions ψ α (z) can be obtained from ζ(z) and ξ ′ i (z) by the Gram-Schmidt procedure, and we can write This shows that from a lower-dimensional perspective, the difference between using the {ζ(z), ξ ′ i (z)} basis and the {ψ α (z)} basis is a set of algebraic field redefinitions {h(x), g (i) (x)} ↔ {χ α (x)}. It is now crucial that substituting this new expansion into the higher-dimensional equations of motion and action yields the same lower-dimensional equations, since algebraic field redefinitions do not change the physics. Since this only affects the A z sector, we only need to check the A z equation.
At the level of the higher-dimensional equations, substituting (2.3.2) into (2.1.6) gives whilst the higher-dimensional action becomes . This must be equal to (2.2.7), which allows us to derive the following properties of the coefficients b α and c i;α : The equation of motion for χ α obtained from this action is where
Scalar QED on an Interval
Having seen how to dimensionally reduce Maxwell theory on an interval with a non-constant zero mode, the natural progression is to see how this can be done for an interacting gauge field. As such, we now consider the above (d + 1) dimensional Maxwell system coupled to a complex scalar "matter" field, i.e. scalar QED on M 1,d−1 × I, with the gauge field obeying the above boundary conditions (2.1.7). The boundary conditions on the complex matter scalar will be chosen to be Dirichlet/Dirichlet, as this is convenient for gauge invariance. As in the previous section, this requires augmenting the usual scalar QED action by a boundary term to ensure that the variational problem is well-posed.
Unlike pure Maxwell theory, the interactions in scalar QED will in general couple zero modes to higher modes, so truncating to the level zero sector is now generally inconsistent. We find, in our case, that the source of this inconsistency is the non-constant zero mode. Our interest is in deriving the gauge invariant effective theory describing the zero-level sector. This is obtained by integrating out all fields whose mass is greater than or equal to the mass ω 1 of the least massive gauge field.
A common impression might be that the integrating-out procedure of such modes leads only to higher-derivative corrections. However, we will show that this is not the case for our system. The lowest lying mode for the complex scalar is also massive, but it is lighter than the aforementioned cutoff, so it still constitutes part of the lowest-level lower-dimensional effective theory.
Our effective theory exhibits two novel features that are not present in standard reductions of scalar QED. In the previous section, we saw that the U(1) gauge symmetry associated to the zeromode gauge field is non-linearly realised due to the presence of a Stueckelberg field. This is also true in the effective theory. Furthermore, we will find that the naïvely anticipated relation between the coupling constants of the cubic and quartic interactions between the zero mode gauge field and the complex scalar is not obeyed. We will show that this seemingly covert symmetry breaking, due to the mismatch between the cubic and quartic couplings, is explained by the presence of the Stueckelberg field. Consequently, the unusual quartic coupling and the non-linear realisation of the gauge symmetry go hand-in-hand to create a nonetheless gauge invariant effective theory.
Interacting Higher-Dimensional Equations and Boundary Conditions
We now turn to the effect of coupling our Maxwell system (2.1.2) to matter, which we shall take to be a complex scalar field Φ charged under the U(1) symmetry. Once again, we shall consider our theory on M 1,d−1 × [0, 1], and we shall take the following boundary conditions for our fields: The action governing the dynamics of our theory is with e the charge of the complex matter scalar. This action is invariant under the following gauge transformations: In order for the boundary conditions in (3.1.1) to be gauge invariant, we require Λ to obey (2.1.8).
The action is extremised given the scalar QED equations of motion subject to the boundary conditions (3.1.1).
Interacting Lower-Dimensional Theory
As in the previous section, the expansions for A µ and A z are
2.1)
For the complex matter scalar, we introduce another complete set of functions, {θ n (z) = √ 2 sin(m n z)} with n ∈ {1, 2, . . . } and m n = nπ, which satisfy Dirichlet/Dirichlet boundary conditions. Using these, the scalar field is expanded as The complex scalars φ (n) transform under the U(1) gauge symmetry non-diagonally with
2.3)
where the matrix I i is defined as
2.4)
We can now substitute the expansions of A µ , A z , and Φ into the higher-dimensional equations of motion or into the higher-dimensional action to obtain a lower-dimensional theory. It is a straightforward albeit long calculation to show that both procedures give the same result, and so the Figure 1 dimensional reduction square once again commutes. The route involving substituting the expansions into the higher-dimensional equations is a bit subtle, and involves projecting the nonlinear interaction terms into the relevant bases. For example, in (3.1.4), we notice that the terms Φ∂ µ Φ and ΦΦA µ obey Dirichlet/Robin conditions, and so can be written as linear combinations of the {ξ i (z)} basis. In particular, we have
2.5)
where summations over the index labels are suppressed, and We will refer the reader to Appendix B for a full treatment of the higher-dimensional equations of motion.
To present the lower-dimensional action in a recognisable form, we define the covariant derivative Then, defining the inner product (u, v) = u (n) v (n) over the space of complex scalars, and defining the matrices J, K, and L i with components
2.9)
the lower-dimensional action becomes where ω 2 0 = 0, and W = J − iehK − ieg (i) L i . The term W φ transforms covariantly under the U(1) transformations given in (2.2.10) and (3.2 . This is expected, as it is just the lower-dimensional analogue of the higher-dimensional D z Φ term, which by definition transforms covariantly under U(1) transformations. We also note that the lowest-order term in the scalar potential (W φ, W φ) is (Jφ, Jφ) = m 2 n φ (n) φ (n) , which means that the lowest lying scalar φ (1) is massive with mass m 1 = π.
An Unusual Coefficient
The lower-dimensional action (3.2.10) containing the modes a (i) µ , h, g (i) , and φ (n) is simply a rewriting of the higher-dimensional action (3.1.2) in a particular choice of bases. Our goal is now to build a gauge invariant effective theory from the lower-dimensional action containing only a (0) , h, g (0) and φ (1) after integrating out the modes above level zero. 4 We shall show that this effective theory realises gauge invariance in a non-standard manner, notably the usual relationship between the cubic and quartic coupling constants in scalar QED is not present. In order to demonstrate this, we need to perform a set of field redefinitions on φ (n) to obtain a set of fields ϕ (n) that transform canonically under the U(1) symmetries. 5 From the covariant derivative operator (3.2.7), we observe that the effective coupling of φ (n) to each a (i) µ is eI nn i , with no sum over n. This motivates the following set of field redefinitions These transform under the U(1) symmetries as Note that exp ieg (i) I nn i is a phase and not a matrix. The matrix X nm is unitary, so the mass of ϕ (n) is m 2 n . This field redefinition can be interpreted as a two-step process, each of which relies on the existence of the Stueckelberg fields, especially the zero-mode Stueckelberg, g (0) . Since the Stueckelberg fields transform inhomogeneously by gauge parameters, we can use them to nullify or create any gauge transformation. In the case of (3.3.1), we first define a set of non-transforming scalars Then, from this, we use the Stueckelberg fields to write down the canonically transforming scalars in (3.3.1).
The stage is now set for us to write down an effective theory of a (0) µ , h, g (0) , and ϕ (1) , but before that, let's look at the portion of the theory that contains only the interactions between a (0) µ and ϕ (1) . These terms are given by As ϕ (1) transforms canonically under the U(1) symmetry associated with a (0) µ , we might expect this to look like a standard scalar QED coupling. However, in scalar QED, the quartic coupling 4 The cutoff scale is Λ 2 = ω 2 1 , noting that ω 2 1 > m 2 1 . 5 A discussion of the effective theory in the original variables is given in Appendix C.
constant is equal to the square of the cubic coupling constant. This is not the case here, since I 11 00 = (I 11 0 ) 2 . Since the full theory, given in (3.2.10), is gauge invariant, the remedy to this unusual coefficient problem clearly lies in the modes that we have neglected. As such, we might assume that integrating out the massive vectors and heavier scalars will modify the coupling constants in (3.3.4) such that the usual scalar QED structure reappears. However, this is not what happens, as we will see in the next subsection.
Integrating Out
To integrate out the heavy modes in this theory, we will work with the assumption that the action of a massive propagator ( d − M 2 ) −1 acting on a current J can be approximated to be Since our immediate goal is to investigate whether integrating out the massive vectors and matter scalars modifies the coefficients in (3.3.4), it is sufficient to consider only those terms in their equations of motion containing themselves, the fields a (0) µ , and ϕ (1) , a maximum of one derivative, and contributing to a cubic and a quartic interaction. Taking this into account, the relevant parts of the theory are From this, we find that the heavy fields are given by In effect, by expanding scalar QED in modes of a lower-dimensional theory, we have obtained an effective theory of a complex matter scalar coupled to a gauge field where the presence of Stueckelberg fields at all levels, including level zero, plays a crucial role in establishing gauge invariance. It is also interesting to note that, contrary to a variety of examples in the literature, integrating out the massive fields here does not solely produce higher-derivative corrections, but contributes as well to achieving gauge invariance in the lower-dimensional effective theory. For instance, the mass terms m 2 n ϕ (n) ϕ (n) produces a sixth-order, zero-derivative correction of the form e 4 (a (0) µ a (0)µ ) 2 |ϕ (1) | 2 /6.
The Fourth-Order, Two-Derivative Effective Theory
We now wish to make a full presentation of the lower dimensional effective theory after putting the heavy modes on-shell. The easiest method for this calculation is to perform the integrating out procedure in the non-transforming variables given in (3.3.3), then transform back into the canonically transforming variables. In the non-transforming variables, the lower dimensional Lagrangian density takes the form: Putting a (i) and ψ (n) on-shell while gauge fixing the higher-mode Stueckelberg fields g (i) to zero, we find that the effective Lagrangian density to fourth-order in interactions and second-order in derivatives is where we removed the superscripts (0) and (1). The overlap integrals P nm and T nm are defined in Appendix B. The coefficient of the h 2 ψψ quartic interaction can be calculated exactly: 2 , I 11 00 = 1 − 3 2π 2 , and I = I 11 00 − (I 11 0 ) 2 = 1 4 − 3 2π 2 . Finally, transforming back into the canonically transforming variable, we find that the effective Lagrangian density is where e eff = eI 11 0 is the effective electric charge, D µ = ∂ µ − ie eff a µ is the canonical covariant derivative, I = I/(I 11 0 ) 2 , and X = X/(I 11 0 ) 2 . The effective theory is Maxwell, with a standard gauge-fixing term, coupled in the usual way to an electrically charged scalar ϕ with charge e eff = eI 11 0 , out to order e 1 eff in the action. If one only considers this leading behaviour in the effective charge of the theory, its dynamics is physically indistinguishable from that of the usual dimensional reduction 6 case. At e 2 eff order, however, we find covert symmetry breaking identical to the symmetry breaking originating in coupling to the zero-level Stueckelberg field arising in the term (a µ − ∂ µ g) (a µ − ∂ µ g) ϕϕ.
In a usual dimensional reduction, the zero-level lower dimensional theory inherits the corresponding projection of the higher dimensional symmetries linearly, and this is sufficient to fix the form of the lower dimensional theory. This is not so in the present case because of the non-constant transverse wavefunction zero-mode, and its associated Stueckelberg field. We can write new structures that are invariant under the higher dimensional symmetry using this nonlinearly transforming Stueckelberg field, which are however physically distinct from the structure of the linearly realised theory in the lower dimension. Accordingly, the higher dimensional symmetry becomes nonlinearly realised in the lower dimension. By explicitly calculating the effective theory, however, we find linear symmetry breaking only appears in a 'covert' way, starting at a 2 |ϕ| 2 in the action or |ϕ| 6 order in scalar only physical processes.
Conclusion and Outlook
In this paper, we have focused on what we considered to be the simplest case in which covert symmetry breaking reveals itself. This was stimulated by observation of the explicit structure [14] of an effective lower-dimensional theory of gravity with a noncompact transverse space, but localised in the lower dimension thanks to a mass gap in the spectrum of the associated Schrödinger problem.
Clearly, a return to that system needs to be made to carry out a similar investigation to that of this paper. Along the way, an analogous study of pure Yang-Mills theory in d + 1 dimensions with the Dirichlet/Robin boundary conditions considered here can be done [18].
More generally, one also needs to consider what is the best way to approach the evaluation of an effective gravitational theory in a lower dimension when the transverse space is noncompact. The key problem in such cases is the vanishing of the effective Newton constant, as pointed out originally in Ref. [19]. There is, however, one known way to get nontrivial interactions in a number of such cases: restrict attention to pure gravity in the lower dimension, or, in the case of a supersymmetric theory, restrict attention to pure supergravity with unbroken supersymmetry. For example, there are lower-dimensional supersymmetric braneworld constructions where such pure supergravity on the brane worldvolume exists as a consistent reduction from the higher dimensional theory [20][21][22].
For such pure lower-dimensional supergravity solutions, there really is no clearly defined Newton constant -for example, any Ricci-flat metric in the lower dimension will continue to give a solution to the higher dimensional field equations. A related feature of such lower-dimensional systems is that they retain a 'trombone' symmetry of the lower dimensional field equations, as do all pure supergravity theories. A clear meaning to a gravitational coupling constant arises only when one couples to fields outside the lower-dimensional supergravity supermultiplet. An example of such coupling could be to another kind of braneworld supermultiplet -branewaves arising as Goldstone modes from broken symmetries of a background brane solution. In such cases, with an infinite transverse space, the problem of a vanishing Newton constant is likely to recur: the branewave modes may couple directly only to the non-zero-level modes of the higher dimensional theory. The kind of system investigated in this paper and in Ref. [14] with a zero-level transverse wavefunction which has nontrivial dependence on the transverse dimensions can guarantee a nonvanishing interaction coupling constant. One then also needs to consider what the physical implications of the resulting covert style of symmetry breaking might be.
Appendices A Maxwellian Degrees of Freedom and Hamiltonian
Within this appendix our aim is to provide a detailed account of the physical degrees of freedom and the Hamiltonian for the massless sector of the system that arises in Section 2. To do this we begin by using the gauge symmetry of the ϕ i , (2.2.15), to fix ϕ 2 to zero. Within this gauge, the equations of motion arising from (2.2.16) are where we have relabelled a µ as A µ , and ϕ 1 as ϕ.
If we take a Fourier transform of (A.1), and perform the decompositioñ whereà µ is the Fourier transform of A µ and p µ andã µ are assumed to be linearly independent vectors at the momentum-space point p µ , then we obtain the equations which confirms that the system described by (2.2.16) possesses only d − 2 propagating degrees of freedom. As a result of this analysis, we see that the system is equivalent to standard Maxwell theory, once we go on shell.
Another way to look at the dynamics of the zero-level system (2.2.19) including the Lagrange multiplier field Ψ 1 is to consider its Hamiltonian formulation. The inclusion of this field, which pre-selects the Lorenz gauge for a µ , leads to a modified Hamiltonian formulation since there is no longer an unrestricted λ(x) gauge symmetry. This gives rise to a conjugate momentum to a 0 , i.e. π 0 = kΨ 1 , which is not ordinarily present. The canonical action becomes where Here, H t is the usual positive semidefinite Maxwell Hamiltonian density while H v is a separate quantity whose spatial integral Q v = d d−1 x H v is independently conserved in time by virtue of the field equations for the canonical action (A.14). As usual, Noether's theorem relates such a conserved quantity to a global symmetry and here that symmetry is: where ρ is a spacetime-constant parameter. The conserved quantity Q v is of indefinite sign, but this does not imply the presence of ghost degrees of freedom; the conserved energy can be considered to be just E = d d−1 x H t , which is positive semidefinite. It is helpful to consider what happens to Q v in a standard Maxwell theory presentation without π 0 : one finds Q v = 0 using the usual Gauss's law ∂ i π i = ∂ i F 0i = 0 for noninteracting Maxwell theory. The symmetry (A.17) is still there (setting π 0 → 0), but it is then a symmetry with a vanishing charge, somewhat reminiscent of the vanishing-charge symmetries of supersymmetric theories without auxiliary fields.
B Details of the Commuting Square Diagram for Scalar QED
In this appendix, we give details of the equivalence between higher and lower dimensional presentations of the scalar QED dynamics as represented in Figure 1 With these overlap integrals, we can use linear independence to read off the lower-dimensional equations coming from (3.1.4). We have For (3.1.6), we define the overlap integrals Using these, the lower-dimensional complex scalar equations are for n ∈ {1, 2, . . . }.
For (3.1.5), it is much more convenient to rewrite the expansion of A z in terms of the orthonormal basis {ψ α (z)}. Defining the overlap integrals we find the lower-dimensional equations are To convert this equation into equations for h and g (i) , we contract it with operators D αβ b β and D αβ c i;β , using the relations (2.3.5). After some manipulation, we arrive at the following equations of motion:
C Effective Theory in the Original Variables
At the end of Section (3.4) we stated that the system in the original (gauge covariant) higher dimensional variables retains gauge covariance (or invariance at the level of the action) after integrating out all of the (more) massive matter scalars. We described in broad strokes the details of how this occurs, specifically that the action is augmented by new terms at quartic order and the transformation is augmented at quadratic order and together these define an unusual but gauge 8 It is important to note that these equations are internally consistent, as all Bianchi identities are satisfied.
invariant action (or oddly covariant equations of motion). Here we will show how that invariance works at the level of the action for one term, specifically the a 2 φ 2 'unusual coefficient' term. 9 To show the invariance of just this term it is sufficient to only consider only the leading (in fields and derivatives) corrections arising from integrating out the level ℓ > 0 massive matter scalar fields to both the gauge transformation and action. The relevant approximate solutions to the level ℓ > 0 massive matter scalar (n = 2, 3, . . .) equations of motion are Here Φ indicates all corrections arising from recursively putting fields on-shell in their own equations of motion and ∂ µ indicates arbitrary corrections with more world-volume derivatives, and all integrals (I, T , and U ) are as given in Appendix B. The new terms in the Lagrangian arising from putting these fields on-shell are e 2 π 2 n 2 (2a µ ∂ µ φ + ∂ µ a µ φ) I 1n 0 + hφ T n1 − T 1n + 2gφU Only two of these new terms are relevant to the terms in the gauge transformation of the action containing one a and two φ: The relevant terms arising from gauge transforming the above are the terms coming from the transformation of the Stueckelberg field alone: 2e 2 λφ (2a µ ∂ µ φ + ∂ µ a µ φ) I 1n 0 U n1 0 π 2 n 2 + c.c. .
(C.4)
Similarly, we recall from (3.2.3) that the lightest scalar field transforms under gauge transformations into scalar fields at all levels, so when we put the heavy fields on-shell we must also put them on-shell in the lightest field's gauge transformation, The above term quadratic in fields will generate, when substituted into the φ's mass term, terms with one gauge parameter, one gauge field, and two matter scalars. Specifically the correction is δ −π 2 |φ| 2 = . . . − π 2 φ e 2 λ (2a µ ∂ µ φ + ∂ µ a µ φ) I Lastly, we remember that the coefficient of the quartic term is "unusual" because it is not the anticipated square of the cubic term's coefficient. Taking the transformations of these two terms 9 Here again a is the massless vector and φ is the lightest matter scalar.
(C.7)
These are all the terms in the gauge variation of the Lagrangian that are of the '∂λaφφ' variety. If we take all the terms that we've detailed above and integrate by parts we find that they may be written as − 2e 2 a µ ∂ µ λφφ I 11 00 − I 11 In order for the above Fourier basis, each of these integrals is known. 10 The resulting sums are also doable (1 + (−1) 2 ) 2 (n 2 − 1) 4 + 48 π 5 ∞ N =2 (1 + (−1) 2 ) 2 (n 2 − 1) 3 = 0 . (C.10) To summarise, we have, for the effective theory in the original variables, gauge transformed, then collected all terms including one power of the gauge parameter, one power of the gauge field, two powers of the scalar, and one world-volume derivative and have shown that these terms sum to zero. While this only shows the invariance in the action of a single term, it is torturous enough to calculate this. Furthermore, we know that these variables are simply a field redefinition away from the more easily manifestly gauge invariant variables used in Section (3.5), so the final action expressed in either set of variables proves to be invariant. 10 Each is done by repeated integration by parts. | 10,582.6 | 2020-07-23T00:00:00.000 | [
"Physics"
] |
Enhanced tolerance of transgenic potato plants expressing choline oxidase in chloroplasts against water stress
Background Glycinebetaine, whose biosynthesis could be catalyzed by choline oxidase (COD), is an extremely efficient compatible solute for scavenging oxidative stress-inducing molecules and protecting the photosynthetic system in plants. To study the effects of the codA transgene for choline oxidase on the drought resistance and recovery, a transgenic potato cultivar (SC) bearing codA gene and a non-transgenic (NT) control cultivar were raised in pots under moderate and severe drought stress. The experiment was constituted by a two-day-pretreatment with 20% PEG and a four-day-water stress combined with two-day-recovery treatment. Results Under the four-day-water stress, plants were provided with normal water condition, 10% or 20% polyethylene glycol. The results of pretreatment showed an expression of codA gene in transgenic potato and an accumulation of glycine betaine (GB); leaf water potential was higher in SC than in NT. In the stress-recovery-treatment, SC showed stronger antioxidant ability, more efficient photosynthetic system, higher chlorophyll content, lower malondialdehyde content and better recovery from water deficit stress than NT. Conclusion Although this work concentrated on the short-term water stress and recover treatments on transgenic potato plants with the over-expression of CodA gene and its control line. The datas shows that the exogenous codA gene provided potato a stronger drought resistance and recovery ability. Electronic supplementary material The online version of this article (doi:10.1186/1999-3110-54-30) contains supplementary material, which is available to authorized users.
Background Glycinebetaine (GB, N,N,N-trimetrimethylglycine; hereafter betain) is a quaternary ammonium compound that occurs naturally in a wide variety of plants, animals and microorganisms. The accumulation of GB is induced and synthesized in the chloroplasts of higher plants under various abiotic stress, such as high salt, drought and cold (Jagendorf and Takabe 2001;Rontein et al. 2002), and the exogenous GB could enhance the resistance ability to drought (Mahouachi et al. 2012). Glycinebetain affords osmoprotection for plants and protects cell components from harsh conditions by functioning as a molecular chaperone .
Furthermore, it could stabilize the higher-order structure of protein and protect the activities of intracellular protein and metabolic enzymes (Demiral and Turkan 2004). In photosynthetic systems, GB efficiently protects various components of the photosynthetic machinery, such as rubulose-1,5-bisphosphate carboxylase/oxygenase (Rubisco) and the oxygen-evolving photosystemII (PSII) complex from stress ). It preserves the normal cellular turgor pressure, playing an important role in respiration and photosynthesis. An exogenous application of GB improves the growth and survival of a wide variety of plants under various stress conditions (Ashraf and Foolad 2007;Hoque et al. 2007;Park et al. 2006;Chen and Murata 2008). The fact that many agronomically important crops, such as rice and potato, are betain-deficient has inevitably led to proposals that it might be possible to increase stress tolerance by genetic manipulation that would allow nonaccumulators or low-level accumulators to accumulate betain at protective levels (McCue and Hanson 1990).
Glycinebetaine has three main synthetic pathways in different organisms (Sakamoto and Murata 2000). Therefore, different methods could be used to introduce a GB synthetic system into non-GB-accumulating plants to improve their stress tolerance. One of the methods was the introduction of the BADH (betaine aldehyde dehydrogenase) gene, which has been frequently introduced into a variety plants including tomato (Jia et al. 2002), tobacco (Yang et al. 2005;Ci et al. 2007;Zhou et al. 2008), wheat (Guo et al. 2000) and potato ) for enhanced tolerance of salt, drought or extreme temperatures. The other was COD (choline oxidase), which itself does not exist in higher plants at all. Previous reports showed that the COD gene was also introduced into Arabidopsis ( Sulpice et al. 2003;Waditee et al. 2005), tobacco (Huang et al. 2000), rice (Konstantinova et al. 2002;Mohanty et al. 2002;Kathuria et al. 2009), tomato (Goel et al. 2011;PARK et al. 2007;Park et al. 2004;Li et al. 2011), maize (Quan et al. 2004), potato (Ahmad et al. 2008) and Eucalyptus globulus (Matsunaga et al. 2012) to improve their stress tolerance.
Potato (Solanum tuberosum) is one of the leading crops throughout the world. It is cultivated in more than one hundred countries and regions, with total yield and cultivated area ranked fourth among crops, only after wheat, rice and maize (Jackson 1999). With the rapid economic development in recent years, potato is increasingly becoming an important cash crop and the potato industry has seen strong development recently. Hence, an excellent potato variety with resistance or tolerance of abiotic stress is required for the steady development of the potato industry (Jiang et al. 2008).
Different types of promoters have been used in plant transformation research; these promoters can be divided into three classes: constitutive promoter, organ-specific promoter and inducible promoter (Potenza et al. 2004). The most representative constitutive promoter is CaMV 35S (Odell et al. 1985), which is one of the most widely used promoters. However, the constitutive promoter might result in the over-expression of exogenous gene and break the regular growth process of plants (Scheid et al. 2002). The use of organ-specific promoters, such as potato tuber-specific patatin promoter, can make up for this weakness. SWPA2 (Oxidative stress-inducible peroxidase promoter) is inducible promoter cloned by Kim in 2003 from sweet potato. This was confirmed by transforming the glucuronidase gene with SWPA2 and CaMV35S promoters respectively into tobacco. Under water stress, the gene expression of SWPA2-GUS was 30-fold that of CaMV35S-GUS, suggesting that SWPA2 has a very strong stress-inducible ability (Kim et al. 2003).
In this experiment, we aimed to determine the influence of an introduced codA gene on transgenic potato under water stress and rewatering treatment, and provide a basis for research on new potato varieties and glycinebetain.
Materials
We used potted transgenic potato plants (SC) expressing codA gene (from A. globiformis) in chloroplasts under the control of an oxidative stress-inducible SWPA2 promoter (Kim et al. 2003) and non-transgenic (NT) control plants (Solanum tuberosum L. cv. Superior) (Ahmad et al. 2008). The vector structure with codA gene is shown in (Figure 1). This experiment was divided into pretreatment and stress-rehydration-treatment.
Pretreatment
Five four-week-old plants respectively from SC and NT were transferred to buckets filled with daily-aerated Hoagland nutrient solution. One week later, all plants were subjected to drought stress simulated with PEG6000 (polyethylene glycol 20%) treatment for 48 h. The fourth and fifth leaves were sampled from each plant at 0 h and 48 h after stress to determine the GB content, leaf water potential and expression of codA gene.
Stress-rehydration-treatment
A total of 90 pots (45 for SC and 45 for NT) were used for this treatment. The two potato types were allocated to three drought treatments of no stress -normal water condition, PEG 10% (moderate stress) and PEG 20% (severe stress); each treatment has 15 replications arranged into a completely randomized design. Nutrient solution (with or without PEG) was changed at 9 a.m. each day just before the determination of photosynthetic activity. The stress continued for four days and then in the following two days all plants were provided with normal water condition. Five leaves (the fourth or the fifth leaf from each plant) from five plants were chosen randomly from each treatment everyday at 10 a.m. All the samples were immediately frozen in liquid nitrogen and stored at −80°C until required for analysis.
Growth conditions
Each pot contained 1 kg disinfected dry vermiculite which was then watered to 70% of maximum field moisture capacity with Hoagland nutrient solution, with only one seedling cultivated in each pot. Pots were placed in a growth chamber under a 16 h photoperiod with light intensity of 300 μmol photons m -2 s -1 , 60% (w/v) relative humidity at day 25°C / night 20°C. Soil moisture was controlled by weighing each pot during the growth period.
PCR analysis
Total genomic DNA was extracted from transgenic potato and control plant with DNA kits bought from Beijing TaiKe Biotechnology Limited. First-strand cDNA synthesis was performed in a 20 ul reaction mixture containing 1 ul of total plasmid DNA. The PCR was conducted with 0.5 ul first-strand cDNA with the primers of 5′-GCT GCT GGA ATC GGG ATA-3′(forward) and 5′-TGG GCT TAT CGC GGA AGT-3′(reverse). The amplification reactions occurred at 94°C for 5 min, followed by 30 cycles (94°C 30 s, 62°C 30 s and 72°C 1 min) and finally an extension cycle of 10 min at 72°C. The PCR products were separated on 1% agarose gel, stained with ethidium bromide, and visualized under UV. The expected size of the PCR fragment was 450 bp. The UV transilluminator was obtained from Thermo Company (USA), dNTP from Roche Company (Sweden) and Taq polymerase from Fermentas Company (USA).
Glycinebetain content
Glycinebetain was measured via UV-VIS spectrophotometry (Huang et al. 2009). In brief, GB reacts with Reinecke's salt under acidic condition to produce sediment of Reinecke's salt, which was then dissolved in 700 ml/L acetone until the color turned pink. Acetone was used as blank control to produce an absorbance standard curve under 525 nm and the standard curve was used to determine the GB contents in samples.
Photosynthetic system and leaf water potential
From 9 a.m.-10 a.m. each day, the fifth mature and well-exposed leaves from top in five randomly tagged plants in each treatment group were sampled for measurement with a portable photosynthetic meter, LI-6400; leaf water potential was determined according to the method of Turner using a pressure chamber (Turner 1988).
Chlorophyll content
Mature and well-exposed leaves from plant (0.5 g fresh weight) were homogenized in a mortar and pestle using 10 ml of chilled 80% acetone. The homogenate was centrifuged at 10,000 rpm at 4°C for 10 min. The absorbance of the supernatant was measured at 646, 663 and 750 nm, respectively, and chlorophyll content was calculated as per the method of Arnon et al. (Arnon et al. 1974).
Extraction and assays of the activities of reactive oxygenscavenging enzymes
The determination of the activities of catalase (CAT), superoxide dismutase (SOD) and peroxidase (POD) followed those reported in Lee et al. (Lee and Lee 2002). Leaf samples (0.5 g) were homogenized in 8 ml of 50 mM potassium phosphate buffer (pH 7.0) that contained 1 mM EDTA, 1 mM ascorbic acid (ASA), 1 mM dithiothreitol (DTT), 1 mM L-glutathione (GSH) and 5 mM MgCl2. After sufficient grinding with little quartz sand, the homogenate was centrifuged at 20,000 rmp for 15 min at 4°C. The resultant supernatant was deep-freezed under −80°C and used for assays of enzymatic activity. Total protein concentration was determined according to the Bradford method (Bradford 1976) using the Bio-Rad protein assay reagent.
The activity of superoxide dismutase (SOD) was measured according to McCord and Fridovich (1969) with slight modification, by immediately monitoring the absorbance at 560 nm due to the reduction of cytochrome c. The reaction mixture contained 50 mM phosphate buffer (pH 7.8), 0.1 mM Nitrotetrazolium Blue chloride (NBT), 0.1 mM EDTA and 13.37 mM methionine.
POD activities were determined specifically at 420 nm. The reaction mixture contained 0.4 ml of 100 mM potassium phosphate buffer (pH 6), 0.16 ml of 147 mM H 2 O 2 , 0.32 ml of 5% Pyrogallol and 2.1 ml of DW. The reaction was initiated by adding 20 ul plant extract and after 10 min. The POD activity was determined by following the consumption of H 2 O 2 (extinction coefficient 39.4 mM -1 cm -1 ) at 420 nm for 20 s.
The activity of catalase (CAT) was assayed by monitoring decreases in absorbance at 240 nm due to the decomposition of H 2 O 2 . The reaction mixture contained 670 ul potassium phosphate buffer (pH 7.0), 330 ul H 2 O 2 and 30 ul of the extract. The CAT activity was determined by following the consumption of H 2 O 2 (extinction coefficient 39.4 mM -1 cm -1 ) at 240 nm for 1 min.
Lipid peroxidation
Lipid peroxidation was determined as the amount of malondialdehyde (MDA, e = 155 mM ± 1 cm ± 1), a product of lipid peroxidation. 1 ml of saved supernatant (which has also been used to determinate antioxidant enzyme) was mixed with 3 ml reaction buffer including 5% Trichloroacetic acid (TCA) and 0.5% Thiobarbituric acid (TBA) was heated in 100°C water for 15 min, then cooed immediately and centrifuged. The absorbance was monitored at 450, 532 and 600 nm.
Confirmation of codA DNA in transgenic potato
As shown in (Figure 2), codA does not exist in NT, while the exogenous codA gene introduced into transgenic SC plants could be observed. The total length of codA was 450 bp as we expected.
Effect of dehydration pretreatment on GB accumulation and leaf water potential in transgenic and non-transgenic potato
To determine whether the expression of codA induced the synthesis of GB in the transgenic plants, the GB was analyzed after 0 h and 48 h of 20% PEG stress ( Figure 3A). Also, the leaf water potential (LWP) ( Figure 3B) of transgenic and non-transgenic potato was determined. At 0 h after stress, no GB accumulated in NT but a slight accumulation was noted in SC. Even at 48 h of stress, no GB was observed in NT, but the amount of GB increased significantly in SC. Leaf water potential of both potato types was significantly reduced from 0 h to 48 h after stress but the LWP of SC remained higher than that of NT.
Effect of introduced codA gene on enzyme system in transgenic and non-transgenic potato following stressrehydration treatment The changes in antioxidant enzyme system (SOD, CAT and POD) of the two potato types under stress-rehydrationtreatment are shown in (Table 1, Table 2 and Table 3). Activities of SOD and CAT varied similarly and without significant changes under normal water condition. However, under stress conditions, the SOD and CAT activities in the two potato types increased from DAY 0 to DAY 2; although at DAY 3 the values declined, they were still higher than those on DAY 0 (as an exception, the CAT activities in NT under 20% PEG stress started dropping from DAY 2). From DAY 4 to DAY 5 (rehydration period), SOD and CAT activities in both SC and NT rose again. In general, the SOD and CAT activities of the two potato types were higher under 10% PEG stress than under 20% PEG stress; under the same stress condition, SC showed higher antioxidant enzyme activities than NT.
Variations in POD activities differed entirely from those for SOD and CAT during the drought stress and rehydration period. Under the four PEG stress conditions, from DAY 0 to DAY 3 POD activities increased progressively before dropping from DAY 4 to DAY 5 in the rehydration period. The POD activities in both potato types were significantly higher under severe than moderate stress; under the same water condition, NT showed a higher POD activity than SC.
Effect of exogenous codA gene on MDA and chlorophyll contents in transgenic and non-transgenic potato under stress-rehydration treatment
Data for chlorophyll and MDA contents in transgenic and non-transgenic potato are shown in (Tables 4-5).
Under normal water condition, both chlorophyll and MDA contents had no significant variation and were within the regular range. Under drought stress, the MDA contents in both potato types increased from DAY 0 to DAY 3 (the period of PEG stress) but reduced in DAY 4 and DAY 5 (the period of hydration).
However, the MDA content in NT under 20% PEG stress started dropping earlier at DAY 3 instead of DAY 4. Chlorophyll contents in both plant types decreased from DAY 0 to DAY 3 and leveled off from DAY 4 to DAY 5. In general, the MDA contents in both SC and NT were significantly higher under 20% PEG stress than under 10% PEG stress; under the same stress condition, NT exhibited a higher MDA content than SC. On the contrary, under the four stress conditions, both plant types much higher chlorophyll under 10% PEG stress than under 20% PEG stress; for each stress level, SC showed a significantly higher chlorophyll content than NT.
Effect of exogenous codA gene on photosynthetic activities in transgenic and non-transgenic potato under stress-rehydration treatment
Data for changes in photosynthetic parameters, including photosynthetic rate ( Figure 4A), stomatal conductance ( Figure 4B), intercellular carbon dioxide concentration ( Figure 4C) and transpiration rate ( Figure 4D) of the two potato types due to the introduced gene are showed. There was no apparent variation among photosynthetic indexes of SC and NT under normal water condition. However, all photosynthetic parameters gradually and significantly decreased in both potato types during DAY 0 to DAY 3 under PEG stress, but increased markedly during the period of rehydration from DAY 4 to DAY 5. Throughout the observation period, SC showed higher photosynthetic parameters than NT and 10% PEG was less stressful to plants than 20% PEG.
Discussion
Glycinebetain is an extremely efficient compatible solute and its presence was strongly associated with enhanced tolerance of plants in stress environments (Rhodes and Hanson 1993). Since some major crops do not produce GB by themselves, studies have been carried out to determine if exogenous application of GB could improve the growth and survival of a wide variety of plants under various stress conditions (Allard et al. 1998), the biosynthesis of GB, and its mechanisms to enhance the tolerate abiotic stress have been deeply reported Muata, 2008, 2011). The codA gene in this experiment, which was obtained from Arthrobacter globiformis, directly converts choline into GB and H 2 O 2 (Deshnium et al. 1995), and H 2 O 2 , is known to be not only a ROS but also plays many crucial roles as a signalling molecule to induces tolerance to stress (Xiong et al. 2002;Jiang et al. 2012). Kathuria et al. (2009) reported after the transformation of codA into rice, the produced H 2 O 2 could simultaneously activate the response of transgenic plants to stresses. Therefore, GB might not be the only factor that is responsible for the tolerance improvement but the increase of H 2 O 2 also makes contributions.
From our results, water stress simulated with 20% PEG under the control of SWPA2 promoter, the transgenic potato produced the codA gene and accumulated GB while the control plants had neither the codA gene nor GB accumulation since potato is a non-GB-accumulating plant. This demonstrated that the gene was successfully transformed and expressed in transgenic potato. When plants were subjected to water stress, the cells consequently lost water. Since one of the key functions of GB is osmoregulation, it could help to maintain the osmotic equilibrium of cells. During pretreatment, the overwhelming advantage of leaf water potential in SC relative to NT was based on the accumulation of GB.
When plants were subjected to water stress, reactive oxygen species (ROS) accumulated; at the same time the antioxidant system, especially SOD, the most important antioxidant enzyme was induced to scavenge the newly produced ROS. To determine the influence of accumulated GB on the antioxidant enzyme system and photosynthetic system of transgenic potato during water stress Table 3 The activities of POD enzyme in non-transgenic (NT) and transgenic (SC) potatoes during and after stress and recovery stage, we measured the daily activities of SOD, POD and CAT, as well as photosynthetic parameters of two plant types.
The changes in SOD and CAT activities in the two plant types followed a similar trend under water stress. The activities of both enzymes increased across stress treatments from DAY 0 to DAY 2. At DAY 3, the activities slightly reduced compared to DAY 2 but were still higher than DAY 0 because the amount of accumulated ROS exceeded the scavenging ability of antioxidant enzymes, hence, plant cells might been hurt. Ultimately from DAY 4 to DAY 5 (rehydration period) when plants were provided with normal water condition, the injured plant cells regained their structure, function and the enzyme activities rose again. Also, the activities of SOD and CAT increased more significantly in SC than NT during rehydration stage. Therefore, after stress, SC was more able to eliminate ROS and protect plants.
The POD activities of two plant types changed in a quite different way compared with SOD and CAT during the whole treatment. They gradually increased from DAY 0 to DAY 3 under water stress and reduced from DAY 4 to DAY 5 in the recovery stage. This result can be explained by the dual functional role of POD in plants. On the one hand, it can express a protective effect as a member of the scavenging enzyme system that removes H 2 O 2 at the earlier stage of stress or aging; on the other hand, it can be also express injurious effects at the later stage of stress or aging, prompting the generation of reactive oxygen species, degradation of chlorophyll and peroxidation of membrane lipids which are products as well as indices of aging or stress. The main role of POD is generally considered to be the later one (Zhang and Kirkham 1994). Thus, due to the water stress, especially under the severe drought treatment (20% PEG), POD activities increased markedly in both potato types. During the recovery period, the antioxidant system and other physiological reactions regained their abilities resulting to removal of ROS and reduction of POD activities. In contrast to SOD and CAT, the POD activity was higher in NT than in SC, possibly because the promoter used in this experiment was derived from moderating clips of POD enzyme (SWPA2) in sweet potato. Exogenous promoters can cause transcriptional gene silencing of endogenous unlinked homologous promoters (Mette et al. 1999). Therefore, the expression of endogenous POD in SC was effected by the introduced CodA expressing construct which were regulated by the SWPA2 promoter. The expression of the endogenous promoter of POD enzyme might be suppressed by the inserted exogenous promoter and the activity of POD was inhibited to some level in SC. The SOD and CAT activities in SC were markedly superior to NT during the whole experiment. Hence, it could be suggested that the GB produced by the introduced codA gene in potato could protect its antioxidant system and enhance its drought resistance. The protection of photosynthetic system against photodamage Because GB could internally stabilize the photosynthetic apparatus, improve the practical efficiency of photosynthetic system II and cell osmotic equilibrium (Papageorgiou and Murata 1995), the transgenic potato with extra GB exhibited much higher photosynthetic parameters than NT. For example, at DAY 1 of water stress with 10% PEG, the photosynthetic rate (Pn) and stomatal conductance (Gs) in SC dropped only slightly compared with control treatment. In addition, the intercellular carbon dioxide concentration (Ci) and transpiration rate (Tr) were higher in SC than in NT indicating that these parameters were directly related to Pn and Gs. During the recovery stage, as GB could protect the repair machinery of PS (Ohnishi and Murata, 2006;Murata et al. 2007), the photosynthetic parameters of SC recovered more efficiently than NT after stress. Moreover, the higher leaf water potential in SC (which has been proved in pretreatment) sustained a higher Tr. The MDA content was higher in NT than in SC while chlorophyll content showed an opposite trend in this experiment. This demonstrated that the GB produced by introduced codA gene in SC could prevent membrane lipid peroxidation and degradation of chlorophyll caused by stress.
Conclusion
In conclusion, the exposure of transgenic potato to fourday-stress (with 10% and 20% PEG) and two-day-recovery periods favoured the accumulation of GB and H 2 O 2 by codA transgene; consequently the transgenic potato showed stronger antioxidant enzyme ability, more efficient photosynthetic system, higher leaf water potential and chlorophyll content and lower MDA content. It also showed better recovery from stress than the non-transgenic potato. The exogenous codA gene provided potato a stronger drought resistance and recovery ability. | 5,523 | 2013-09-03T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
PD-1 blockade delays tumor growth by inhibiting an intrinsic SHP2/Ras/MAPK signalling in thyroid cancer cells
Background The programmed cell death-1 (PD-1) receptor and its ligands PD-L1 and PD-L2 are immune checkpoints that suppress anti-cancer immunity. Typically, cancer cells express the PD-Ls that bind PD-1 on immune cells, inhibiting their activity. Recently, PD-1 expression has also been found in cancer cells. Here, we analysed expression and functions of PD-1 in thyroid cancer (TC). Methods PD-1 expression was evaluated by immunohistochemistry on human TC samples and by RT-PCR, western blot and FACS on TC cell lines. Proliferation and migration of TC cells in culture were assessed by BrdU incorporation and Boyden chamber assays. Biochemical studies were performed by western blot, immunoprecipitation, pull-down and phosphatase assays. TC cell tumorigenicity was assessed by xenotransplants in nude mice. Results Human TC specimens (47%), but not normal thyroids, displayed PD-1 expression in epithelial cells, which significantly correlated with tumour stage and lymph-node metastasis. PD-1 was also constitutively expressed on TC cell lines. PD-1 overexpression/stimulation promoted TC cell proliferation and migration. Accordingly, PD-1 genetic/pharmacologic inhibition caused the opposite effects. Mechanistically, PD-1 recruited the SHP2 phosphatase to the plasma membrane and potentiated its phosphatase activity. SHP2 enhanced Ras activation by dephosphorylating its inhibitory tyrosine 32, thus triggering the MAPK cascade. SHP2, BRAF and MEK were necessary for PD-1-mediated biologic functions. PD-1 inhibition decreased, while PD-1 enforced expression facilitated, TC cell xenograft growth in mice by affecting tumour cell proliferation. Conclusions PD-1 circuit blockade in TC, besides restoring anti-cancer immunity, could also directly impair TC cell growth by inhibiting the SHP2/Ras/MAPK signalling pathway.
(NSCLC) and in T-cell lymphomas, PD-1 behaved as a tumour suppressor [7,8]. These data indicate that PD-1 could exert context-related tumour-intrinsic functions other than the suppression of immune response, and suggest the need of wider studies on ICI effects on the entire tumour context. Thyroid carcinoma (TC) is the most frequent endocrine malignancy. Follicular cell-derived TC includes different histotypes ranging from well differentiated (WDTC) to poorly differentiated (PDTC) and undifferentiated/anaplastic (ATC) carcinomas. WDTCs include papillary histotype (PTC), representing the majority of these tumours, and follicular histotype (FTC). WDTCs show an indolent behaviour and are mainly cured by surgery and 131 I radioiodine (RAI) therapy; only a small percentage of them exhibits recurrence, metastasis and resistance to RAI over time. By contrast, aggressive forms of TC (PDTC and ATC) represent a clinic challenge displaying a remarkable chemo-and radio-resistant phenotype from the beginning [9,10]. Interestingly, aggressive forms of TC exhibit increased immune checkpoint expression and ine cient immune in ltrate [9,[11][12][13][14], features that are being evaluated for the treatment of the disease [9,14,15].
Here, we analysed the PD-1/PD-Ls circuit in TC showing that: i) TC cell lines and TC human samples express, besides PD-Ls, as already demonstrated [16][17][18], also PD-1 at epithelial level, whose levels correlated with tumour aggressiveness; ii) intrinsic PD-1 sustains proliferation and migration of TC cells through a SHP2/Ras/MAPK signalling cascade; iii) PD-1 overexpression promotes, while PD-1 blockade inhibits, ATC xenograft growth by affecting cancer cell proliferation.
Thus, TCs express an intrinsic pro-tumorigenic PD-1 circuit. In TC context, the oncogenic role of PD-1 is dependent on the activation of the Ras/MAPK cascade. PD-1 blockade may represent a rational therapeutic choice in aggressive forms of TC for both immune response reconstitution and direct antitumour effects.
Cell culture and transfection
Human thyroid cancer cell lines BcPAP, TPC-1, 8505c, CAL62, SW1736, FRO, BHT101, HTH7 and OCUT1 were obtained and maintained as previously described [20]. The normal thyroid cells H-6040, isolated from normal human thyroid tissue and cultured in Human Epithelial Cell Medium with the addition of Insulin-Transferrin-Selenium, EGF, Hydrocortisone, L-Glutamine, antibiotic-antimycotic solution, Epithelial Cell supplement, and FBS were purchased from Cell Biologics (Chicago, IL, USA). H-6040 cells were used at passages between 3 and 6.
Transient transfections of TC cells were performed using polyethylenimine according to manufacturer's instructions (Merck, Darmstadt, Germany). Cells were harvested 48 hrs after transfection. Electroporation was used (Neon® Transfection System for Electroporation, Life Technologies, Carlsbad, CA, USA) to obtain stably transfected cells [21].
For RNA interference, we used SMART pools of siRNA from Dharmacon (Lafayette, CO, USA) targeting PD-1 or SHP2. Transfection was carried out by using 100 nM of SMARTpool and 6 μl of DharmaFECT (Dharmacon) for 48 h [22].
Immunohistochemistry
Thyroid carcinomas were selected from the Pathology Unit of the University of Perugia upon informed consent; the protocol for the study was approved by the institutional committee of University of Perugia. Thyroid tissues were formalin xed and para n embedded (FFPE). Sections of 4 µm were obtained.
Finally, the ready to use Bond™ Polymer Re ne Detection System allowed the detection of antigenantibody reaction [11]. We used a cut-off of 5% to determine the positivity of immunohistochemistry: cases showing immunostaining in more than 5% of neoplastic cells were considered positive, regardless of the intensity of the staining.
S-phase entry S-phase entry was evaluated by Bromodeoxyuridine (BrdU) incorporation. Cells were serum-deprived and treated with stimuli for 24 h. BrdU was added at a concentration of 10 μM for the last 1 h. BrdU-positive cells were revealed with Texas Red conjugated secondary Abs (Jackson Laboratories, West Grove, PA, USA). Fluorescence was detected by FACS analysis [23].
Cell lysates were subjected to immunoprecipitation with different antibodies or subjected to pull-down binding assays with puri ed recombinant proteins immobilized on agarose beads. The glutathione-Stransferases (GST) fusion proteins were expressed in Escherichia coli and puri ed with glutathioneconjugated agarose beads (Merck) by standard procedures. The protein complexes were eluted and resolved by sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE). Immunoblotting with speci c antibodies and enhanced chemiluminescence (ECL; Thermo Fisher) were employed for immune-detection of proteins in complexes [25].
Cell fractionation experiments were performed using the Subcellular Protein Fractionation Kit for Cultured Cells according to manufacturer's instructions (Thermo Fisher). Membrane fraction's protein content was normalized by using anti-transferrin receptor antibody (Invitrogen).
Immuno uorescence
Cells, grown on coverslips, were washed with phosphate-buffered saline (PBS), xed with 4% paraformaldehyde (PFA) and quenched with 50 mM NH 4 Cl. Then, cells were permeabilized with 0.2% Triton X-100 for 5 min and blocked for 30 min in PBS containing 5% FBS and 0.5% bovine serum albumin (BSA). Primary antibodies were detected with Alexa Fluor546-conjugated secondary antibodies (Abcam, Cambrige, UK). Images were acquired using a laser scanning confocal microscope (LSM 510; Carl Zeiss MicroImaging, Inc, Oberkochen, Germania.) equipped with a planapo 63X oil-immersion (NA 1.4) objective lens by using the appropriate laser lines and setting the confocal pinhole to one Airy unit. Z-slices from the top to the bottom of the cell by using the same setting (laser power, detector gain) were collected as previously described [26]. SHP2 activity assay SHP2 phosphatase activity was determined using the human/mouse/rat active DuoSet IC kit (R&D Systems). Brie y, cellular SHP2 bound to anti-SHP2 antibody conjugated to agarose beads was exposed to synthetic phosphopeptide substrate, which is only dephosphorylated by active SHP2 t. The amount of free phosphate generated in the supernatant was determined, as absorbance at 620 nm, by a sensitive dye-binding assay using malachite green and molybdic acid and represents a direct measurement of SHP2 activity in the cellular system [27].
Statistical analysis
The results are expressed as the mean ± SEM of at least 3 experiments. Values from groups were compared using the paired Student t test or Duncan test. The association between PD-1 expression and clinic-pathologic parameters in immunohistochemistry experiments was conducted using χ 2 . P value < 0.05 was considered statistically signi cant.
PD-1 receptor and its ligands are expressed in thyroid carcinoma cells
We evaluated the expression levels of PD-1, PD-L1 and PD-L2 in a panel of human TC cell lines derived from PTC (BcPAP, TPC-1) or ATC (8505c, CAL62, SW1736, FRO, BHT101, HTH7, OCUT1) compared to a primary human thyroid cell culture (H-6040). Cyto uorimetric analysis demonstrated that all the cell lines expressed PD-1 on the plasma membrane, though to a lesser extent than PD-Ls, and that PD-1 protein levels were higher in cancer compared to normal thyroid cells (Fig. 1A). PD-1, PD-L1 and PD-L2 mRNA levels were comparable between normal and cancerous thyroid cells, suggesting that post-translational mechanisms could be responsible for the protein increase observed in cancer cells (Suppl. Fig. 1).
Immunohistochemical (IHC) staining of whole sections from 34 PTC surgical samples with anti-PD-1 antibodies showed that PD-1 is expressed in TC cells (Fig. 1B), but not in normal thyroid epithelial cells (not shown). Figure 1B shows a representative PTC case with negative PD-1 staining (PTC1), intense PD-1 staining in the tumour immune in ltrate (PTC2), and PD-1 immunoreactivity, cytosolic and/or localized at the plasma membrane, in thyroid cancer epithelial cells (PTC3).
PD-1 expression was detectable in epithelial cancerous cells of 47% of tumour samples (Table 1). By analysing clinic-pathologic features of the PTC samples, we found that tumour stage and lymph-nodal metastasis signi cantly correlated with PD-1 staining (Table 1) in our casistic.
These data indicate that TC cells can express PD-1 together with its ligands [16][17][18], and that PD-1 expression correlates with tumour malignancy. Since PD-1 expression levels in human TC samples correlated with lymph-nodal metastasis, we asked whether PD-1 could also stimulate the motility of TC cells. To this aim, we performed migration assays on 8505c cells stably overexpressing or not PD-1 [pCMV3 PD-1 cl13 and cl16 compared to pCMV3 empty vector-transfected cells (Suppl. Fig. 3A)] or on parental 8505c cells treated or not with sPD-L1 (1 mg/ml) in the presence or absence of Nivolumab (10 mg/ml) (Fig. 2C). PD-1 overexpressing TC cells showed increased migratory potential compared to control cells. Consistently, sPD-L1 induced, and Nivolumab inhibited, both basal and sPD-L1-induced migration (Fig. 2C).
These data indicate that PD-1 intrinsic circuit sustains TC cell proliferation and migration.
Since the BRAF/MEK/MAPK signalling is potentiated by PD-1 in TC cells, and Ras GTPase is the main upstream activator of this cascade [28], we asked whether PD-1 could activate Ras. To this end, we used a pull-down assay with the GST-RAF1-Ras binding domain (RBD), which speci cally binds the GTPloaded active form of Ras. 8505c and TPC-1 cells were transiently transfected with empty vector (pFLAG) or PD-1 (pFLAG PD-1) in combination with pCEFL H-Ras AU5 or the relative empty vector (pCEFL). PD-1 enforced expression increased Ras activation, as assessed by Ras pull-down, in comparison to control (Fig. 3D), suggesting that PD-1 potentiates Ras activation in TC cells.
PD-1 recruits and activates the SHP2 phosphatase in thyroid carcinoma cells
In immune cells, PD-1 signalling requires the tyrosine phosphatase SHP2 (PTPN11) [27]. Upon phosphorylation of tyrosine residues in its cytosolic domain, PD-1 binds to the SH2 domains of SHP2 that, in turn, dephosphorylates signalling components of the immune receptors, thus down-regulating the immune responses [29]. In cancer cells, SHP2 acts as a signalling molecule downstream receptor tyrosine kinases (RTKs), displaying oncogenic activity [30]. In particular, SHP2 can contribute to Ras activation either by recruiting the GRB2/SOS complex to the plasma membrane [31] or through its phosphatase activity on Ras inhibitory tyrosine residues [31,32].
We rst asked whether PD-1 could physically interact with SHP2 in TC cells. Reciprocal coimmunoprecipitation experiments showed that endogenous and exogenously expressed PD-1 bind SHP2 in 8505c and TPC-1 cells (Fig. 4A). Moreover, pull-down assays with N-or C-terminal SH2 domain of SHP2 demonstrated that SHP2 can bind PD-1 mainly through SHP2 C-terminal SH2 domain (Fig. 4B). In support of these observations, we found that both endogenous and exogenous PD-1 are tyrosine phosphorylated in TC cells (Suppl. Fig. 4A), condition necessary to allow the SH2 domains of SHP2 to bind PD-1 [31].
Cell fractionation of 8505c cells transiently or stably transfected with PD-1 was used to demonstrate that PD-1 binding to SHP2 enforced the membrane localization of SHP2. Subcellular fractions of membranes (M) or cytosol (C) were obtained from PD-1 overexpressing and from control cells (pFLAG-PD-1 vs pFLAG or pCMV3 PD-1 cl 16 vs pCMV3). Enrichment of SHP2 levels in the membrane fractions was observed in PD-1 overexpressing cells compared to empty-vector transfected cells. Normalizations of each extract were obtained by using antibodies to transferrin receptor for membrane fraction and a-tubulin for cytosolic extract (Fig. 4C). In agreement with these observations, immuno uorescence (IF) assay of PD-1 overexpressing TC cells showed a signi cant increase of SHP2 staining at the plasma membrane in cells overexpressing PD-1 compared to controls (Fig. 4D and Suppl. Fig. 4B).
SHP2 dephosphorylates and activates Ras in TC cells
We then searched for the molecular mechanism of Ras activation mediated by the PD-1/SHP2 complex.
We rst asked whether PD-1 could enhance GRB2 recruitment by SHP2. To this aim, we used pull-down assays with GST-SH2-GRB2 fusion proteins and co-immunoprecipitation assays showing no increased GRB2 binding to SHP2 in PD-1 transfected TC cells compared to controls (Suppl. Fig. 4D). In accordance with these observations, PD-1 enforced expression did not signi cantly increase SHP2 tyrosine phosphorylation levels (Suppl. Fig. 4A), on which GRB2 binding to SHP2 is dependent, nor changed substantially GRB2 compartmentalization as demonstrated in cell fractionation experiments (Fig. 4C).
Since the GRB2/SOS complex is not involved in PD-1-mediated Ras activation, we asked whether Ras could be activated by SHP2 through the dephosphorylation of its inhibitory tyrosine residues [27,33]. We evaluated the phosphatase activity of SHP2 and, in parallel, the levels of Ras tyrosine phosphorylation in cells overexpressing or not PD-1. We used a speci c SHP2 phosphorylated substrate in the presence of the Malachite Green tracer, a colorimetric method for the detection of free inorganic phosphate [27]. We observed that SHP2 phosphatase activity was signi cantly increased in PD-1-versus empty-vectortransfected TC cells (Fig. 4E). Similar results were obtained in PD-1 stably transfected cells (not shown).
Consistently with the increased phosphatase activity of SHP2, Ras total phosphorylation levels, in the presence of PD-1, were signi cantly reduced in TC cells transfected with pCEFL H-Ras AU5 (Suppl. Fig. 4E). To assess whether Ras dephosphorylation occurs in its inhibitory residues 32 and/or 64 [27], we used (pan)Ras immunoprecipitation followed by immunoblotting with anti-phospho Y32 (Ras) or Y64 (Ras) antibodies. These experiments demonstrated that PD-1 enforced expression in 8505c cells reduced the Ras phosphorylation levels in the inhibitory tyrosine residues 32 in pCEFL Ras AU5-transfected cells compared to controls (Fig. 4F). Similar results were obtained in TPC-1 cells (not shown). No differences in phosphorylation levels of inhibitory residues 64 were observed (not shown).
Taken together, these data indicate that, in TC cells, PD-1 binds SHP2, which in turn dephosphorylates Ras in its inhibitory tyrosine, thus leading to the activation of the MAPK signalling cascade.
PD-1-induced biologic activities in thyroid cancer cells require the SHP2/BRAF/MEK signalling proteins To investigate the role of SHP2 in PD-1 functional activity, we treated TC cells, overexpressing or not PD-1, with siRNA targeting SHP2 (siSHP2 -100 nM) or with a SHP2 allosteric inhibitor that blocks its phosphatase activity (SHP099 -350 nM) [34]. As shown in Figure 5A, siSHP2 was able to signi cantly reduce SHP2 protein levels compared to scrambled siRNAs (siCTR). By BrdU incorporation assays, we demonstrated that siSHP2 signi cantly decreased DNA synthesis (Fig. 5B) in PD-1-, and to a lesser extent in empty vector-transfected, 8505c cells. Consistently, SHP099 inhibitor signi cantly reduced PD-1induced DNA synthesis in 8505c cells (Fig. 5C).
To investigate the role of the downstream signalling cascade in PD-1 dependent biologic TC responses, we conducted BrdU incorporation assays in TC cells overexpressing or not PD-1, in the presence or in the absence of Vemurafenib (Vemu -10 mM) [35], a BRAF inhibitor, or Selumetinib (Selu -10 mM) [36], a MEK inhibitor. As shown in Figure 5D, both drugs were able to signi cantly revert PD-1-induced DNA synthesis in 8505c cells.
Similar experiments were performed to assess the role of the signalling cascade in PD-1-mediated TC cell migration. Figure 5E shows that SHP099 and Vemurafenib, and to a lesser extent Selumetinib, were able to inhibit the migration of 8505c cells induced by sPD-L1. Similar results were obtained in TC cells transfected with PD-1 (not shown).
These data demonstrate that PD-1-induced cell proliferation and motility of TC cells are dependent on the SHP2/BRAF/MEK pathway.
Intrinsic PD-1 signalling enhances xenograft growth of TC cells in immunocompromised mice
To verify whether PD-1 intrinsic signalling and biologic activity could affect tumorigenicity of TC cells, we xenotransplanted 8505c pCMV3 PD-1 (two clones) and control 8505c pCMV3 (a mass population) cells in athymic mice. 8505c pCMV3 PD-1 xenografts displayed increased tumour growth rate that was statistically signi cant at 4 weeks after injection, in comparison to empty vector transfected cells (Fig. 6A). End-stage tumours were excised and analysed for cell proliferation (Ki-67), apoptotic rate (cleavedcaspase 3) and vessel density (CD31) by immunohistochemistry. 8505c pCMV3 PD-1 and 8505c pCMV3 xenografts exhibited statistically signi cant differences in cell proliferation rate, but not in apoptotic rate or vessel density (Fig. 6B and Suppl. Fig. 5A).
To verify whether the inhibition of PD-1 by Nivolumab could affect xenograft growth of parental 8505c cells, mice were xenotransplanted, randomized in two homogeneous groups, and administered intraperitoneally (i.p.) with Nivolumab or control IgG 4 (30 mg/kg) twice a week. 5 weeks after xenotransplantation, Nivolumab-treated tumours showed a signi cant decrease in growth rate in comparison with the IgG 4 -treated group (Fig. 6C). Consistently, Nivolumab signi cantly reduced TC xenografts' proliferation without affecting apoptotic rate or vessel density ( Fig. 6D and Suppl. Fig. 5B).
Despite these experiments were carried out in immunocompromised mice, we could not exclude that Nivolumab anti-tumour activity could be ascribed to its ability to affect innate immunity that is present and functional in athymic mice. Thus, we analysed the density and activation of immune cells in ltrating 8505c xenografts treated with Nivolumab or with IgG 4 by cyto uorimetric analysis. We found that Nivolumab treatment did not change the percentage of CD45+ leucocytes in ltrating xenografts in comparison to IgG 4 controls, at least at 5 weeks of treatment. Moreover, the density and the expression of polarization/activation markers of tumour-associated macrophages (TAM), of Ly6C+ and Ly6G+ immature myeloid cells, of mature and immature dendritic cells and of regulatory or activated NK, and NKT cells, were comparable between Nivolumab-and IgG 4 -treated 8505c xenografts (Suppl. Table 1).
These data indicate that, in our model system, PD-1 blockade by Nivolumab inhibits TC cell xenograft growth by affecting tumour cell rather than immune cell compartment.
Discussion
Several reports point to a promising role of immunotherapy in the treatment of advanced forms of TCs [15,37]. TCGA analysis of TC provided a classi cation of PTC, in spite of their low mutational burden, as "in amed" tumours and ATC as hot tumours [38]. Interestingly, a pro ling of TC con rmed that ATC and PTC are strongly in ltrated by macrophages and CD8 + T cells, but that these cells displayed a functionally exhausted appearance [11]. In TC, high PD-L1 levels signi cantly correlated with immune in ltrate, increased tumour size and multifocality [17,18]. Furthermore, the presence of PD-1 + T lymphocyte in ltrating TC is associated with lymph-nodal metastasis and recurrence [13]. Altogether, these data suggest that immune checkpoint inhibitors (ICI) might represent a promising tool for the treatment of these carcinomas.
Our report, for the rst time, investigated the expression of the PD-1 receptor in epithelial thyroid cancer cells, demonstrating that a signi cant percentage of human TC samples displayed PD-1 expression on these cells, although at lower levels compared to the expression found on immune cells in ltrating the tumour. Consistently with the evidence obtained for PD-L1 [17,39], our data indicate that PD-1 expression levels correlated with tumour stage and lymph-nodal metastasis in TC. Accordingly, we demonstrated that PD-1 activity could induce proliferation and motility of TC cells in culture. This suggests that the PD-1 intrinsic pathway might have a role in TC cell aggressiveness and invasive ability.
The expression of PD-1 on cancer cells, rather than on immune cells, has been observed recently in melanoma and hepatocellular carcinoma (HCC) [5,6,40]. In these cancer types, intrinsic PD-1 activity sustains tumour growth through an mTOR/S6K1 signalling [5,6,40]. In TC cells, similarly to melanoma and HCC, PD-1 intrinsic signalling sustains cancer cell proliferation, but at variance from these neoplasias, this biologic activity is mediated by the activation of the Ras/MAPK pathway. Interestingly, mutations causing the activation of the Ras/MAPK signalling pathway are found in > 70% of PTC (e.g., RET/PTC rearrangements and point mutations of the BRAF and Ras genes) and regulate transcription of key genes involved in TC cell proliferation [41]. Thus, PD-1 expression could provide a selective advantage to some TC by enhancing the activation of MAPK pathway, thus promoting proliferation and migratory behaviour of cancer cells. Interestingly, besides PD-1, also the immune-checkpoint Cytotoxic T lymphocyte-associated antigen 4 (CTLA-4), classically expressed on leukocytes, has been found to be expressed and functional on cancer cells [42,43].
Our data also highlighted the key role of the SHP2 tyrosine-phosphatase in PD-1-mediated activities in TC cells. Interestingly, SHP2 is recruited by PD-1 in T lymphocytes, and inhibits immune receptor signalling by dephosphorylating several downstream substrates [29,44]. In cancer cells, SHP2 has been described to exhibit oncogenic properties [30,31]. SHP2 functions as an adapter that binds activated receptor tyrosine kinases (RTKs) and recruits the GRB2/SOS complex on the plasma membrane, enhancing SOSmediated GTP loading on Ras and activating the Ras/MAPK cascade [30,31]. SHP2 can also directly enhance Ras activity by dephosphorylating speci c inhibitory tyrosine residues on Ras [27,33,45]. In our model system, we found that PD-1 exploits this last mechanism. However, we cannot exclude that other PD-1 functions may contribute to Ras/MAPK activation. Whatever the case, we demonstrated that, in TC cells, SHP2 is a critical factor in PD-1 downstream signalling, as SHP2 inhibition hampered PD-1mediated biological activities.
The majority of TC are driven by mutations that activate the Ras/MAPK pathway. Inhibitors targeting different proteins in this signalling cascade have been developed, but their e cacy has been limited by adaptive feedback reactivation of the pathway [46]. Interestingly, SHP2 has been identi ed as one of the main mediators of adaptive resistance to inhibitors of the Ras/MAPK pathway in many tumors, including TC. In 8505C cells, carrying a BRAF(V600E) mutation, targeting both BRAF and SHP2 with Vemurafenib and SHP099 led to a reversion of adaptive resistance to either inhibitor alone [47,48].
Furthermore, in TC samples, increased SHP2 expression was detected compared to normal thyroids, and this correlated with poor tumour differentiation, TNM stage and lymph-nodal metastasis [49]. These evidences suggest that SHP2 may represent a potential target for TC therapy both alone and in combination with PD-1 and/or Ras/MAPK targeting.
The evaluation of PD-1 expression in cancer cell might be important to identify tumours and/or patients that will be likely to respond to ICI administration by taking advantage of both drug effects on immune compartment and on cancer cell proliferation. In few case reports or in "basket clinical trials" in which ICI
Declarations
Ethics approval and consent to participate The experimental protocol for animal studies was reviewed and approved by the Ministero Italiano della Salute and the institutional committee of University of Naples Federico II. Thyroid carcinomas were selected from the Pathology Unit of the University of Perugia upon informed consent, the protocol for the study was approved by the institutional committee of University of Perugia. | 5,392.8 | 2020-11-04T00:00:00.000 | [
"Medicine",
"Biology"
] |
Performance of the Finite Element and Finite Volume Methods for Large Eddy Simulation in Homogeneous Isotropic Turbulence
Department of Mathematics, Shahjalal University of Science & Technology, Sylhet-3114, Bangladesh Institute of Industrial Science, the University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 1538505, Japan Graduate School of Engineering, Hokkaido University, Kita 13 jou Nishi 8 chome, Kita-ku, Sapporo-shi, Hokkaido 060-8628, Japan Department of Mechanical and Aerospace Engineering, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, Tokyo 152-8552, Japan
Introduction
During the past decades, LES has been demonstrated to be an accurate and sophisticated predictive method for flows of engineering interest.The position of LES approach is intermediate between DNS and RANS (Reynolds-Averaged Navier-Stokes) techniques.Although recent development of supercomputers enable to carry out the DNS [1-5], which is considered as the exact approach of turbulence simulations but the grid dependence is very high (proportional to Re 9/4 ) and calculation is fairly time consuming, so that the DNS is not appropriate to the practical use.On the other hand, LES is less expensive and can simulate very complex flow fields in turbulence.Unlike the full-scale turbulence modeling of RANS technique, in LES method, large-scale motion is exactly calculated and the effects of subgrid-scale (SGS) motion is modeled.Although LES being superior to RANS, it still has some theoretical and practical drawbacks; and still today, people are putting their efforts to eliminate this drawback for LES.
However, in recent days the important issues for LES are numerical method and subgrid-scale (SGS) modeling.It is known that the numerical methods which are widely used for LES are either spectral or the conventional finite difference method with structure grids [6][7]; but for the case of complicated flow the use of structure grids method is often unsuitable.Since Finite Element Method (FEM) is based on unstructured grids, this method is very useful for engineering applications to the complicated flow fields.
The objective of our present study is to develop "Front Flow" next generation fluid simulator based on LES using FEM and FVM in order to apply to the engineering and practical problems.However, before applying there, it is necessary to examine the effectiveness and performance of our numerical solver through the benchmark problem which other researchers have examined.
In this study, LES is performed in homogeneous isotropic turbulence using FEM and FVM formulation and the results are compared with DNS result which is calculated by a spectral method.The results with SSM and DSM in both FEM and FVM are also compared.Our interest is in whether any effects of the subgrid-scale model should appropriately be damped out.In both cases, we also discussed about vortical structures in the computed flow fields by LES comparing with those in the instantaneous DNS data by visualization of flows.
Numerical scheme and SGS model
In this study Front Flow numerical solvers based on FEM and FVM are used for the computations.The detailed mathematical formulation and developing procedure of these numerical solvers are too long.Since the purpose of this study is to show the performance and effectiveness of these two codes in a benchmark problem so the mathematical formulations are not given here.However, a brief description of these Front Flow numerical codes is given as follows:
Front Flow/FEM code
The Front Flow/FEM numerical code used in this study is general-purpose fluid simulation code which calculates incompressible unsteady flows in arbitrary shaped geometry that involves moving (but not deforming) boundaries.It is particularly designed for computing unsteady flows on turbomachinery and simulating sound pressure spectra that result from unsteady fluid motion.In order to obtain an accurate sound spectra in general, it is of up most importance to simulate the source, i.e., the fluid fluctuations accurately in terms of their spatial and frequency spectrum.
This code is based on an explicit streamline-upwind finite element method with second order accuracy both in time and space.In this numerical scheme, the spatial discretization is performed by an explicit hexahedral FEM and the coordinate system is the three dimensional Cartesian.
The pressure algorithm is based on the ABMAC (Arbitrary Boundary Marker and Cell) method proposed by Viecelli [8], where both the velocity components and static pressure are simultaneously corrected until the maximum divergence of the flow field decreases to less than a prescribed critical value.The subgrid-scale model for LES supported in this code are the SSM [9] incorporating the wall-damping function and the DSM proposed by Germano et al. [10].The details of the numerical methods and developing procedures of this code have been given in the previous studies by Uddin et al. [11] and Kato et al. [12].
Front Flow/FVM code
In this code, the spatial discretization is performed by a finite volume method and the coordinate system is the three-dimensional Cartesian.The usable elements in this code include hexahedrons, triangular prisms, square pyramids, and tetrahedrons, however, in this study we use hexahedrons elements only.In this numerical solver the SIMPLE (Semi-Implicit Method for Pressure-Linked Equations) scheme is adopted for computations of compressible flows and a formula based on the low Mach number approximation can be used, however, using this code the incompressible flow computation is also possible.Like FEM code, in this solver the adopted SGS models include the SSM and DSM for SGS modeling.The detailed numerical methods and developing procedures of this code have been given in the previous study by Unemura et al. [13].
The computation
The computational domain of the fine mesh was selected to a periodic box (2π×2π×2π) and the computation was performed using 64 3 grids.The present FEM and FVM codes assume the solution at t = nΔt is known, where Δt is the time step increment, t is the nondimensional time and n is the time step, and the solution at the next time step, t = (n+1)Δt, can be calculated using the residual terms described in the finite element and finite volume formulation [11][12][13].Since we are interested to compare our present results with the result of spectral DNS, the initial flow field for LES is calculated with same condition and procedure that is done for spectral DNS by Tanahashi et al. [2][3].The calculation presented here was done with nondimensional Δt = 0.00316 and a ν = 1/Re = 8.26×10 -3 .In the case of SSM, the Smagorinsky coefficient, C s = 0.2 is used in both FEM and FVM simulations [14].
DNS database
The reference DNS is performed at 64 3 resolutions by using a spectral code developed by Tanahashi et al. [2][3].At the end of calculation, the Taylor microscale Reynolds number, Re λ , based on root mean square of velocity fluctuations (u rms ) and Taylor microscale (λ) of the DNS data is Re λ = 30.5 (t =3.792), and the maximum possible Reynolds number (Re) of the flow is 121.1.
Numerical Results
In this section we compare the numerical results in LES with the results of DNS to understand the decay and statistical behavior of homogeneous isotropic turbulence.We will also discuss about vortical structures in LES data comparing with DNS data.
Decay of turbulence
The comparisons of the three-dimensional energy spectra in DNS and LES data at the end of calculation (t = 3.792) for both FEM and FVM with SSM and DSM models are presented in Fig. 1.The energy spectrum is calculated by the definition given as: In Fig. 1, we can observe that the DNS spectrum shows the power decay close to k -5/3 .The energy spectrum in LES calculation using FEM with DSM model shows almost similar decay as in DNS calculation at the whole wave number range (in Fig. 1 LES but the DSM result suggests that the decay of turbulence in LES follows the k -5/3 power law, and the numerical accuracy is quite good.On the other hand, although the decay of turbulence using FVM collapse with DNS data in the low wave number range but it underestimates DNS data in the high wave number range for both SSM and DSM models (in Fig. 1b).Moreover, the SSM results show little faster decay than DSM results.The difference between SSM and DSM results may happen due to Smagorinsky constant or model itself.However, in both FEM and FVM codes, it reveals that DSM model works better than SSM model comparing with DNS data (in Fig. 1c, d).It is also revealed that among these two codes, the performance of FEM code is better than FVM code in these simulations.
Turbulence statistics
In this sub-section we discuss about the turbulent statistics in LES comparing with DNS results.Fig. 2 shows the decay of total resolved energy, E k , where 2 2 It reveals that FEM results with DSM model coincide with the DNS through out the entire analysis (Fig. 2(b)) and SSM result is dissipative in the early stage but collapse with DNS data at the end of the calculation (Fig. 2(a)).On the other hand, FVM results do not collapse with DNS results through out the entire analysis for both SSM and DSM.
The decay of resolved enstrophy is presented in Fig. 3, where enstrophy is defined as: In this case, the FVM results with SSM and DSM models differ significantly from the DNS through out the entire analysis and show too much dissipation in the initial stage of the calculation.The FEM results with SSM model are also dissipative in the initial stage and it shows relatively higher dissipation from DNS throughout the entire analysis (Fig. 3a).However, FEM results with DSM are in good agreement with DNS (Fig. 3b).However, the performances of these two codes with DSM is better than that of SSM.The decay of root mean square of velocity fluctuations (u rms ) is presented in Fig. 4, where, The decay of u rms for FEM with DSM fully collapses with DNS data in the whole analysis.Although, initially FEM results with SSM is little faster but it is in good agreement with DNS after time t=2.5.On the other hand, FVM results with SSM and DSM do not collapse with DNS through out the entire analysis.Hence it is revealed that the FEM-LES result gives good agreement with DNS result in terms of both spatial spectra and decay of turbulence statistics [11].Skewness and flatness factors of velocity are important statistical properties to rep r resent characteristics of turbulence.The production of the rate of dissipation of turbulent kinetic energy or, equivalently, the production of enstrophy is directly related to skewness in isotropic turbulence [15].Skewness and Flatness of a random variable are presented in Fig. 5, and Fig. 6, respectively.Skewness and Flatness of random variables, respectively, are defined as: The agreement of skewness (in Fig. 5) of LES results with DNS data in both FEM and FVM simulations for both SSM and DSM cases is good until t =2.0 and it overestimate DNS data hereafter.In this case, particularly it is revealed that at the end of calculation the profile of skewness for FVM code with DSM goes very close to the DNS results.However, skewness for all cases is almost zero at t =0 and the LES results for both FEM and FVM solvers either with SSM or DSM agree well with DNS data.On the other hand, flatness (in Fig. 6) of LES results for both FEM and FVM simulations collapse with DNS data well throughout the entire analysis.At time t =0, flatness of DNS and LES for all cases is almost 3.0 and shows nearly a constant value at the end of the calculation.The behavior of flatness suggests that turbulence velocity at the end of the calculation does not include the effects of initial condition and reaches to fully developed state.
Visualization of flows
In this section we shall discuss about the vortical structures in LES comparing with DNS result by visualization of flows.There are several methods for identification of vortical structures and their visualization in turbulence with significant differences [16] and most of them show threshold dependence.In this study, by direct use of 'local flow pattern method' we discuss about vortical structures in LES flow fields.From the distributions of second and third invariant of the velocity gradient tensor one can define 'local flow pattern method' which does not depend on the thresholds of the variables.Details of this method can be seen in the previous studies by Tanahashi et al. [2][3].
The second invariant of the velocity gradient tensor is defined as: ( ) where, 1 2 are the symmetric and asymmetric part of the velocity gradient tensor, Fig. 7 shows the contour surfaces of the positive second invariant given in Eqn.(6) of the velocity gradient tensor in DNS and LES data for FEM and FVM simulations as well as for SSM and DSM cases at the end of analysis (t = 3.792).In this figure the visualized region is the whole calculation domain and the view point is same for all cases.The level of the isosurface is selected to be Q =10 for all cases.These figures show that lots of tubelike vortical structures are randomly oriented in DNS data as well as in LES data for all cases.In this visualization we considered Q =10 only to show the vortical structures in DNS and LES.However, if we increase or decrease the value of Q, we can also show distinct tube-like structures exists in DNS and LES, of course, it will be little bit different from the present visualized structures.Our present study reveals that in actual LES we can have the coherent tube-like structures somewhat similar to the structures in DNS and these structures are quite unique and distinct as we can see in Fig. 7.For FEM code, appearance of the vortical structures using DSM case seems higher than that of SSM case and close to DNS data.On the other hand, for FVM code, appearance of these vortical structures for SSM and DSM cases are similar but differ from the DNS, even it decrease from the SSM case in FEM code.This observation again suggests that the accuracy of LES calculation using FEM with DSM is better than that of FVM. this study it is revealed that the coherent structures in FEM-LES are similar to the structures in the DNS data which suggest that FEM-LES results may have suitable role to tackle turbulence intermittency like DNS.
Conclusions
Larg eddy simulations in homogeneous isotropic turbulence have been done using FEM e and FVM formulation in order to show the performance of these numerical methods as well to assess its spectral accuracy.The results in both cases are compared with those from DNS based on a spectral method.It is shown that the results of FEM simulation are in better agreement with DNS than the results of FVM simulation.Regarding SGS modeling it revealed that the performance of DSM is better than SSM in both FEM and FVM formulation.In FEM formulation DSM gives good agreement with DNS results in terms of both spatial spectra and decay of turbulence statistics.
Visualization of second invariant of velocity gradient tenso d by LES for both FEM and FVM simulations reveal the existence of distinct, coherent and tube-like vortical structures similar to those found in instantaneous flow field computed by the DNS.In this visualization it is also revealed that the appearance of the vortical structures in FVM simulation is lower than that of FEM simulation for both DSM and SSM modeling.Moreover, the appearance of these vortical structures using DSM in the resolved turbulence by FEM seems higher than that of SSM, and very close to DNS data.
A
T and Technology (MEXT), Japan under an IT research program "Frontier Simulation Software for Industrial Science".
Fig. 1 .Fig. 2 .
Fig. 1.Comparisons of three-dimensional energy spectra of velocity fluctuations in DNS and LES data.(a) SSM and DSM results using FEM; (b) SSM and DSM results using FVM; (c) SSM results using FEM and FVM, (d) DSM results using FEM and FVM.The legend FEM and FVM stand for finite element method and finite volume method, respectively.
Development of skewness of a velocity component.(a) SSM and (b) DSM results using FEM F and FVM, respectively. of flatness of a velocity component.(a) SSM and (b) DSM results using FEM and FVM, respectively. | 3,712.8 | 2010-04-26T00:00:00.000 | [
"Engineering",
"Physics"
] |
Lunasin Improves the LDL-C Lowering Efficacy of Simvastatin via Inhibiting PCSK9 Expression in Hepatocytes and ApoE−/− Mice
Statins are the most popular therapeutic drugs to lower plasma low density lipoprotein cholesterol (LDL-C) synthesis by competitively inhibiting hydroxyl-3-methyl-glutaryl-CoA (HMG-CoA) reductase and up-regulating the hepatic low density lipoprotein receptor (LDLR). However, the concomitant up-regulation of proprotein convertase subtilisin/kexin type 9 (PCSK9) by statin attenuates its cholesterol lowering efficacy. Lunasin, a soybean derived 43-amino acid polypeptide, has been previously shown to functionally enhance LDL uptake via down-regulating PCSK9 and up-regulating LDLR in hepatocytes and mice. Herein, we investigated the LDL-C lowering efficacy of simvastatin combined with lunasin. In HepG2 cells, after co-treatment with 1 μM simvastatin and 5 μM lunasin for 24 h, the up-regulation of PCSK9 by simvastatin was effectively counteracted by lunasin via down-regulating hepatocyte nuclear factor 1α (HNF-1α), and the functional LDL uptake was additively enhanced. Additionally, after combined therapy with simvastatin and lunasin for four weeks, ApoE−/− mice had significantly lower PCSK9 and higher LDLR levels in hepatic tissues and remarkably reduced plasma concentrations of total cholesterol (TC) and LDL-C, as compared to each monotherapy. Conclusively, lunasin significantly improved the LDL-C lowering efficacy of simvastatin by counteracting simvastatin induced elevation of PCSK9 in hepatocytes and ApoE−/− mice. Simvastatin combined with lunasin could be a novel regimen for hypercholesterolemia treatment.
Introduction
Elevated circulation cholesterol level, especially low density lipoprotein cholesterol (LDL-C), contributes to the main risk factors of cardiovascular disease (CVD). The National Lipid Association (NLA) recommended atherogenic cholesterols including non-high density lipoprotein cholesterol and LDL-C as the primary targets of cholesterol lowering therapies [1].
Statins, a class of specific hydroxy-3-methyl-glutaryl-CoA (HMG-CoA) reductase inhibitors, have been advised as the first-choice hypocholesterolemic agents by the American College of Cardiology (ACC)/American Heart Association (AHA) [2]. In principle, statins can suppress the synthesis of endogenous cholesterol by competitively inhibiting HMG-CoA reductase and accelerate clearance of circulation LDL-C by up-regulating hepatic low density lipoprotein receptor (LDLR) [3][4][5]. However, statins may strongly up-regulate proprotein convertase subtilisin/kexin type 9 (PCSK9) by inducing the expression of hepatocyte nuclear factor 1α (HNF-1α), the dominating transcription factor of PCSK9 to cause the resistance to the LDL-cholesterol lowering effect of statins [6,7]. The underlying mechanism is that the up-regulated PCSK9 reduces the LDLR level via binding to it and transporting it to the lysosome for degradation [8], which eventually attenuates the clinical performance of statins to a large extent. Accordingly, various statin combination therapy approaches have been considered to enhance the LDL-C lowering effect and reduce the dosage of statins, as well as decrease the risk of adverse effects in the clinic [9].
Combination therapy appears to be a solution to improve the LDL-C lowering efficacy of statins and circumvent issues of statin resistance and intolerance. Thus far, it has been reported that bile acid sequestrants (BAS) in combination with statin therapy provide additive reductions in LDL-C compared with statin monotherapy [10,11]. The cholesterol absorption inhibitor ezetimibe, when combined with a statin, lowers LDL-C more than statin alone [12]. Niacin plus statin offsets the increase in PCSK9 levels noted with statin therapy [13]. Simvastatin together with either carvacrol or berberine improves its lipid lowering efficacy [14,15].
Lunasin, a 43-amino acid polypeptide with a molecular weight of~5 kDa, which was initially identified from soybean [16], has been previously proven to possess various pharmacological activities against cancer [17], inflammation [18], and CVD [19]. Interestingly, we previously revealed that lunasin can functionally enhance LDL uptake in hepatocytes via both inhibiting PCSK9 expression and enhancing LDLR level, thereby remarkably reducing total cholesterol (TC) and LDL-C in blood as compared to vehicle control [20]. Thus, in this study, we investigated whether simvastatin combined with lunasin could improve the cholesterol lowering efficacy of simvastatin.
Lunasin Suppresses Simvastatin Induced Elevation of PCSK9 Levels via Down-Regulating HNF-1α in HepG2 Cells
Statin has been known to lower plasma LDL-C synthesis by competitively inhibiting HMG-CoA reductase and up-regulating the hepatic LDLR. However, the concomitant up-regulation of PCSK9 by statin promotes the degradation of LDLR and thereby attenuates its cholesterol lowering efficacy [21][22][23]. Thus, we examined whether the up-regulation of PCSK9 by simvastatin was inhibited by lunasin in HepG2 cells. As shown in Figure 1A,B, 1 µM simvastatin treatment significantly increased PCSK9 expression at the mRNA and protein levels, while 5 µM lunasin treatment remarkably inhibited PCSK9 expression at the mRNA and protein levels in HepG2 cells as compared to vehicle control. The combination treatment of simvastatin with lunasin reduced PCSK9 expression at the mRNA and protein levels as compared to simvastatin treatment alone. Thus, it was implied that lunasin significantly suppressed the simvastatin induced elevation of the PCSK9 level in HepG2 cells.
Further, the expression level of HNF-1α, a dominating regulator of PCSK9, was analyzed in HepG2 cells; as shown in Figure 1C,D, the HNF-1α expression was stimulated by simvastatin at the mRNA ( Figure 1C) and protein ( Figure 1D) levels. However, as compared to simvastatin treatment alone, combination treatment of lunasin with simvastatin effectively reduced the HNF-1α expression level at the mRNA and protein levels. We further investigated whether the down-regulation of PCSK9 by lunasin was mediated by HNF-1α. HepG2 cells were pre-treated with HNF-1α siRNA before the treatment of lunasin. Importantly, as shown in Figure 1E, F, knock-down of HNF-1α by siHNF-1α effectively abolished the up-regulation of HNF-1α or PCSK9 induced by simvastatin treatment; a similar tendency was also observed by simvastatin combined with lunasin. Taken together, it was demonstrated that lunasin counteracted simvastatin induced elevation of PCSK9 expression at least partially via down-regulating HNF-1α in HepG2 cells. The mRNA (A) and protein (B) levels of intracellular precursor PCSK9 (PCSK9-P) and mature PCSK9 (PCSK9-M), as well as the mRNA (C) and protein (D) level of HNF-1α were determined by qRT-PCR and Western blot using β-actin as an internal control, respectively. After transient transfection with siRNA for 4 h, EA.hy 926 cells were maintained in fresh medium for 48 h and treated with 1 μM lunasin for an additional 24 h. Then, the levels of HNF-1α (E) and PCSK9 (F) protein expression were analyzed by Western blot analyses, respectively. * p < 0.05, ** p < 0.01, *** p < 0.001 vs. the control group; # p < 0.05, ## p < 0.01, ### p < 0.001 vs. the simvastatin group (n = 3, means ± SEM). Effects of simvastatin combined with lunasin treatment on PCSK9 and HNF-1α expressions at the mRNA and protein levels in HepG2 cells. HepG2 cells were treated with simvastatin and/or lunasin for 24 h. The mRNA (A) and protein (B) levels of intracellular precursor PCSK9 (PCSK9-P) and mature PCSK9 (PCSK9-M), as well as the mRNA (C) and protein (D) level of HNF-1α were determined by qRT-PCR and Western blot using β-actin as an internal control, respectively. After transient transfection with siRNA for 4 h, EA.hy 926 cells were maintained in fresh medium for 48 h and treated with 1 µM lunasin for an additional 24 h. Then, the levels of HNF-1α (E) and PCSK9 (F) protein expression were analyzed by Western blot analyses, respectively. * p < 0.05, ** p < 0.01, *** p < 0.001 vs. the control group; # p < 0.05, ## p < 0.01, ### p < 0.001 vs. the simvastatin group (n = 3, means ± SEM).
Simvastatin Combined with Lunasin Synergistically Increases LDLR Level and Functionally Enhances LDL Uptake in HepG2 Cells
To detect the effect of simvastatin combined with lunasin treatment on the LDLR level, HepG2 cells were treated with 1 µM simvastatin and/or 5 µM lunasin for 24 h immediately after a one hour depletion of serum with opti-minimum essential media (Opti-MEM) medium. Then, the LDLR mRNA and protein levels were determined by quantitative real-time PCR (qRT-PCR) and Western blot. It was shown that treatment with either simvastatin or lunasin alone significantly increased the LDLR mRNA and protein levels. Moreover, lunasin combined with simvastatin treatment additively increased the LDLR level as compared to either lunasin or simvastatin alone (Figure 2A,B). Beyond that, functional analysis indicated that lunasin plus simvastatin treatment exhibited additive enhancement in LDL uptake in HepG2 cells ( Figure 2C). To detect the effect of simvastatin combined with lunasin treatment on the LDLR level, HepG2 cells were treated with 1 μM simvastatin and/or 5 μM lunasin for 24 h immediately after a one hour depletion of serum with opti-minimum essential media (Opti-MEM) medium. Then, the LDLR mRNA and protein levels were determined by quantitative real-time PCR (qRT-PCR) and Western blot. It was shown that treatment with either simvastatin or lunasin alone significantly increased the LDLR mRNA and protein levels. Moreover, lunasin combined with simvastatin treatment additively increased the LDLR level as compared to either lunasin or simvastatin alone (Figure 2A,B). Beyond that, functional analysis indicated that lunasin plus simvastatin treatment exhibited additive enhancement in LDL uptake in HepG2 cells ( Figure 2C). Effects of simvastatin in combination with lunasin treatment on the LDLR and LDL uptake levels in HepG2 cells. HepG2 cells were treated with simvastatin and/or lunasin for 24 h. The mRNA (A) and protein (B) levels of LDLR were analyzed by qRT-PCR and Western blot using β-actin as an internal control, respectively. * p < 0.05, ** p < 0.01 vs. the control group; # p < 0.05, ### p < 0.001 vs. the simvastatin group. (C) LDL uptake was assessed in HepG2 cells after treatment with simvastatin and/or lunasin for 24 h on a fluorescence plate reader. ∆∆∆ p < 0.001 vs. the negativecontrol group; # p < 0.05 vs. the simvastatin group; *** p < 0.001 vs. the 20 µg/mL Dil-LDL group (n = 3, means ± SEM).
Lunasin Reduces LDLR Degradation by Counteracting Simvastatin-Induced Up-Regulation of PCSK9 in ApoE −/− Mice
ApoE −/− mice fed a high fat diet (HFD) were administrated with simvastatin and/or lunasin on a daily basis. After four weeks of administration, we measured PCSK9 and LDLR levels in liver tissues of ApoE −/− mice. As shown in Figure 3A,B, hepatic PCSK9 expression was dramatically up-regulated by simvastatin alone; however, it was significantly suppressed at both the mRNA and protein levels in the group treated by simvastatin in combination with lunasin. Besides, immunohistochemistry staining indicated that PCSK9 secreted in the liver of ApoE −/− mice was apparently reduced in the lunasin added simvastatin group ( Figure 3C,D). Furthermore, qRT-PCR and Western blot analysis showed that simvastatin stimulated up-regulation of hepatic HNF-1α was effectively counteracted by lunasin ( Figure 3A,B). ApoE −/− mice fed a high fat diet (HFD) were administrated with simvastatin and/or lunasin on a daily basis. After four weeks of administration, we measured PCSK9 and LDLR levels in liver tissues of ApoE −/− mice. As shown in Figure 3A,B, hepatic PCSK9 expression was dramatically upregulated by simvastatin alone; however, it was significantly suppressed at both the mRNA and protein levels in the group treated by simvastatin in combination with lunasin. Besides, immunohistochemistry staining indicated that PCSK9 secreted in the liver of ApoE −/− mice was apparently reduced in the lunasin added simvastatin group ( Figure 3C,D). Furthermore, qRT-PCR and Western blot analysis showed that simvastatin stimulated up-regulation of hepatic HNF-1α was effectively counteracted by lunasin ( Figure 3A,B). Additionally, it was shown that hepatic LDLR mRNA and protein levels were elevated by administration with either simvastatin or lunasin alone as compared to the model control group (ApoE −/− mice fed with HFD and i.p. administrated with vehicle); however, they were remarkably up-regulated to a greater extent when treated with simvastatin combined with lunasin ( Figure 4A,B). The data indicated that simvastatin combined with lunasin could enhance LDLR expression level more effectively than simvastatin monotherapy via suppressing simvastatin induced elevation of the PCSK9 level in ApoE −/− mice fed with HFD. Additionally, it was shown that hepatic LDLR mRNA and protein levels were elevated by administration with either simvastatin or lunasin alone as compared to the model control group (ApoE −/− mice fed with HFD and i.p. administrated with vehicle); however, they were remarkably up-regulated to a greater extent when treated with simvastatin combined with lunasin ( Figure 4A,B). The data indicated that simvastatin combined with lunasin could enhance LDLR expression level more effectively than simvastatin monotherapy via suppressing simvastatin induced elevation of the PCSK9 level in ApoE −/− mice fed with HFD.
Simvastatin Combined with Lunasin Improves Its Serum Cholesterol Lowering Efficacy in ApoE −/− Mice
Given that the synergistic effects of simvastatin plus lunasin on elevating the LDLR level and functionally enhancing LDL uptake were observed in HepG2 cells, we investigated the in vivo antihyperlipidemia activity of simvastatin combined with lunasin in ApoE −/− mice. After four weeks of administration, the serum cholesterol concentrations were analyzed in ApoE −/− mice fed an HFD, and it was found that simvastatin monotherapy failed at lowering LDL-C and TC concentrations relative to the model group; however, lunasin treatment alone effectively reduced the serum LDL-C and TC levels, and lunasin plus simvastatin showed more potent serum cholesterol lowering efficacy in ApoE −/− mice than lunasin monotherapy ( Figure 5A,B).
Simvastatin Combined with Lunasin Improves Its Serum Cholesterol Lowering Efficacy in ApoE −/− Mice
Given that the synergistic effects of simvastatin plus lunasin on elevating the LDLR level and functionally enhancing LDL uptake were observed in HepG2 cells, we investigated the in vivo anti-hyperlipidemia activity of simvastatin combined with lunasin in ApoE −/− mice. After four weeks of administration, the serum cholesterol concentrations were analyzed in ApoE −/− mice fed an HFD, and it was found that simvastatin monotherapy failed at lowering LDL-C and TC concentrations relative to the model group; however, lunasin treatment alone effectively reduced the serum LDL-C and TC levels, and lunasin plus simvastatin showed more potent serum cholesterol lowering efficacy in ApoE −/− mice than lunasin monotherapy ( Figure 5A,B).
Discussion
LDL-C is not only involved in the formation of cardiovascular diseases, but also closely related to other chronic diseases. The levels of circulating LDL are directly associated with atherosclerosis disease severity, especially ox-LDL [24]. Accumulation of ox-LDL in liver resident macrophages contributes to inflammation and disease progression of non-alcoholic steatohepatitis (NASH) [25]. Several studies showed that levels of serum ox-LDL were increased in patients with breast, pancreas, colon, or esophageal cancer, and ox-LDL induced mutagenesis, stimulated proliferation, initiated metastasis, and induced treatment resistance [26,27]. It was observed that patients with impaired renal function exhibited altered lipid metabolism and dyslipidemia [28], which may contribute to the worsening renal function and to the development of cardiovascular complications [29,30]. It was shown that higher levels of total cholesterol and LDL-C were observed in rats with experimental chronic renal failure, which positively correlated with circulating PCSK9 and negatively with the levels of LDLR [31]. Thus, effectively lowering circulating LDL is essential for disease treatment.
Statins have been accepted as the first-line therapy for lowering LDL-C in the management of patients with increased risk for CVD and associated mortality; however, some patients treated with statins appear to be statin resistant because they fail to achieve adequate reduction of LDL-C levels, while others are statin intolerant because they are unable to tolerate statin therapy due to adverse effects, particularly myopathy and increased activity of liver enzymes [10].
We have previously revealed that lunasin treatment effectively inhibited PCSK9 expression and remarkably elevated LDLR level in hepatocytes and mice [20]; thus, we were prompted to explore whether combination therapy with simvastatin and lunasin could enhance the LDL-C lowering efficacy of simvastatin. In liver tissue, PCSK9 synthesis is largely controlled at the gene transcriptional level by HNF1; there are two members of the HNF1 family, HNF1α and HNF1β. Previous research identified a highly conserved HNF1 binding site on the PCSK9 promoter region as another critical regulatory sequence motif of PCSK9 transcription [32]. The importance of HNF1α in PCSK9 expression has been clearly demonstrated in cell culture studies and in mice where adenovirus-mediated overexpression of HNF1α led to increased PCSK9 and reduced liver LDLR protein [33]. As expected, it was found that lunasin effectively counteracted simvastatin induced elevation of PCSK9 by decreasing HNF-1α, thereby increasing the LDLR level and thus functionally enhancing LDL uptake in HepG2 cells.
Herein, we investigated the in vivo anti-hyperlipidemia activity of simvastatin combined with lunasin in ApoE −/− mice fed an HFD. It was observed that simvastatin monotherapy had little effect
Discussion
LDL-C is not only involved in the formation of cardiovascular diseases, but also closely related to other chronic diseases. The levels of circulating LDL are directly associated with atherosclerosis disease severity, especially ox-LDL [24]. Accumulation of ox-LDL in liver resident macrophages contributes to inflammation and disease progression of non-alcoholic steatohepatitis (NASH) [25]. Several studies showed that levels of serum ox-LDL were increased in patients with breast, pancreas, colon, or esophageal cancer, and ox-LDL induced mutagenesis, stimulated proliferation, initiated metastasis, and induced treatment resistance [26,27]. It was observed that patients with impaired renal function exhibited altered lipid metabolism and dyslipidemia [28], which may contribute to the worsening renal function and to the development of cardiovascular complications [29,30]. It was shown that higher levels of total cholesterol and LDL-C were observed in rats with experimental chronic renal failure, which positively correlated with circulating PCSK9 and negatively with the levels of LDLR [31]. Thus, effectively lowering circulating LDL is essential for disease treatment.
Statins have been accepted as the first-line therapy for lowering LDL-C in the management of patients with increased risk for CVD and associated mortality; however, some patients treated with statins appear to be statin resistant because they fail to achieve adequate reduction of LDL-C levels, while others are statin intolerant because they are unable to tolerate statin therapy due to adverse effects, particularly myopathy and increased activity of liver enzymes [10].
We have previously revealed that lunasin treatment effectively inhibited PCSK9 expression and remarkably elevated LDLR level in hepatocytes and mice [20]; thus, we were prompted to explore whether combination therapy with simvastatin and lunasin could enhance the LDL-C lowering efficacy of simvastatin. In liver tissue, PCSK9 synthesis is largely controlled at the gene transcriptional level by HNF1; there are two members of the HNF1 family, HNF1α and HNF1β. Previous research identified a highly conserved HNF1 binding site on the PCSK9 promoter region as another critical regulatory sequence motif of PCSK9 transcription [32]. The importance of HNF1α in PCSK9 expression has been clearly demonstrated in cell culture studies and in mice where adenovirus-mediated overexpression of HNF1α led to increased PCSK9 and reduced liver LDLR protein [33]. As expected, it was found that lunasin effectively counteracted simvastatin induced elevation of PCSK9 by decreasing HNF-1α, thereby increasing the LDLR level and thus functionally enhancing LDL uptake in HepG2 cells.
Herein, we investigated the in vivo anti-hyperlipidemia activity of simvastatin combined with lunasin in ApoE −/− mice fed an HFD. It was observed that simvastatin monotherapy had little effect on lowering serum LDL-C and TC concentrations in ApoE −/− mice. This result was consistent with previous reports that the lipid lowering effect of statins depends on the presence of intact apolipoprotein E, which functions to transport circulating cholesterol into cells, particularly hepatocytes, and acts as an important mediator for hepatic metabolic clearance of circulating cholesterol [34,35]. Likewise, it was confirmed in the clinic that ApoE genotypes were associated with variations in plasma-lipid levels and with responses to statins [36,37], and the polymorphism in ApoE could cause statin resistance [38]. However, in this study, simvastatin combined with lunasin showed more potent serum cholesterol lowering efficacy in ApoE −/− mice than each monotherapy ( Figure 5A, B), indicating that lunasin could effectively reduce simvastatin resistance through counteracting simvastatin induced elevation of PCSK9.
In Vivo Study
The protocol for the in vivo study was approved by the Animal Ethics Committees of China Pharmaceutical University (No. 201601179, 19 October 2016) and conformed to the Guide for the Care and Use of Laboratory Animals published by the National Institutes of Health.
Six-week-old male ApoE −/− transgenic mice on a C57BL/6 background and their WT littermates were purchased from Beijing Vital River Laboratory Animal Technology Co., Ltd. (Beijing, China; No. SCXK 2012-0001) and maintained in an specific pathogen free(SPF)-class animal house at 25 • C, 40-60% humidity, with a 12 h light/dark cycle. Mice were provided with free access to food and water. The HFD containing 1.25% cholesterol (Diet # D12108C, Research Diets, New Brunswick, NJ, USA) was used to induce hypercholesterolemia.
After an acclimatization period of 7 days, WT mice divided randomly into two groups (n = 8) were fed common chow and administrated with or without 0.5 µmol/kg of lunasin in an application volume of 0.1 mL/10 g body weight, while ApoE −/− mice divided randomly into four groups (n = 8) were fed with HFD and administrated with 10 mg/kg simvastatin and/or 0.5 µmol/kg lunasin for 4 weeks, respectively. Lunasin was administrated by intraperitoneal injection, and simvastatin was given by oral gavage on a daily basis. Each animal was used only in one experiment in order to exclude the influence of other tests. At the end of the administration period, all mice were fasted for 8 h before blood sample collection and then euthanized for tissue harvest. All the experimental procedures were approved by the Animal Ethics Committee of China Pharmaceutical University.
Cell Culture and Treatments
HepG2 cells obtained from China Infrastructure of Cell Line Resources (Beijing, China) were cultured in MEM supplied with 10% (V/V) FBS, 100 units/mL penicillin, and 100 mg/mL streptomycin in a 37 • C, humidified incubator containing 5% CO 2 . Cells were seeded in six well plates and grown to 70% confluence followed by a one hour pretreatment with opti-MEM. Then, cells were treated with 1 µM simvastatin and/or 5 µM lunasin for 24 h, respectively. Cells treated with opti-MEM were used as the control. Total RNA and protein were extracted for qRT-PCR and Western blot analysis.
LDL Uptake Assay
The assay was conducted as described previously [40] with slight modification. Briefly, HepG2 cells were maintained in MEM supplemented with 10% FBS. The cells were seeded in 96 well black plates at a density of 1 × 10 4 cells per well and grown to 70% confluence. Then, cells were incubated with serum-free opti-MEM for 1 h, followed by incubation with 1 µM simvastatin and/or 5 µM lunasin for 20 h. Thereafter, 20 µg/mL Dil-LDL were added, and the cells were incubated in the dark for an additional 4 h. Cells incubated with opti-MEM without Dil-LDL were used as the negative control. Cells incubated with Opti-MEM and 20 µg/mL Dil-LDL were used as the control for normalization, respectively. After rinsing with PBS 3 times, LDL uptake was measured on a fluorescence plate reader (Varioskan flash, Thermo, Waltham, MA, USA) at an excitation wavelength of 520 nm and an emission wavelength of 580 nm.
qRT-PCR
Total RNA was extracted from cells or liver tissue samples using RNAiso plus reagent and reverse transcribed into cDNA by a commercial reverse transcription kit. QRT-PCR was performed with 50~100 ng cDNA template and specific primers on an MX3000PTM qRT-PCR amplifier (Agilent Scientific, Palo Alto, CA, USA) using SYBR ® Premix Ex Taq ™II (Takara, Shiga, Japan) according to the manufacturer's protocols. Primers for each gene are shown in Table 1. Target mRNA expression levels in each sample were normalized to the housekeeping gene β-actin. The 2 −∆∆Ct method was used to calculate relative mRNA expression levels [41]. Each run was completed with a melting curve analysis to confirm the specificity of amplification and lack of primer dimers.
Western Blot
Cells or liver tissue samples were lysed or homogenized by using a glass homogenizer in cold RIPA lysis buffer (Solarbio, Beijing, China) containing 1 mM PMSF (Amresco, Solon, OH, USA), and the supernatant was collected. Total protein concentrations were determined by the BCA protein assay kit (Biouniquer, Beijing, China). An equal amount of proteins from each sample was separated by 10% SDS-PAGE and transferred onto a 0.22 µm PVDF membrane (Merck Millipore, Darmstadt, Germany) followed by blocking with TBS solution containing 0.1% (V/V) Tween 20 and 5% (W/V) nonfat milk for 1 h at room temperature. Then, the membrane was incubated with anti-PCSK9, anti-LDLR, anti-HNF-1α antibody, or anti-β-actin antibodies overnight at 4 • C with gentle shaking and subsequently incubated with a peroxidase-conjugated secondary antibody (1:5000) for 1 h at room temperature. Protein bands were developed by electrochemiluminescence (ECL) fluid (Thermo Scientific, Waltham, Massachusetts, USA) and quantified using Image J software.
Small Interference RNA Transfection
Pre-designed siRNAs targeted to human HNF-1α mRNA were forward: CAGUGAGA CUGCAGAAGUA, reverse: UACUUCUGCAGUCUCACUG, according to Luo's study [42]. HNF-1α siRNAs and the negative control siRNA were synthesized by Shanghai Gene Pharmaceutical Technology Co. (Shanghai, China). The day before transfection, 2 × 10 5 per well of HepG2 cells were plated in a 6 well plate with 2 mL MEM medium containing 10% FBS. When cells were grown to 70-80% confluence, cells were transfected with 100 pmol siRNA with 5 µL Lipofectamine 3000 according to the manufacturer's instructions. Then, fresh growth medium was replaced 4 h after transfection. After 72 h transfection, cells were harvested for Western blot analyses. To examine whether gene silencing of HNF-1α affected the down-regulation of PCSK9 expression in HepG2 cells, 1 µM lunasin was added to treat the cells during the last 24 h of culture.
Serum Cholesterol Level Test
Blood samples were harvested from mice eyes and maintained at 4 • C for 4 h. Then, serum samples were obtained by centrifugation at 3000 rpm, at 4 • C, for 10 min, and the concentrations of LDL-C and TC were measured by using determination kits.
Immunohistochemistry Staining
For histological analysis, mice liver samples were fixed in 4% para-formaldehyde/PBS at 4 • C overnight and imbedded in paraffin followed by slicing on a microtome (RM2245, Leica, Germany) at a thickness of 5 µm. Liver sections were dewaxed and boiled in sodium citrate solution (0.01 M, pH 6.0) for 20 min and then treated with 3% H 2 O 2 for 20 min at room temperature followed by blocking with 10% goat serum/TBST for 30 min. Subsequently, the sections were incubated with rabbit anti-PCSK9 polyclonal antibody (1:200) overnight at 4 • C, followed by incubation with Alexa Fluor ® 488 conjugated goat anti-rabbit IgG (1:400) as the secondary antibody for 1 h at room temperature. The cell nuclei were stained with DAPI. Finally, the sections were mounted by glycerin and photographed by Zeiss AX10 fluorescence microcopy (Zeiss, Oberkochen, Germany).
Statistical Analysis
The data are shown as the means ± SEM, and one-way ANOVA was performed using GraphPad Prism software 6.0 (San Diego, CA, USA). A value of p < 0.05 was considered statistically significant.
Conclusions
Conclusively, the present in vitro and in vivo study demonstrated that lunasin could effectively counteract simvastatin induced elevation of PCSK9 by decreasing HNF-1α, thereby synergistically increasing the LDLR level and functionally enhancing LDL uptake in hepatocytes, significantly improving the LDL-C lowering efficacy of simvastatin in ApoE −/− mice ( Figure 6). Simvastatin combined with lunasin could be a novel regimen for hypercholesterolemia treatment.
Serum Cholesterol Level Test
Blood samples were harvested from mice eyes and maintained at 4 °C for 4 h. Then, serum samples were obtained by centrifugation at 3000 rpm, at 4 °C, for 10 min, and the concentrations of LDL-C and TC were measured by using determination kits.
Immunohistochemistry Staining
For histological analysis, mice liver samples were fixed in 4% para-formaldehyde/PBS at 4 °C overnight and imbedded in paraffin followed by slicing on a microtome (RM2245, Leica, Germany) at a thickness of 5 μm. Liver sections were dewaxed and boiled in sodium citrate solution (0.01 M, pH 6.0) for 20 min and then treated with 3% H2O2 for 20 min at room temperature followed by blocking with 10% goat serum/TBST for 30 min. Subsequently, the sections were incubated with rabbit anti-PCSK9 polyclonal antibody (1:200) overnight at 4 °C, followed by incubation with Alexa Fluor ® 488 conjugated goat anti-rabbit IgG (1:400) as the secondary antibody for 1 h at room temperature. The cell nuclei were stained with DAPI. Finally, the sections were mounted by glycerin and photographed by Zeiss AX10 fluorescence microcopy (Zeiss, Oberkochen, Germany).
Statistical Analysis
The data are shown as the means ± SEM, and one-way ANOVA was performed using GraphPad Prism software 6.0 (San Diego, CA, USA). A value of p < 0.05 was considered statistically significant.
Conclusions
Conclusively, the present in vitro and in vivo study demonstrated that lunasin could effectively counteract simvastatin induced elevation of PCSK9 by decreasing HNF-1α, thereby synergistically increasing the LDLR level and functionally enhancing LDL uptake in hepatocytes, significantly improving the LDL-C lowering efficacy of simvastatin in ApoE −/− mice ( Figure 6). Simvastatin combined with lunasin could be a novel regimen for hypercholesterolemia treatment.
Conflicts of Interest:
The authors declare no conflict of interest. | 6,690.6 | 2019-11-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Deformation Behavior of Internal Porosity in Continuous Casting Wide-Thick Slab during Heavy Reduction
Heavy reduction (HR) is a novel technology that could effectively improve the internal porosities and other internal quality problems in continuously cast steel, during which a large reduction deformation is implemented at and after the strand solidification end. In the present paper, non-uniform solidification of the wide-thick slab was calculated with a two-dimensional (2D) heat transfer model. Based on the predicted temperature distribution at the solidification end of the casting strand, a three-dimensional (3D) thermal-mechanical coupled model was developed for investigating the deformation behavior of the internal porosities in wide-thick slab during HR. An Arrhenius-type constitutive model for the studied steel grade was derived based on the measured true stress-strain with single-pass thermosimulation compression experiments and applied to the 3D thermal-mechanical coupled model for improving the calculation accuracy. With the developed 3D thermal-mechanical coupled model, deformation behavior of the two artificial porosities located at the slab center of 1/2 width and 1/8 width during HR was investigated under different condition of HR deformation, HR start position, and HR reduction mode. Based on the calculated porosity closure degree (ηs) and the corresponding equivalent strain (εeq) under different HR conditions, a prediction model that describes the quantitative relationship between ηs and εeq was derived for directly and accurately evaluating the process effect of HR on improving the internal porosities in wide-thick slab.
Introduction
Due to solidification shrinkage and gas entrapment, internal porosity often occurs in casting steel.As one kind of the common internal defects, it seriously influences the mechanical properties of the final products, for example, decreasing the fatigue life and the yield strength, and should be eliminated in the subsequent rolling or forging process.
To provide theoretical guide for process design of rolling or forging, many investigations were carried out by previous researchers to clarify the closure mechanism of internal porosity in metal materials during the forging or rolling process.To quantitatively evaluate the porosity closure during forging process, Tanaka et al. [1] proposed the hydrostatic integration parameter, which was widely adopted as an indicator of the porosity closure degree by subsequent researchers [2][3][4][5].In addition to the hydrostatic integration parameter, many other researchers [6][7][8][9]] also adopted effective strain as the indicator of porosity closure degree during hot working process, and different threshold values mboxciteB7-metals-421132,B8-metals-421132,B9-metals-421132 of effective strain for eliminating the internal porosity were reported.Recently, a void aspect ratio evaluation index, defined as a function of stress deviator, effective strain, and effective stress, was proposed by Chen et al. [10], which could give an accurate description of the porosity evolution during forging process.After carrying out full-scale hot-rolling experiments, Ståhlberg et al. [11] concluded that temperature gradient between the lower temperature on workpiece surface and the higher temperature in its internal region, as obtained by water cooling, was advantageous for the elimination of internal porosity with a relatively small rolling reduction.The effect of temperature gradient was further discussed in the later studies of forging [4,12] or rolling [3,13,14] with numerical or experimental methods, and all of these investigations confirmed the promotion effect of temperature gradient on the porosity closure.During forging process, die shape is another critical process parameter that influences the process effect on eliminating the internal porosities in workpiece, and in order to design an optimum die geometry, some studies were conducted by previous researchers to investigate the effect of die shapes on porosity closure during open die forging [6,15,16], upset process [4], and hot radial forging [17], et al.
Internal porosities in casting steel usually could be eliminated by rolling or forging process.However, due to the increased solidification time for casting steel with a large section size, its internal porosities become more serious with the presence of coarser cast structure [4,18].Under condition of large components that were produced by rolling or forging with a relatively low compression ratio [11,[19][20][21], the serious internal porosities in large section size casting steel could not be easily eliminated, which seriously influences the mechanical properties of the final products.Meanwhile, as one of the main counter measures of the internal porosities, the traditional mechanical soft reduction (SR) [22,23] was proved to be insufficient on significantly improving the serious internal porosities in the casting steel with a large section size.In order to improve the serious internal defects in continuous casting steel significantly, some earlier researchers [24][25][26][27] proposed heavy reduction (HR) technology.By implementing a large reduction deformation around the strand solidification end, the internal quality of continuous casting steel could be significantly improved by HR, which effectively contributes to the complete elimination of internal porosity in the subsequent rolling or forging process.
As an effective counter measure of internal defects in continuous casting steel, HR has attracted more and more researchers' attention with the rapidly increased demand for large components in the large equipment manufacturing industry in recent years.Some theoretical and experimental investigations were carried out recently for studying the improving effect of HR on porosity and other internal defects in continuous casting bloom [28,29], billet [30], or slab [19][20][21], and some new HR technologies were then proposed and applied.By establishing a three-dimensional (3D) thermal-mechanical coupled model, the present authors [28] studied the deformation behavior of continuous casting bloom during HR and developed the two-stage sequential heavy reduction technology.Industrial trials indicated that the homogeneity and compactness of the continuous casting bloom could be obviously improved after the application of two-stage sequential heavy reduction technology.Based on numerical simulation results, two kinds of new HR technologies, named as START and HRPISP, were respectively proposed by Xu et al. [19] and Zhao et al. [20,21] for simultaneously improving the internal porosity and macro-segregation in continuous casting wide-thick slab, and the effectiveness of START was proved by experimental results in plant.
To provide theoretical basis for the development of the HR process and thus improve the internal porosities in wide-thick slab more effectively, the porosity deformation behavior during HR in the wide-thick slab continuous casting process was systematically investigated mainly by the numerical simulation method in the present work.A two-dimensional (2D) heat transfer model was established to calculate the non-uniform solidification process of the wide-thick slab.Based on the predicted heat transfer results by the 2D heat transfer model and the derived constitutive model for the studied steel grade during HR, a 3D thermal-mechanical coupled model, containing two artificial spheroidal porosities, respectively, located the slab center of 1/8 width (P 1/8 ) and 1/2 width (P 1/2 ), was established.With this 3D thermal-mechanical coupled model, the deformation behavior of P 1/8 and P 1/2 during HR was numerically investigated under different HR conditions, including the HR deformation, HR start position, and HR mode.Based on the predicted porosity deformation results under different HR conditions and the corresponding equivalent strain (ε eq ), a prediction model for the porosity closure behavior was derived to describe the quantitative relationship between the porosity closure degree (η s ) and ε eq .
2D Heat Transfer Model
During HR, the deformation behavior of the casting strand is closely related to its temperature distribution.In order to improve the calculation efficiency, a 2D heat-transfer model, as shown in Figure 1, was firstly developed with the commercial finite element software MSC.Marc (2013.0.0,MSC Software Corporation, Newport Beach, CA, USA) based on the practical casting conditions in Table 1 and some simplified conditions [31].Heat transfer analysis was then carried out with this model, which determined the strand solidification end and the corresponding initial temperature field for the subsequent 3D thermal-mechanical coupled model in Section 2.2.Due to the symmetry of heat transfer behavior of the casting strand along its width direction, half of the wide-thick slab transverse section, as shown in Figure 1, was taken as the calculation domain.Four-nodes quadrilateral elements with a side length of 5 mm were applied to uniformly mesh the calculation domain, and the final 2D heat transfer model contains 11,200 elements and 11,457 nodes.Automatic time step with 0.1 s and 1 s taken as the minimum and maximum time step, respectively, was adopted during the calculation.Thermal material properties and the cooling boundary conditions are two critical factors that influence the calculation accuracy of the 2D heat transfer model.In order to improve the calculation accuracy, thermal material properties of the studied steel grade, such as the conductivity, density, and enthalpy, were calculated with weighted averaging of phase fraction method [32][33][34], and the final thermal material properties of the studied steel grade can be found in our previous work.[34] In addition to thermal material properties, cooling boundary conditions is another critical factor that directly determines the calculation accuracy of the 2D heat transfer model.When compared with the conventional continuous casting slab with a relatively small section size, solidification of the wide-thick slab is obviously non-uniform along its width direction due to the large section size and the non-uniform water flux distribution in the secondary cooling zone of the continuous casting machine, and the final solidification region of the wide-thick slab was located around 1/8 width of its transverse section [33,34].In order to accurately determine the complicated cooling boundary conditions in the secondary cooling zone, the non-uniform cooling water flux distribution in this cooling zone was measured and applied during the calculation of cooling boundary conditions in the secondary cooling zone.More detailed information about the measured water flux distribution and the calculation method of the cooling boundary conditions in mold, secondary cooling zone, and air cooling zone for the studied wide-thick slab continuous casting machine could be found in our previous work [34].
3D Thermal-Mechanical Coupled Model
In order to simulate the evolution of internal porosities in the wide-thick slab during HR, a 3D thermal-mechanical coupled model, as shown in Figure 2, was developed with MSC.Marc.Due to the symmetry of the strand deformation behavior during HR, a section of half the wide-thick slab along its width direction was taken as the calculation domain.The deformation of the wide-thick slab that is caused by HR is obviously much larger than the thermal deformation.Therefore, the much smaller thermal deformation of the wide-thick slab during HR was neglected in the 3D thermal-mechanical coupled model.For the studied wide-thick slab continuous casting machine, HR can be implemented by one or several HR segments.Each HR segment, as shown in Figure 2, contains five pairs of rollers, and the roller diameter and roller pitch are 390 mm and 410 mm, respectively.During HR, roller gap linearly decreases from entrance (Roller 1#) to exit (Roller 5#) of the segment, and rollers of the HR segment are regarded as rigid bodies without considering their small deformation during HR.The friction factor between rollers and the wide-thick slab was set as 0.3 [13] and the adopted contact detection method between rollers and the casting strand was node to segment contact algorithm [28].
During the calculation of the 3D thermal-mechanical coupled model, the slab temperature field at each increment was firstly solved by the solver based on the corresponding cooling boundary conditions and the thermal material properties.Secondly, the mechanical properties in the 3D thermal-mechanical coupled model were then updated mainly based on the temperature field at the present increment and the derived constitutive equations (Equations ( 1) and ( 2)), and the deformation behaviour of the wide-thick slab was then solved based on the mechanical boundary conditions during HR.
The practical production results, as shown in Figure 3, indicate that serious porosities are centrally distributed around the slab centerline, and the porosity size is usually less than 5 mm.During the development of the 3D thermal-mechanical coupled model, porosity was simplified to be a spheroidal void with a diameter of 3 mm and located at the slab centerline.Due to the non-uniform solidification of the wide-thick slab, there still remains a small mushy region, as shown in Figure 4, around 1/8 width at the strand solidification end when the solid phase fraction (f s ) at the slab center of 1/2 width reaches 1.0.This means that the temperature distribution and variation around the region of 1/8 width differs from those at the other regions during HR at the strand solidification end, which will impact the porosity deformation behavior.Therefore, a total of two artificial spheroidal porosities (each with a diameter of 3 mm) were created at the slab centerline of 1/8 width (P 1/8 ) and 1/2 width (P 1/2 ) in the 3D thermal-mechanical coupled model for investigating the porosity deformation behavior in these two typical regions during HR. Figure 4 schematically shows the distribution of these two artificial porosities on the slab transverse section in the 3D thermal-mechanical coupled model, and the artificial porosity located at 1/2 width (P 1/2 ) in Figure 4 can be also seen in Figure 2 and it is located on the symmetrical surface of the 3D thermal-mechanical coupled model.In order to improve the calculation accuracy and efficiency, the calculation domain of the 3D thermal-mechanical coupled model was nonuniformly meshed with four-nodes tetrahedral elements.Fine elements with a side length of ~0.3 mm are distributed around the two artificial porosities, while coarser elements with a side length of ~10 mm are distributed around the slab surface.The final 3D thermal-mechanical coupled model contains 540,836 elements and 99,592 nodes.Automatic time step was adopted during the simulation, and the maximum and the minimum time step were 0.01 s and 1 s, respectively.
In order to accurately describe the metal flow behavior of the wide-thick slab during HR, the true stress-strain of the studied steel grade was measured at different temperatures and strain rates, and the measured results are presented in Figure 5. Based on the measured results in Figure 5, an Arrhenius-type constitutive model was derived with the similar method adopted in our previous work for establishing an Arrhenius-type constitutive model of GCr15 steel [35].The derived constitutive model of the studied steel was then applied to the 3D thermal-mechanical coupled model, and the strain and stress in the constitutive model were adopted as equivalent strain and equivalent stress in the 3D thermal-mechanical coupled model.The derived constitutive model can be expressed, as follows: where σ is the stress, MPa; A and α are material constants; n is the material's stress index; Z is the Zenner-Hollomon parameter; . ε is the strain rate, s −1 ; Q is the activation energy of hot deformation, J mol −1 ; R is the ideal gas constant, (8.314 J mol −1 K −1 ); and, T is the temperature, • C. The strain-dependent parameters of α, A, n, and Q were derived based on the measured true stress-strain curves and could be calculated with Equation ( 2) and the corresponding parameters that are listed in Table 2.For temperature of >1300 where σ 1300 is the flow stress at 1300 • C and a certain specified strain rate and strain and can be determined by Equation (1); η is the temperature-dependent attenuation coefficient and it can be expressed as: where σ p1300 is the peak stress of the measured true stress-strain curve at 1300 • C and 0.001 s −1 and equal to 8.431; and, T is the temperature of >1300 • C; c and d are equal to 5.898 × 1018 and −5.712, respectively, based on the variation trend of peak stress with temperature at the strain rate of 0.001 s −1 .
Model Validation
In order to show the accuracy of the derived Arrhenius-type constitutive model, flow stress under different temperature and strain rates was calculated with this constitutive model, and the calculated results are compared with the measured ones in Figure 6.Based on the measured and the calculated results in Figure 6, the standard statistical parameters of average absolute relative error (AARE) for the measured and the calculated values, which has been adopted in our previous work [35], was calculated with Equation (5).It is found that the value of AARE is about 4.7%, which proves the accuracy of the derived constitutive model.
AARE(
where E i is the measured value and P i is the calculated value by the derived constitutive model; and, N is the total number of data sets in Figure 6. In order to verify the accuracy of the 2D heat transfer model, the temperature of the slab inner surface at different strand positions were measured with a thermal infrared camera (A40, FLIR, FLIR Systems Inc., Goleta, CA, USA) when the wide-thick slab with a transverse section size of 2000 mm × 280 mm was cast at 0.8 m/min.The measured results are compared with the calculated ones in Figure 7, which indicates that the calculated temperature by the 2D heat transfer model agree well with the corresponding measured results and that the relative error between the calculated and the measured results is less than 2.2%.It should be noted that, due to the non-uniform cooling water flux distribution in the secondary cooling zone, the slab surface temperature at 1/8 width, as shown in Figure 7, is higher than that at 1/2 width.For this reason, the obvious mushy region, as has been presented in Figure 4, still could be observed around 1/8 width when f s reaches 1.0 at the strand solidification end.
In order to verify the 3D thermal-mechanical coupled model, plant trial of HR was carried out.During the plant trial, the casting strand of the wide-thick slab moved through the HR segment from entrance (Roller 1#) to exit (Roller 5#) and f s (solid phase fraction at the slab center) at the entrance of the HR segment is 1.0.The reduction force of the HR segment was measured in real time by the pressure sensors that were installed in the hydraulic cylinders of the HR segment.The calculated reduction force of the HR segment in the 3D thermal-mechanical coupled model can be determined by adding up the calculated reduction force of Roller 1#-5#.Figure 8 compares the actual measured reduction force of the HR segment during plant trial with the corresponding calculated results by the 3D thermal-mechanical coupled model.It can be seen that the calculated reduction force shows good agreement with the actual measured results.The relative error between the calculated and the actual measured results is less than 3.2%.
Results and Discussion
Figure 9 schematically shows the porosity dimension along the slab thickness direction (corresponding to X axis), casting direction (corresponding to Y axis), and the slab width direction (corresponding to Z axis) for the two created artificial porosities in the 3D thermal-mechanical coupled.In order to quantitatively describe the porosity deformation behavior along three axis directions, the porosity deformation degree was defined: where ∆l x , ∆l y , and ∆l z are the porosity deformation degree along the slab thickness direction, casting direction, and the slab width direction; L x , L y , and L z are the porosity axis length along three axis directions before HR; L x , L y , and L z are porosity axis length along three axis directions after HR.In order to quantitatively describe the overall deformation behavior of each artificial porosity in the 3D thermal-mechanical coupled model and evaluate the process effect of HR on improving the internal porosity, the porosity closure degree was defined based on the porosity axis length before and after HR: where η s is the porosity closure degree after HR and it ranges from 0 to 1.A larger value of η s indicates a better process effect of HR on improving the internal porosity.
Porosity Deformation Behavior after Different HR Deformation
The porosity deformation behavior was firstly investigated under the condition that different HR deformation was uniformly implemented by the HR segment at the strand solidification end (f s at the entrance of the HR segment is 1.0).The porosity deformation degree and the closure degree of P 1/2 and P 1/8 after different HR deformation are presented in Figure 10.The thickness reduction in each figure represents the magnitude of HR deformation implemented by the HR segment.It can be seen from Figure 10a to c that the porosity deformation degree along the slab thickness direction (∆l x ), the casting direction (∆l y ), and the slab width direction (∆l z ) continuously increase with thickness reduction (represents the HR deformation implemented by the HR segment) increased.The values of ∆l x in Figure 10a and ∆l z in Figure 10c are negative, which is opposite to that of ∆l y in Figure 10b.This means that the porosity size decreases along the slab thickness direction and the slab width direction, and it meanwhile increases along the casting direction during HR.When compared with the porosity deformation degree along the casting direction (∆l y ) and the slab width direction (∆l z ), the magnitude of ∆l x is much larger, which indicates that the major deformation of the porosity is along the slab thickness direction.During HR, the internal porosity is continuously improved with the implemented HR deformation increase.As a result, an increasing trend for the porosity closure degree (η s ) can be observed in Figure 10d.10 also indicates that there exists difference between the deformation behavior of P 1/2 and that of P 1/8 , and the closure degree of P 1/8 is 9.7% larger than that of P 1/2 after 10% HR deformation.This indicates that the porosity at the slab center around 1/8 width can be improved more effectively during HR at the strand solidification end.Two possible factors may contribute to the difference of porosity deformation behavior at 1/8 width and 1/2 width: the different location of P 1/8 and P 1/2 and the different temperature distribution around 1/8 width and 1/2 width.
In order to investigate the influence of porosity location, the deformation behavior of P 1/8 and P 1/2 was calculated under the condition that the cooling water flux distribution in the secondary cooling zone of the wide-thick slab continuous casting machine and the corresponding solidification process of the wide-thick slab were assumed to be uniform along the slab width direction.As the porosity mainly deforms along the slab thickness direction, only the porosity deformation degree along the slab thickness direction (∆l x ) and the porosity closure degree at 1/8 width and 1/2 width are compared in Figure 11a and b, respectively.The comparison results in Figure 11 show that the difference between the porosity deformation behaviour at 1/8 width and 1/2 width during HR is very small.This indicates that the influence of porosity location on the porosity deformation behavior during HR is not obvious under uniform cooling condition and also simultaneously proves that the difference of porosity deformation behavior at 1/8 width and 1/2 width is mainly caused by the different temperature distribution around these two regions.In order to further clarify the influence of temperature distribution on the porosity deformation behavior at 1/8 width and 1/2 width during HR, the variation of temperature, and the corresponding temperature difference at different typical locations (L s 1/8 , L c 1/8 , L s 1/2 , L c 1/2 ) shown in Figure 12a during HR at the strand solidification end are compared in Figure 12b,c, respectively.T s 1/8 , T c 1/8 , T s 1/2 , T c 1/2 in Figure 12b are the calculated temperature at L s 1/8 , L c 1/8 , L s 1/2 , L c 1/2 respectively, and ∆T 1/8 (∆T 1/8 = T c 1/8 − T s 1/8 ) and ∆T 1/2 (∆T 1/2 = T c 1/2 − T s 1/2 ) in Figure 12c represent the temperature difference between the slab surface and center at 1/8 width and 1/2 width, respectively.
It can be seen from Figure 12b that temperature at different typical locations overall present a decreasing trend from entrance (Roller 1#) to exit (Roller 5#) of the HR segment during HR at the strand solidification end.However, when compared with the gradual temperature variation at the slab surface (T s 1/8 and T s 1/8 ), temperature variation at the slab center (T c 1/8 and T c 1/2 ) are much more remarkable.As a result, the variation trend of ∆T 1/8 and ∆T 1/2 in Figure 12c are similar with that of T c 1/8 and T c 1/2 in Figure 12b.As mentioned above, a small mushy region still remains around 1/8 width at the strand solidification end when f s reaches 1.0.This means that the decrease of T c 1/8 and ∆T 1/8 during HR at the strand solidification end can be slowed down to some extent by the released heat from the remained small mushy region around 1/8 width.For this reason, the temperature difference at 1/8 width (∆T 1/8 ), as shown in Figure 12c, is larger than that at 1/2 width (∆T 1/2 ) during HR, except at the entrance (Roller 1#) of the HR segment.Combined with the larger closure degree of P 1/8 than that of P 1/2 in Figure 10d, it can be concluded that, due to the larger temperature difference at 1/8 width, porosity around this region can be improved more effectively by HR at the strand solidification end.During the hot working process, the deformation degree at one position of the workpiece can be quantitatively evaluated by the corresponding equivalent strain, and for this reason, many previous researchers [6-9] adopted equivalent strain as an indicator of closure degree of internal porosity in workpiece.
Figure 13 compares the equivalent strain at L c 1/8 (corresponding to the location of P 1/8 ) and L c 1/2 (corresponding to the location of P 1/2 ) after different HR deformation implemented by the HR segment at the strand solidification end.It can be seen that, when the HR deformation increased, equivalent strain at L c 1/8 and L c 1/2 after HR continuously increase.As a result, the porosity at 1/8 width and 1/2 width can be continuously improved, and the corresponding porosity closure degree in Figure 10d shows a rising trend with HR deformation increased.However, it should be noted that the equivalent strain at L c 1/8 is overall larger than that at L c 1/2 .This proves that, due to the larger temperature difference at 1/8 width, as mentioned above, HR deformation can transfer from the slab surface into its center more effectively for better improving the internal porosities around this region.
Influence of HR Position on the Porosity Deformation Behavior
For the studied wide-thick slab continuous casting machine, HR can be implemented by one or more HR segments, and the reduction position can be flexibly changed by adjusting the roller gap of the corresponding HR segment.In order to study the influence of HR position on the porosity deformation behavior, the porosity deformation behavior was calculated with 6% HR deformation implemented by one HR segment at different strand position.
Figure 14a−d show the calculated porosity deformation behavior, and the abscissa axis in each figure represents the HR start position (corresponding to Roll 1# of the HR segment) after the strand solidification end.With HR start position moving away after the strand solidification end, the porosity deformation degree along the slab thickness direction in Figure 14a and along the slab width direction in Figure 14c both decrease, while the porosity deformation degree along the casting direction in Figure 14b presents an increasing trend.This indicates that, with the HR starting position moving away after the strand solidification end, the porosity size after HR increases along three axis directions.Figure 14d shows the porosity closure degree after HR implemented at different strand positions.When compared with the porosity closure degree after HR implemented at the strand solidification end, the closure degree of P 1/8 and P 1/2 decrease by 9.3% and 6.3%, respectively, with the HR starting position moving away by 3 m after the strand solidification end, which indicates that the process effect of HR on improving the internal porosity becomes worse with the HR starting position moving away after the strand solidification end.
Figure 15a,b, respectively, presents the average temperature difference between the slab surface and center within the HR segment and the equivalent strain after 6% HR deformation implemented at different strand position.With the HR starting position moving away after the strand solidification end, the average temperature difference, which could promote the transfer of HR deformation from the slab surface into its center, significantly decreases.As a result, the equivalent strain, which represents the material deformation degree and is regarded as an indicator of porosity closure degree, after HR shown in Figure 15b continuously decreases, which explains the continuously decreasing trend of η s in Figure 15d.With the HR starting position moving away after the strand solidification end, the temperature of the casting strand decreases, and its deformation-resistant ability during HR correspondingly increases.As a result, the required reduction force for the HR segment to implement the same HR deformation, as shown in Figure 16, significantly increases with the HR start position moving away after the strand solidification end.When compared with the required reduction force for the HR segment to implement 6% HR deformation at the strand solidification end, this value increases by ~20% with the HR starting position moving away by 3 m after the strand solidification end.This indicates that the reduction capacity of the HR segment will significantly with the HR starting position moving away after the strand solidification end.From the discussion above, it can be concluded that, due to the significant decrease of porosity closure degree and the reduction capacity of the HR segment, the HR efficiency on improving the internal porosities significantly decreases with the HR starting position moving away after the strand solidification end.
Influence of Reduction Mode on the Porosity Deformation Behavior
In order to implement HR more effectively and thus better improve the internal porosities, the influence of reduction mode on the porosity deformation behavior during HR was investigated.Table 3 compares the HR deformation distribution within the HR segment in five cases, and the variation of the corresponding slab thickness from entrance (Roller 1#) to exit (Roller 5#) of the HR segment are presented in Figure 17.The total HR deformation in each case is 6.0%.In Case 1, HR deformation is uniformly implemented with 1.2% HR deformation at each roller of the HR segment, which represents the traditional reduction mode and it is called UHR (Uniform Heavy Reduction) in the present work.In addition to UHR, a new reduction mode, called SPUHR (Single Point and Uniform Heavy Reduction), was proposed based on the mechanical structure of the HR segment.For SPUHR (corresponding to Case 2 to 5), a relatively larger HR deformation was implemented at Roller 1# by adjusting the hydraulic cylinders that were installed at the entrance of the HR segment.The residual HR deformation was then uniformly implemented from Roller 2# to 5# with a relatively smaller HR deformation at each roller than that at Roller 1#, and the HR deformation at each roller of 2# to 5# was equal due to the limitation of mechanical structure of the HR segment.After 6% HR deformation implemented at the strand solidification end by the HR segment with the reduction mode of UHR (Case 1) and SPUHR (Case 2 to 5), the porosity deformation behaviors are presented in Figure 18.When compared with Case 1, porosity deformation degree along the slab thickness direction in Figure 18a and that along the slab width direction in Figure 18c increase while the porosity deformation degree along the casting direction in Figure 18b decreases with HR deformation at Roller 1# increased in Case 2 to 5.This means that the porosity size after HR decreases along three axis directions with HR deformation at Roller 1# increased.
Figure 18d shows the porosity closure degree after HR in five cases.The porosity closure degree continuously increases with HR deformation at Roller 1# increased, and, when compared with closure degree of P 1/8 and P 1/2 in Case 1, these two values, respectively, increase by 6.2% and 8.2% with HR deformation at Roller 1# increased to 3.6% in Case 5.This indicates that the porosity can be improved more effectively by HR with the newly-proposed reduction mode of SPUHR and that the effect of SPUHR on improving the HR efficiency becomes more significant with the HR deformation at Roller 1# increased.
Figure 19 shows the equivalent stain at the slab center of 1/8 width and 1/2 width after HR in five cases.The continuously increasing trend of equivalent strain from Case 1 to 5 indicates that the HR deformation could transfer from the slab surface into its center more effectively with HR deformation as Roller 1# increased.As a result, the porosity could be improved more effectively by HR with HR deformation as Roller 1# increased, which explains the increasing trend of porosity closure degree in Figure 18d and it proves the effect of SPUHR on improving HR efficiency.
During HR at the strand solidification end, HR efficiency continuously decreases from the entrance (Roller 1#) to exit (Roller 5#) of the HR segment due to the decrease of temperature difference.Therefore, with more HR deformation being concentrated at Roller 1# for the newly-proposed reduction mode of SPUHR, the final HR efficiency will be improved.However, in addition to temperature difference, another potential factor that may influence the HR efficiency is the distribution of HR deformation within the HR segment.Although the total HR deformation in Case 1 to 5 is equal, the slab deformation behavior at each roller changes with the change of HR deformation distribution within the HR segment, which may influence the final deformation behavior of the slab and its internal porosities after HR, even ignoring the variation of the slab temperature field within the HR segment.To evaluate the influence of HR deformation distribution within the HR segment on the HR efficiency.The porosity closure degree after 6% HR deformation in Case 1 to 5 was calculated with the assumption that the slab temperature field during HR within the HR segment was fixed, and the calculated results are shown in Figure 20.It can be seen that, although the variation of the slab temperature field was neglected during HR, the closure degree of P 1/8 and P 1/2 continuously increases by 5.9% and 5.2% from Case 1 to 5.This proves that HR deformation distribution within the HR segment is another factor that influences the HR efficiency and that the HR efficiency will be improved more significantly with more HR deformation concentrated at Roller 1#.
Prediction Model for Porosity Closure Behavior
During HR, equivalent strain (ε eq ) distribution of the casting steel could be easily determined by conducting thermal-mechanical analysis with a corresponding thermal-mechanical coupled model.This means that the process effect of HR on improving the internal porosities that were distributed at different locations of the casting steel can be directly evaluated with ε eq at the corresponding location if a quantitative relationship between the porosity deformation behavior and ε eq during HR could be established.
In order to derive the relationship between the porosity deformation behavior and ε eq for the wide-thick slab during HR, the calculated closure degree (η s ) of P 1/8 and P 1/2 and the corresponding ε eq under different HR conditions in Section 3 are shown in Figure 21.To quantitatively evaluate the correlation between η s and ε eq , the Pearson correlation coefficient [36] for the scattered data in Figure 21 was calculated with the following formula: where r is the Pearson correlation coefficient; X i and Y i denote ε eq and the corresponding η s for the scattered data in Figure 21; and, X and Y are, respectively, the mean value of ε eq and η s in Figure 21.
The absolute value of r ranges from 0 to 1, and a larger absolute value of r indicates a closer relationship between X and Y.
The calculated result indicates that the r value for the scattered data in Figure 21 reaches as high as 0.9938, which proves that there exists a very close positive correlation between η s and ε eq for the wide-thick slab during HR.In order quantitatively describe the relationship between η s and ε eq , polynomials with different orders were adopted to fit the scattered data in Figure 21 with the help of MATLAB.It was found that the quantitative relationship between η s and ε eq could be well described by a second order polynomial: η s = −5.23 × ε 2 eq + 3.39 × ε eq + 0.12 × 10 −2 , (9) Comparison results between the original data in Figure 21 and the fitting results with Equation (9) are presented in Figure 22.It can be seen that the fitting results agree well with the original data.The adjusted R square (R 2 ) reaches 0.9921, which proves the fitting accuracy.
Conclusions
The deformation behavior of the internal porosities in wide-thick slab during HR was numerically investigated.Some main conclusions are summarized, as follows: (1) After different HR deformation, the internal porosity size decreases along the slab thickness direction and the slab width direction and meanwhile increases along the casting direction, and the porosity deformation degree along the slab thickness direction (∆l x ) is much larger than that along the casting direction (∆l y ) and the slab width direction (∆l z ).Due to the larger temperature difference at 1/8 width during HR, the closure degree (η s ) of P 1/8 is 9.7% larger than that of P 1/2 and it reaches 0.332 after 10% HR deformation.
(2) With HR start position moving away after the strand solidification end, ∆l x and ∆l z decrease, while ∆l y increases.After 6% HR deformation within the HR segment, η s of P 1/8 and P 1/2 decrease by 9.3% and 6.3%, respectively, with the HR starting position moving away by 3 m after the strand solidification end.Meanwhile, the required reduction force for the HR segment increases by 20%.Therefore, HR efficiency on improving the internal porosities significantly decreases with the HR starting position moving away after the strand solidification end.
(3) When compared with the traditional reduction mode of UHR, the newly-proposed reduction mode of SPUHR could improve the HR efficiency.With more HR deformation being concentrated at entrance (Roller 1#) of the HR segment for SPUHR, ∆l x and ∆l z increase, while ∆l y decreases.η s of P 1/8 and P 1/2 after total 6% HR deformation, respectively, increases by 6.2% and 8.2% with HR deformation at Roller 1# increased from 1.2% to 3.6%.
(4) A prediction model for the porosity closure behavior was derived based on the relationship between η s and the corresponding equivalent strain (ε eq ), which can be expressed as: η s = −5.23 × ε 2 eq + 3.39 × ε eq + 0.12 × 10 −2
Figure 4 .
Figure 4. Solidification morphology of the wide-thick slab transverse section when solid phase fraction (f s ) at the slab center reaches 1.0, and the distribution of the two artificial porosities on slab the transverse section in the 3D thermal mechanical model.
Figure 6 .
Figure 6.Comparison between the measured and the calculated results at different temperature and strain rate of (a) 0.001 s −1 , (b) 0.01 s −1 , and (c) 0.1 s −1 .
Figure 7 .
Figure 7.Comparison between the calculated and the measured temperature.
Figure 8 .
Figure 8.Comparison between the measured and the calculated reduction force.
Figure 9 .
Figure 9. Schematic of the porosity dimension along three axis directions.
Figure 10 .
Figure 10.Porosity deformation degree along the (a) slab thickness direction, (b) casting direction, (c) width direction, and (d) the porosity closure degree after different heavy reduction (HR) deformation.
Figure 11 .
Figure 11.(a) Porosity deformation degree along the thickness direction and (b) porosity closure degree after different HR deformation implemented at the strand solidification end under condition of uniform solidification.
Figure 12 .
Figure 12.(a) Distribution of the typical locations on the slab transverse section and variation of the (b) temperature and (c) temperature difference at the typical locations during HR.
Figure
Figure Porosity deformation degree along the (a) slab thickness direction, (b) casting direction, (c) slab width direction, and (d) the porosity closure degree after 6% HR deformation implemented at different strand position.
Figure 15 .
Figure 15.(a) The average temperature difference at 1/8 width and 1/2 width within the HR segment and (b) the equivalent strain at the slab center of 1/8 width (L c 1/8 ) and 1/2 width (L c 1/2 ) after 6% HR deformation implemented at different strand position.
Figure 16 .
Figure 16.The required reduction force for the HR segment to implement 6% HR deformation at different strand position.
Figure
Figure Variation of the slab thickness within the HR segment in Case 1 to 5.
Figure 18 .
Figure 18.Porosity deformation degree along the slab thickness direction, (b) casting direction, (c) slab width direction, and (d) the porosity closure degree after 6% HR deformation in Case 1 to 5.
Figure
Figure Porosity closure degree after HR in Case 1 to 5 ignoring the variation of the slab temperature field during HR.
Figure 21 .
Figure 21.Relationship between the porosity closure degree (η s ) and the equivalent strain (ε eq ).
Figure 22 .
Figure 22.Comparison between the predicted and the fitting results for η s and ε eq .
Table 2 .
Polynomial fitting coefficients of the material parameters (X represents B, C, D, and E in Equation (2)).
Table 3 .
HR deformation distribution within the HR segment in five cases. | 9,592.8 | 2019-01-25T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
QUANT: A Minimalist Interval Method for Time Series Classification
We show that it is possible to achieve the same accuracy, on average, as the most accurate existing interval methods for time series classification on a standard set of benchmark datasets using a single type of feature (quantiles), fixed intervals, and an 'off the shelf' classifier. This distillation of interval-based approaches represents a fast and accurate method for time series classification, achieving state-of-the-art accuracy on the expanded set of 142 datasets in the UCR archive with a total compute time (training and inference) of less than 15 minutes using a single CPU core.
Introduction
Interval methods represent a long-standing and prominent approach to time series classification.Most interval methods are strikingly similar, closely following a paradigm established by Rodríguez et al (2000) and Geurts (2001), and involve computing various descriptive statistics and other miscellaneous features over multiple subseries of an input time series, and/or some transformation of an input time series (e.g., the first difference or discrete Fourier transform), and using those features to train a classifier, typically an ensemble of decision trees (e.g., Deng et al, 2013;Lines et al, 2018).This represents an appealingly simple approach to time series classification (see Middlehurst and Bagnall, 2022;Henderson et al, 2023).We observe that it is possible to achieve the same accuracy, on average, as the most accurate existing interval methods simply by sorting the values in each interval and using the sorted values as features or, in order to reduce the size of the feature space (and, accordingly, computational cost), to subsample these sorted values, i.e., to use the quantiles of the values in the intervals as features.We name this approach Quant.
The difference in mean accuracy and the pairwise win/draw/loss between Quant and several other prominent interval methods, namely, TSF (Deng et al, 2013), STSF (Cabello et al, 2020), rSTSF (Cabello et al, 2021), CIF (Middlehurst et al, 2020), and DrCIF (Middlehurst et al, 2021b), for a subset of 112 datasets from the UCR archive (for which published results are available for all methods), are shown in the Multiple Comparison Matrix (MCM) in Figure 1 (see Ismail-Fawaz et al, 2023).Results for the other methods are taken from Middlehurst et al (2023).As shown in Figure 1, Quant achieves higher accuracy on more datasets, and higher mean accuracy, than existing interval methods.Total compute time for Quant is significantly less than that of even the fastest of these methods (see further below).
When using quantiles (or sorted values) as features, as we increase or decrease interval length, we move between two extremes: (a) a single interval where the quantiles (or sorted values) represent the distribution of the values over the whole time series (distributional information without location information); and (b) intervals of length one, together consisting of all of the values in the time series in their original order (location information without distributional information): see Figure 2.
Quantiles represent a superset of many of the features used in existing interval methods (min, max, median, etc.).Using quantiles allows us to trivially increase or decrease the number of features, by increasing or decreasing the number of quantiles per interval which, in turn, allows us to balance accuracy and computational cost.We find that quantiles can be used with fixed (nonrandom) intervals, without any explicit interval or feature selection process, and with an 'off the shelf' classifier, in particular, extremely randomised trees (Geurts et al, 2006), following Cabello et al (2021).
The key advantages of distilling interval methods down to these essential components are simplicity and computational efficiency.Quant represents one of the fastest methods for time series classification.The cost of computing the quantiles, in particular, is very low.Median transform time over 142 datasets in the UCR archive is less than one second.Total compute time (training and inference) is under 15 minutes for the same 142 datasets using a single CPU core.This is approximately 5× faster than the fastest existing interval method, rSTSF (Cabello et al, 2021), already one of the fastest methods for time series classification.
The rest of this paper is structured as follows.In Section 2, we discuss relevant related work.In Section 3, we set out the key aspects of the method.In Section 4, we present experimental results including a sensitivity analysis.
Interval Methods
Methods closely resembling current state-of-the-art interval methods have been applied to the domain of time series classification at least since Rodríguez et al (2000) and Geurts (2001).Most interval methods are strikingly similar, closely following the basic concept set out in, e.g., Rodríguez and Alonso (2004), namely: • for a set of intervals (subseries) taken from the input time series, and/or some transformation of the input time series such as the first difference or discrete Fourier transfom; • compute descriptive statistics (e.g., mean and variance) and other features for the values in each interval; and • use the computed features to train a classifier, typically an ensemble of decision trees.
Different interval methods are characterised by the set of transformations applied to the input time series, the characteristics of the intervals, the use of interval and/or feature selection, and the choice of classifier.
Many methods use one or more transformations of the input time series.RISE replaces the input time series with spectral, autocorrelation, and autoregressive representations (Lines et al, 2018;Flynn et al, 2019).More recently, the use of the original input time series in combination with the first difference, and some form of frequency domain representation (e.g., the discrete Fourier transform), has lead to significant improvements in accuracy over earlier methods (Cabello et al, 2020(Cabello et al, , 2021;;Middlehurst et al, 2021b).Some methods use fixed intervals, recursively splitting the input time series in half (e.g., Rodríguez and Alonso, 2004), while some use random intervals, recursively splitting the input time series at random points, or sampling intervals with random length and position (e.g., Deng et al, 2013;Baydogan et al, 2013;Cabello et al, 2021).Others methods use heuristic approaches (e.g., Cabello et al, 2020;Altay and Baydogan, 2021).
All or almost all proposed methods use a fixed set of descriptive statistics (e.g., mean and variance), often combined with other features such as slope (e.g., Deng et al, 2013;Middlehurst et al, 2020).Several methods employ some form of explicit feature and/or interval selection process (e.g., Cabello et al, 2020Cabello et al, , 2021;;Li et al, 2023).
Other variations to the basic concept of interval methods include forming 'bag of words' representations of the features extracted from intervals (e.g., Baydogan et al, 2013;Baydogan and Runger, 2016), fitting Gaussian process models to intervals (Berns et al, 2021), and approaches incorporating clustering (Schmidt and Lohweg, 2021).
Most methods use an ensemble of decision trees, including specialised decision trees for interval features such as 'time series trees' (e.g., Deng et al, 2013;Middlehurst et al, 2020Middlehurst et al, , 2021b)), or standard ensembles such as boosted decision trees (e.g., Rodríguez et al, 2001;Geurts, 2001), random forests, or extremely randomised trees (e.g., Cabello et al, 2021).Some methods use other classifiers such as support vector machines (e.g., Rodríguez and Alonso, 2005).In this context, it is worth noting that some earlier methods were proposed prior to the introduction of what are now considered canonical classifiers such as random forests or extremely randomised trees, and prior to or only shortly after the introduction of the UCR archive.
The two most accurate current interval methods on the datasets in the UCR archive are DrCIF (Middlehurst et al, 2021b), and rSTSF (Cabello et al, 2021).Both, in turn, build on TSF (Deng et al, 2013).TSF uses random intervals (intervals with random position and length), and computes the mean, variance, and slope of the values in each interval.TSF uses an ensemble of specialised decision trees ('time series trees'), using a splitting criteria that combines entropy and a tie-breaking procedure, and trains each tree separately using a different set of random of intervals (Deng et al, 2013).Bagnall et al (2017) found that TSF was faster and at least as accurate as other interval methods on the datasets in the UCR archive at the time.
DrCIF builds on CIF (Middlehurst et al, 2020), sampling random intervals (random position and length) from the input time series, first difference, and a periodogram, and computes features including the mean, standard deviation, slope, median, interquartile range, min, max, as well as the catch22 features (Lubba et al, 2019).DrCIF uses a version of 'time series trees' as per TSF, training each tree separately with a random set of intervals and a random subset of features.DrCIF is one of the four components of HIVE-COTE 2 (HC2), the most accurate method for time series classification on the datasets in the UCR archive (Middlehurst et al, 2021b).
rSTSF builds on STSF (Cabello et al, 2020).For each of the original time series, first difference, a periodogram, and an autoregressive representation (the coefficients of an autoregressive model), and for each of the mean, standard deviation, slope, min, max, median, interquartile range, and two additional features (the number of intersections with the mean and the number of values greater than the mean), rSTSF recursively splits the input at random points, selecting intervals using the Fisher score, performing a kind of interval or feature selection.Unlike DrCIF, rSTSF uses an 'off the shelf' classifier, namely, extremely randomised trees.While DrCIF and rSTSF produce similar accuracies on the datasets in the UCR archive, rSTSF is considerably faster (Middlehurst et al, 2023).
Other State-of-the-Art Methods
In the recent 'bake off redux', Middlehurst et al (2023) evaluate the most accurate current methods for time series classification over an expanded set of 142 datasets from the UCR archive.Middlehurst et al (2023) determine that the most accurate methods from each of a diverse set of different approaches to time series classification are: Proximity Forest, FreshPRINCE, rSTSF, WEASEL-D, InceptionTime, RDST, MultiRocket+Hydra, and HC2.
Proximity Forest (PF) is an ensemble of decision trees using distance measures as splitting criteria (Lucas et al, 2019).Proximity Forest 2.0 (PF2) is a recent extension of PF that improves computational efficiency, and uses a different set of distance measures (Herrmann et al, 2023).(PF2 was published concurrently with Middlehurst et al (2023), and is not included in the study.) FreshPRINCE focuses on simplicity, and combines features drawn from the TSFresh feature set, computed over the whole input time series, with a rotation forest classifier (Middlehurst and Bagnall, 2022).
WEASEL-D is a dictionary method, involving extracting and counting symbolic patterns in time series, building on WEASEL (Schäfer and Leser, 2017), and uses dilated sliding windows and the Symbolic Fourier Transform with random parameters to extract patterns, in conjunction with a ridge regression classifier (Schäfer and Leser, 2023).
InceptionTime is an ensemble of convolutional neural network models based on the Inception architecture (Ismail Fawaz et al, 2020), and represents the most accurate deep learning model on the datasets in the UCR archive.
RDST is a shapelet method, computing features based on the distance between input time series and a set of discriminative subseries drawn from the training set, which uses randomly-selected shapelets with various dilations, and a ridge regression classifier (Guillaume et al, 2022).
MultiRocket+Hydra combines features from both MultiRocket and Hydra.MultiRocket is an extension of Rocket and MiniRocket.Rocket transforms input time series using a large set of random convolutional kernels (random in terms of their length, weights, bias, dilation, and padding), and uses both PPV ('proportion of positive values') and max pooling (Dempster et al, 2020).MiniRocket uses a small, fixed set of convolutional kernels and PPV pooling, allowing for highly-optimised computation, and is significantly faster than Rocket (Dempster et al, 2021).MultiRocket combines the kernels from MiniRocket with an expanded set of pooling functions, and is close to the most accurate method for time series classification on the datasets in the UCR archive, while being only marginally slower than MiniRocket (Tan et al, 2022).Hydra combines aspects of both Rocket and dictionary methods, counting the occurrence of random patterns, represented by random convolutional kernels, in input time series (Dempster et al, 2023).Hydra is both faster and, with the exception of WEASEL-D, more accurate than other dictionary methods.All four methods employ a ridge regression classifier by default.
HC2 is an ensemble combining TDE (Middlehurst et al, 2021a), a dictionary method predating WEASEL-D, DrCIF, STC, a shapelet method predating RDST, and Arsenal, an ensemble of Rocket models (Middlehurst et al, 2021b).HC2 is the most accurate method for time series classification on the datasets in the UCR archive.
While there has been significant progress in terms of both accuracy and computational cost since Bagnall et al (2017), there is still great variability in the computational efficiency of the most accurate methods, with total compute time on the expanded set of 142 datasets ranging between hours, for the faster methods, and several weeks (Middlehurst et al, 2023).
Method
Quant involves computing quantiles over a fixed set of intervals on the input time series (and three transformations of the input time series), and using the computed quantiles to train a classifier.Compared to both DrCIF and rSTSF, we use: (a) a single type of feature (quantiles); and (b) fixed, dyadic intervals.In contrast to rSTSF, we use no explicit interval or feature selection process (in this sense, feature selection is delegated entirely to the classifier) and, in contrast to DrCIF, we use a standard classifier.The simplicity of our approach allows for exceptional computational efficiency, and helps to clarify the factors which are material to classification accuracy.
The key characteristics of Quant are: • the set of input representations; • the set of intervals; • the features (quantiles); and • the classifier.
We implement Quant in Python, using the implementation of extremely randomised trees from scikit-learn (Pedregosa et al, 2011).Our code and results will be made available at https://github.com/angus924/quant.
Input Representations
Following Middlehurst et al (2021b), we use the original time series, the first difference, , and the discrete Fourier transform, F(X).We find that it is also beneficial to use the second difference, , although the improvement in accuracy is marginal: see Section 4.2.2.We find that it is beneficial to smooth the first difference by applying a simple moving average.Again, the effect seems to be relatively small.We found no consistent improvement in accuracy by smoothing the other input representations.
Intervals
Formally, a time series is a sequence of values ordered in time, X = {x 0 , x 1 , . . ., x n−1 }, where n is time series length.An interval is a contiguous subset of values {x a , . . ., x b }, where a ≥ 0, b > a, b ≤ n − 1.We define interval length as m = b − a, and the number of quantiles per interval as a fraction of interval length, e.g., m / 2 corresponds to computing a number of quantiles equal to half the number of values in a given interval.For present purposes, we assume that all time series are univariate and of the same length.We leave the extension of the method to variable-length and multivariate time series to future work.
In contrast to Cabello et al (2021), and Middlehurst et al (2021b), we use fixed, dyadic intervals.We define our set of intervals in terms of 'depth', d, such that we divide the input time series into {2 0 , 2 1 , . . ., 2 d−1 } intervals of length {n / 2 0 , n / 2 1 , . . ., n/2 d−1 }, as shown in Figure 3.For each depth greater than one we also add the same set of intervals shifted by half the interval length.
Accordingly, the total number of intervals is 2 d−1 × 4 − 2 − d for each input representation.By default, we use a depth of d = min(6, ⌊log 2 n⌋ + 1), meaning that there are 120 intervals per representation, and the smallest intervals are of length max(1, n / 32).
As we use nonoverlapping intervals (treating the 'shifted' intervals as a separate set of intervals at each depth), the total number of features per depth is always proportional to time series length, n, regardless of the number of intervals being constructed.For example, if we take m / 2 quantiles per interval, for a depth of d = 1 we take n / 2 quantiles (where d = 1, m = n), and for a depth of d = 2, we likewise take n / 2 quantiles (m = n / 2, so that 2×m / 2 = m = n / 2).Taking m quantiles per interval is equivalent to using the sorted values.
Features
The sorted values represent the empirical distribution of values in each interval.Quantiles, being a subsample of the sorted values, represent an approximation of the full set of values, that is, an approximation of the empirical distribution.Importantly, this approximation reduces the size of the feature space, in turn reducing computational complexity (in particular, in relation to classifier training).
As noted above, we define the number of quantiles per interval in proportion to interval length.By default, we compute m / 4 quantiles per interval, where m is interval length.(For intervals of length one, we simply use the given value.For a single quantile, we use the median.) Broadly speaking, we find that accuracy increases as the number of quantiles per interval increases, although the actual differences in accuracy are small, and computing more quantiles per interval results in proportionally higher computational cost: see Section 4.2.1.
We find that it is beneficial to compute both: (a) the quantiles of the values in each interval, representing the empirical distribution of the values in each interval; and (b) the quantiles of the values in each interval after subtracting the interval mean, representing the distribution of the values relative to the mean (i.e., such that the values are invariant to level shifts): see Figure 4. We find that an efficient means of doing this is to alternate between both representations by subtracting the interval mean from every second quantile.(We only subtract the mean where m ≥ 2, and the number of quantiles is greater than one.)Note, however, that the effect of using both the original and mean-corrected quantiles, versus only the original quantiles, or only mean-corrected quantiles, is relatively small: see Section 4.2.3.
Classifier
We use extremely randomised trees (Geurts et al, 2006), as per rSTSF (Cabello et al, 2021).The key distinctions with random forests are that extremely randomised trees do not use bagging, and extremely randomised trees consider a random split point for each candidate feature.
Interval methods can potentially produce a large number of features, depending on the number of input representations, the number of intervals, and the number of features per interval.For extremely randomised trees, the typical 'default' number of candidate features per split is the square root of the total number of features (Geurts et al, 2006).
However, a large number of features in combination with a sublinear number of candidate features per split could potentially result in the classifier 'running out' of training examples before adequately exploring the feature space, especially in the context of smaller datasets.In other words, with a sublinear number of candidate features per split, as the size of the feature space grows, the probability of any given feature being considered decreases.
To this end, we find that it is beneficial to increase the number of candidate features per split to a linear proportion of the total number of features, in particular, 10% of the total number features (10% of all features are considered at each split).In effect, we delegate interval and feature selection entirely to the classifier.The results show that this approach is both effective and computationally efficient.
Complexity
We treat the computational cost of sorting the values as an upper bound on the cost of computing the quantiles: O(n log n), where n is time series length.Naively, computing the quantiles for all intervals requires sorting the values in each interval, for each input representation.However, as we use a fixed number of input representations, and a fixed number of intervals, we treat these as constant factors.
In principle, we could sort each input representation once, keeping track of the indices of the sorted values, and then form any interval by selecting the already-sorted values using their indices.In practice, even the naive approach incurs negligible overall computational cost.Median transform time over 142 datasets in the expanded UCR archive is less than one second.The majority of compute time is spent in training the classifier.In other words, any attempt at optimising total compute time should concentrate on reducing the size of the feature space, and/or improving the efficiency of classifier training.We leave further optimisation for future work.
Assuming approximately balanced trees, the complexity of training the classifier is O(p • q log q), where p is the total number of features, and q is the number of training examples (see Louppe, 2014).As we consider a linear proportion of the total number of features at each split, complexity is linear with p (which is, in turn, linear with n).The number of trees is not proportional to the number of training examples, or the number of features, so we treat this as a constant factor.
Experiments
We evaluate Quant on the datasets in the UCR archive, including 30 datasets recently incorporated into the archive, showing that Quant is at least as accurate, on average, as the most accurate existing interval methods, while being meaningfully faster.We also show the effect of key hyperparameter choices including the number of features, the set of input representations, the number of trees, and the number of candidate features per split.
UCR Archive
We evaluate Quant on the datasets in the UCR archive (Dau et al, 2019).We compare Quant with the most accurate existing interval methods, and other state-of-the-art methods for time series classification.For direct compatibility with published results, we evaluate Quant on the same 30 resam- ples per Middlehurst et al (2023).Results for other methods are taken from Middlehurst et al (2023).
The difference in mean accuracy, the pairwise win/draw/loss, and the p value for a Wilcoxon signed rank test-between Quant and other prominent interval methods, namely, TSF, STSF, rSTSF, CIF, and DrCIF, over a subset of 112 datasets from the UCR archive-are shown in the Multiple Comparison Matrix (MCM) in Figure 1 on page 1.In addition, Figure 5 shows the pairwise accuracy of Quant versus the two most accurate existing interval methods, namely, DrCIF (left), and rSTSF (right), on the same subset of 112 datasets.
Quant is more accurate on average than existing interval methods, including DrCIF and rSTSF, although the actual differences in accuracy are small.Quant is more accurate than DrCIF on 65 datasets, and less accurate on 43.Similarly, Quant is more accurate than rSTSF on 65 datasets, and less accurate on 42.However, as the results for all three methods are highly correlated, and the differences in accuracy are mostly small, even small changes in accuracy could change the appearance of the results, in particular, the ratio of wins and losses.
As noted above, thirty additional datasets were added to the UCR archive per the recent 'bakeoff redux' (Middlehurst et al, 2023).Figure 6 shows the MCM for Quant versus current state-of-the-art methods, namely, HC2, MultiRocket+Hydra, RDST, WEASEL-D, InceptionTime, rSTSF, Fresh-PRINCE, and PF (see Section 2), over 30 resamples of the expanded set of 142 datasets.Figure 7 shows the pairwise accuracy of Quant versus rSTSF (left), and HC2 (right), for all 142 datasets.Over these 142 datasets, Quant is reasonably similar to both WEASEL-D and InceptionTime in terms of mean accuracy and win/draw/loss.However, Quant is clearly somewhat less accurate than the most accurate methods (RDST, MultiRocket+Hydra, and HC2).Quant is more accurate than rSTSF on 81 datasets, and less accurate on 56.In contrast, Quant is more accurate than HC2 on only 41 datasets, and less accurate on 97.
However, Quant is noticeably faster than any of these methods.Total compute time (training and inference) over all 142 datasets, averaged over 30 resamples, is less than 15 minutes using a single CPU core, compared to approximately 1 hour 15 minutes for MultiRocket+Hydra, 1 hour 35 minutes for rSTSF, almost 2 hours for WEASEL-D, more than 4 hours for RDST, more than one day for FreshPRINCE, several days for InceptionTime, and several weeks for HC2 and PF.(Timings for Quant are averages over 30 resamples, run on a cluster using Intel Xeon E5-2680 and Xeon Gold 6150 CPUs, restricted to a single CPU core per dataset per resample.Timings for other methods are taken from Middlehurst et al (2023).Different timings are not necessarily directly comparable, due to hardware and software differences.)Using 8 CPU cores, compute time is reduced to 6 minutes.
The training time for the classifier is proportional to the total number of features.Accordingly, we can improve overall compute time by reducing the number of intervals and the number of quantiles per interval.To this end, Figure 17 (Appendix) shows the pairwise accuracy for a faster configuration of Quant (informally, Quant FAST ), using approximately half the number of intervals (d = 5), and half the number of quantiles per interval (m / 8), versus rSTSF.Over 142 datasets, Quant FAST is more accurate than rSTSF on 70 datasets, and less accurate on 66.Total compute time for Quant FAST is approximately 7 minutes 40 seconds using a single CPU core.In other words, Quant FAST achieves almost the same accuracy, on average, as rSTSF, but is approximately 10× faster.
Sensitivity Analysis
We demonstrate the effect of key hyperparameters, namely: • the number of features; • the set of input representations (including smoothing); • subtracting the mean; and • the number of trees and the number of features per split.
Following Herrmann et al (2023), in an effort to avoid the peculiarities of the smallest datasets and the original training/test splits, we conduct the sensitivity analysis using a random sample of 50 of the datasets from the subset of 112 datasets from the UCR archive used in, e.g., Middlehurst et al (2021b), using stratified 5-fold cross-validation (such that, for each fold, 80% of the data is used for training, and 20% of the data is used for validation).In particular, from the subset of 112 datasets, we randomly sample 50 of the 100 datasets where there are at least 100 training examples on an 80/20 split, and at least 5 examples of each class.are very similar.The spread of accuracy values is very small.However, computing more quantiles per interval results in proportionally greater computational cost due to the expanded feature space: m quantiles per interval requires twice the total compute time of m / 2 quantiles per interval.
Number of Features
It is apparent that, when computing a relatively small number of quantiles per interval, accuracy tends to increase as depth increases, up to a depth of approximately d = 6, and then decreases.The same effect is not evident for m / 4 or more quantiles per interval.We believe that this relates to the balance between distributional information and location information in larger versus smaller intervals: see Section 1.The results suggest that, broadly speaking, larger intervals are more informative than smaller intervals.With fewer quantiles per interval, more of the information in larger intervals is discarded, and smaller intervals dominate, which leads to lower accuracy.(It may be possible to counteract this effect by sampling features from larger intervals with higher probability when training the classifier.We leave this for future work.)Configurations using more quantiles per interval appear to be relatively immune to this effect.
While increasing depth significantly increases the number of intervals, the corresponding computational cost is linear with depth, as the total number of features computed at each depth is proportional to input length, rather than the number of intervals: see Section 3.2.
Figure 9 shows the pairwise accuracy for a depth of d = 6 with m / 4 quantiles per interval (the default) versus two extremes in terms of the total number of features, namely, a depth of d = 4 with m / 16 quantiles per interval (left), and a depth of d = 8 with m quantiles per interval (right).While a smaller number of features clearly results in lower accuracy on several datasets, the differences in accuracy compared to a larger number of features are relatively small.
Figure 18 (Appendix) shows compute time versus the number of quantiles per interval for a depth of d = 6.This emphasises the extent to which compute time is dominated by the time required to train the classifier which, in turn, is determined by the size of the feature space.
We note that the results presented here relate to the characteristics of the datasets used in these experiments.In particular, we note that the lengths of most of the time series are relatively short: see Figure 19 (Appendix).In practice, it may be appropriate to adjust the parameters of the transform, e.g., depth, in order to suit the characteristics of a particular dataset.
Input Representations
Figure 10 shows mean accuracy (left), and total compute time (right), for different combinations of input representation.Figure 11 shows pairwise accuracy for the default combination of the input time series, X, first difference, X ′ , second difference, X ′′ , and discrete Fourier transform, F(X), versus: • X, X ′ , X ′′ (left); • X, X ′ , F(X) (centre); and • X, X ′′ , F(X) (right).
Subtracting the Mean
Figure 12 shows the pairwise accuracy for subtracting the mean from half of the quantiles (the default) versus not subtracting the mean from any quantiles (left), and subtracting the mean from all quantiles (right).Subtracting the mean from half of the quantiles results in higher accuracy than either not subtracting the mean, or subtracting the mean from all quantiles.There is no practical effect on compute time.
Number of Trees
Figure 13 shows mean accuracy (left), and total compute time (right), versus the number of trees used in the classifier.Figure 14 shows the pairwise accuracy for 200 trees (the default) versus 50 trees (left), and 800 trees (right).
Unsurprisingly, accuracy tends to increase as the number of trees increases, with a proportional increase in computational expense.However, while there are small but clear differences in accuracy between 50 trees and 200 trees, the differences in accuracy for more than approximately 200 trees are minimal.
Number of Features per Split
Figure 15 shows mean accuracy (left), and total compute time (right), versus the number of candidate features per split as a proportion of the total number of features, p. Figure 16 shows the pairwise accuracy for 0.1 × p (the default) versus √ p (left), and 0.2 × p candidate features per split (right).Note that √ p > 0.01 × p for p < 10,000.
There is a clear advantage in terms of accuracy from increasing the number of candidate features per split to a linear proportion (≥ 0.05 × p) of the total number of features, with a proportional increase in computational expense.However, the differences in accuracy between sampling 5%, 10%, or 20% of the features are minimal.
Conclusion
We demonstrate that a simplified interval method, Quant, using a single type of feature (quantiles), fixed intervals, and a standard classifier, without any separate interval or feature selection process, can achieve the same accuracy as the most accurate current interval methods.Compared to most current state-of-the-art methods for time series classification-many of which require considerable computational resources-Quant is both simpler, and represents a significant improvement in terms of accuracy relative to computational cost.In future work, we intend to explore the extension of the method to variable-length and multivariate time series, as well as further improvements to computational efficiency.
Figure 1 :
Figure 1: Multiple Comparison Matrix for Quant vs TSF, STSF, rSTSF, CIF, and DrCIF, for a subset of 112 datasets from the UCR archive.
Figure 2 :
Figure 2: Sorted values for intervals of decreasing length.Larger intervals contain more distributional information but less location information.
Figure 3 :
Figure 3: An illustration of the set of intervals for a depth of d = 4, including 'shifted' intervals for d > 1.
Figure 4 :
Figure 4: An illustration of quantiles drawn from intervals of length n / 4. We compute quantiles for both the values in each interval, and the values after subtracting the interval mean, representing the distribution of the values and the distribution of the values relative to the mean respectively.
Figure 5 :
Figure 5: Pairwise accuracy of Quant vs DrCIF (left), and rSTSF (right), for a subset of 112 datasets from the UCR archive.
Figure 6 :Figure 7 :
Figure 6: MCM for Quant vs other state-of-the-art methods for 142 datasets from the UCR archive.
Figure 8 Figure 8 :
Figure 8 shows mean accuracy (left), and total compute time (right), in terms of both: (a) the number of intervals, expressed in terms of depth, d; and (b) the number of quantiles per interval, expressed as a proportion of interval length, m.Accuracy improves modestly as the number of quantiles per interval increases, although the accuracy for m / 4, m / 2, and m quantiles per interval
Figure 9 :
Figure 9: Pairwise accuracy for a depth of d = 6 with m / 4 quantiles per interval (the default) vs a depth of d = 4 with m / 16 quantiles per interval (left), and a depth of d = 8 with m quantiles per interval (right).
Figure 12 :
Figure12: Pairwise accuracy for subtracting the mean from half of the quantiles (the default) vs not subtracting the mean from any quantiles (left), and subtracting the mean from all quantiles (right).
Figure 13 :Figure 14 :
Figure 13: Mean accuracy (left), and total compute time (right), vs the number of trees.
Figure 15 :Figure 16 :
Figure 15: Mean accuracy (left), and total compute time (right), vs the number of features per split as a proportion of the total number of features.
Figure 17 :
Figure 17: Pairwise accuracy of Quant FAST vs rSTSF for 142 datasets from the UCR archive.
Figure 18 :
Figure 18: Compute time (training and inference) vs the number of quantiles per interval for a depth of d = 6.
Figure 19 :
Figure 19: Distribution of time series length for the datasets used in the sensitivity analysis.
Figure 20 :
Figure 20: Pairwise accuracy for smoothing (the default) vs not smoothing the first difference. | 7,954.8 | 2023-08-02T00:00:00.000 | [
"Computer Science"
] |
Uncertainties in Dark Matter Indirect Detection
Astrophysical observations interpreted in the standard (ΛCDM) cosmological framework indicate that about eighty percent of matter in the Universe is non-luminous. This dark matter poses a major problem for particle physics: no known particle explains its inferred properties. Observations are most consistent with the assumption that dark matter is composed of weakly interacting massive particles. The discovery of these particles is vital to validate the prevailing dark matter paradigm. In this work, we examine the uncertainties affecting the astrophysical discovery of dark matter particles via secondary cosmic ray emission.
Introduction
Astrophysical observations interpreted in the standard (ΛCDM) cosmological framework indicate that about eighty percent of matter in the Universe is non-luminous.This dark matter poses a major problem for particle physics: no known particle explains its inferred properties.Observations are most consistent with the assumption that dark matter is composed of weakly interacting massive particles.The discovery of these particles is vital to validate the prevailing dark matter paradigm.In this work, we examine the uncertainties affecting the astrophysical discovery of dark matter particles via secondary cosmic ray emission.
Before trying to discover dark matter particles, we should know some of their properties.These properties of dark matter are reconstructed from astrophysical observations, most of which (including the galactic rotational velocities, galactic structure formation, and weak gravitational lensing) indicate that dark matter particles have mass and are present in large numbers around us.Measurements of the cosmic microwave background radiation and the abundance of light elements further suggests that dark matter is not composed of baryonic particles (that is quarks).Since electromagnetic interactions imply light emission, we are led to conclude that dark matter particles may only interact with ordinary matter weakly, either via the standard W and Z bosons, or via an unknown force.Since they are electrically neutral, the simplest assumption is that dark matter particles are their own anti-particles.Their diffuse distribution, inferred from their gravitational effects, indicates that they probably interact weakly with each other.To be present over the observed distance scales, dark matter particles have to be stable on the timescale of the age of the Universe.Lastly, the observed large scale structure indicates that dark matter is cold -its particles are non-relativistic at its present temperature.
Based on the above properties, dark matter particles are being searched for in three major types of experiments.First, since the CERN Large Hadron Collider (LHC) in Geneva was built specifically to explore the electroweak sector of the standard particle model, it is an obvious place for trying to create dark matter particles.While we are in (almost) full control of this experiment, without knowing the exact mass and interaction strength between ordinary and dark matter we can only hope that the energy and luminosity of the LHC are high enough to produce the latter.Second, because it appears that the Solar System is immersed in a high flux stream of dark matter particles, it is natural to try to detect collisions between them and well shielded nuclei in underground laboratories.These experiments have the potential to probe interactions between dark and ordinary matter even beyond the reach of the LHC.However, these experiments are also limited by the unknown mass, interaction strength, flux and velocity distribution of the dark matter particles.Finally, perhaps the most general and unconstrained way to discover dark matter particles is to find traces of their annihilation or decay products in cosmic rays bombarding Earth.This last type of experiment is called indirect dark matter detection and it is a sensible way to find dark matter particles if they are either their own anti-particles or the matter-antimatter asymmetry in the dark sector is not pronounced.In this case, via weak interactions, dark matter particles self annihilate into standard ones and create secondary cosmic rays.Alternatively, if dark matter decays into standard particles (with the lifetime of about 10 26 sec or more) its decay products can contribute to the secondaries.The most promising detection modes are the photon final states or the ones that contribute to anti-matter cosmic rays, such as positrons or anti-protons.In the last few years several anomalies were found in the cosmic positron fluxes by the PAMELA and Fermi-LAT satellites, which could be the first glimpses of dark matter.
However, various factors make indirect detection of dark matter challenging and less straightforward than we would like it to be.First, the immense cosmic ray background originating from ordinary astrophysical sources makes it hard to find the signal contributed by dark matter.Next, in many cases sources of the cosmic ray background are not known or not understood well enough.Finally, an important source of uncertainty and the main subject of our study is the cosmic ray propagation through the galaxy.This propagation is described by the diffusion equation; an equation with many unknown parameters.We review the state of this field of research.We show that using state of the art numerical codes, CPU intensive statistical inference, and the latest cosmic ray observations, the most important of these propagation parameters can be determined with a certain precision.Then we show how to propagate these uncertainties into recent cosmic ray measurements of Fermi-LAT and PAMELA.In the light of these findings we quantify the statistical significance of the present hints of signals in dark matter indirect detection.Finally, we contrast the experimental standing with some of the theoretical dark matter models proposed in the recent literature to explain cosmic ray 'anomalies'.
Experimental status of cosmic electrons and positrons
Experiments detecting cosmic rays near Earth have been finding various unexpected deviations from theoretical predictions over the last twenty years.The local flux of high energy positrons is notoriously anomalous as reported by the • TS [1], • AMS [2], • CAPRICE [3], • MASS [4], and • HEAT [5,6] collaborations.Measurements of the PAMELA satellite stirred great interest by showing an unmistakable rise of the local e + /e − fraction, that deviates significantly from theoretical predictions for E e + > 10 GeV [7].The combined experimental and theoretical uncertainties do not seem to account for such a large excess [8][9][10][11].
The summed flux of electrons and positrons also indicates an anomalous excess of observation over theory, as measured by the • AMS [12], • PPB-BETS [13], and • HESS [14,15] collaborations.The Fermi Large Area Telescope (LAT) satellite confirmed the excess of the e − + e + flux for energies over 100 GeV [16,17].The Fermi-LAT results are consistent with those of the PAMELA collaboration, which measured the cosmic ray electron flux up to 625 GeV [18].To date the Fermi-LAT data are the most precise indication of such an anomaly in the electron-positron spectrum.The Fermi-LAT data differ by several standard deviations from the theoretical calculations encoded in GALPROP by [19].
The problem of cosmic ray background calculation
Between 2008 and 2011, of the order of a thousand papers were devoted to explaining the difference between the experimental measurements of Fermi-LAT and PAMELA and theoretical calculations.Speculation ranged from the modification of the cosmic ray propagation through supernova remnants, to dark matter annihilation.A concise summary of this literature with detailed references can be found in [20] and [21].
Before drawing conclusions from the electron-positron anomaly, however, one has to carefully examine the status of the theoretical understanding of Galactic cosmic rays.Unfortunately, even the origin of the cosmic rays is uncertain.The theory describing the propagation of cosmic ray particles from their birthplace through the Milky Way is based on the diffusion-convection model.The quantitative description of propagation is facilitated by the transport equation.This is a partial differential equation for each cosmic ray species which requires fixing the distribution of initial sources and the boundary conditions.Specifying the initial source distribution is a source of significant uncertainty in these calculations.The local cosmic ray fluxes are obtained as the self-consistent solutions of the set of transport equations.Obtaining these solutions is challenging due to the large number of free parameters such as the convection velocities, spatial diffusion coefficients, and momentum loss rates.
In the rest of this chapter we show how to determine those uncertainties of the electron-positron cosmic ray flux that originate from the propagation parameters of the diffusion equation.First we find the set of propagation parameters that the electron-positron flux is most sensitive to.Then we extract the values of these propagation parameters from cosmic ray data (different from the Fermi-LAT and PAMELA measurements).Based on the values of the propagation parameters most favored by the data we calculate theoretical predictions for the electron-positron fluxes and compare these to Fermi-LAT and PAMELA.By calculating the difference between our predictions and the observed fluxes we are able to isolate the anomalous part of the cosmic e − and e + fluxes.
Similar results have been published in the literature before.However, our results supersede these in two important aspects.First, we show that when analyzed in the framework of the standard propagation model there exists a statistically significant tension between the e − , e + and the rest of the charged cosmic ray fluxes.Second, unlike anyone else before us, we isolate the anomalous contribution within the e − and e + spectrum together with its theoretical uncertainty.
Our analysis uses more charged cosmic ray spectral data points than similar studies before us such as [22].Unlike us [23] use gamma ray data when extracting the background, however this may bias the analysis since gamma rays originating from anomalous electrons or positrons are not part of the background.Our numerical treatment, similar to that of [24], is more complete than the one of [25][26][27].
Our statistical analysis can be considered as an extension of [24] since we calculate the e − , e + background with a theoretical uncertainty.Ref. [24] use 76 spectral data points, while we use 219 which gives us a significant edge over their analyses.The parameters that we freely vary are somewhat different from those of [24].Before we choose the parameter space, we analyzed the sensitivity of the electron and positron spectrum to the parameters to maximize the efficiency of our parameter extraction.Our choice and treatment of the nuisance parameters also differs from [24].Finally, we use a different scanning technique from the one they use.
Cosmic ray propagation through the Galaxy
The propagation of charged particles through the Galaxy can be well described using the diffusion-convection model [28].This model assumes that the charged particles propagate homogeneously within a defined region of diffusion (similar to the leaky box propagation model), while taking the effects of energy loss into account.The diffusive region is assumed to be a solid flat cylinder defined with a radius R and a half-height of L. Its shape is such that it encloses the Galactic plane which confines charged cosmic rays to the Galactic magnetic fields inside it, while cosmic rays outside are free to stream away.The solar system in this diffusive region is defined in cylindrical coordinates as r ⊙ = (8.33 kpc, 0 kpc, 0 kpc) [29].The phase-space density ψ a ( r, p, t) of a particular cosmic ray species a at time t, Galactic position r and with momentum p can be determined by solving the cosmic ray transport equation, which has the general form [30] ∂ψ a ( r, p, t) If the time-scale of cosmic ray propagation (which is of the order of 1 Myr at 100 GeV energies) is much longer than the typical time scales of the galactic collapse of dark matter and the variation in the propagation conditions, then one can assume that the steady state condition holds.In this case, the left hand side of equation 1 can be set to zero and the time dependence of all quantities can be dropped.For our analysis we focus on a simplified version of the transport equation which to a first order approximation is sufficient to describe the propagation of electrons, positrons or anti-protons through the Galaxy and their corresponding spectrum at Earth: where E is the energy of the secondary particle species a.To ensure that on the outer surface of the cylinder the cosmic ray density vanishes, boundary conditions are imposed.Similarly, outside of the diffusive region, these boundary condtions allow the particles to freely propagate and escape.This ensures that the modelling is consistent with the physical picture described above.One also imposes the symmetric condition ∂ψ a /∂r(r = 0) = 0 at r = 0.In momentum space, null boundary conditions are imposed.
The transport of cosmic ray species through turbulent magnetic fields, the energy losses experienced by these particles due to Inverse Compton scattering (ICS), synchrotron radiation, Coulomb scattering or bremsstrahlung and their re-acceleration due to their interaction with moving magnetised scattering targets in the Galaxy is defined by the spatial diffusion coefficient K(E), the energy loss rate b(E) and the diffusive re-acceleration coefficient K EE (E) respectively.The effect of Galactic winds propagating vertically from stars in the Galactic disk can be incorporated by defining the convective velocity V C .The source of the cosmic rays is defined by Q a ( r, E) in equation 2, with a standard source term resulting from the annihilation of dark matter which can be written as: Here σ a v 0 corresponds to the thermally averaged annihilation cross section of the relevant species, and ρ g ( r) is the energy density of dark matter in the Galaxy.The energy distribution of the secondary particle a is defined as dN a /dE and is normalised per annihilation.This formula applies to self-conjugated annihilating dark matter.In the case of non-self-conjugated dark matter, or of multicomponent dark matter, the quantities in Eq. ( 3) should be replaced as follows, where an index i denotes a charge state and/or particle species (indeed any particle property, collectively called "component") and f i = n i /n is the number fraction of the i-th component: (mean cross section times relative velocity), ( 5) The spatial diffusion coefficient K(E) is assumed to have the form where v is the speed (in units of c) and R = p/eZ is the magnetic rigidity of the cosmic ray particles.Here Z is the effective nuclear charge of the particle and e is the absolute value of its electric charge (if we considered particles other than electrons, positrons, protons or anti-protons then this quantity would be different from 1).At low energies the behaviour of the cosmic rays as they diffuse is controlled using the parameter η.Traditionally one will set η = 1 but departures from this traditional value to other values (either positive or negative) have been suggested.More detailed treatments allow one to incorporate spatial dependence into the diffusion coefficient (K( r, E)) and the influence of particle motion on the diffusion of these particles (which leads to anisotropic diffusion).
Synchrotron radiation and Inverse Compton scattering are position dependent phenomena.Synchrotron radiation arises from the interaction of a charged particle with Galactic magnetic fields and thus it depends on the strength of the magnetic field which changes in the Galaxy.Similarly, inverse Compton scattering is dependent on the distribution of background light which varies in the Galaxy.If one neglects this position dependence of energy losses in the Galactic halo and assumes that all energy losses can be described using a relationship that is proportional to E 2 (which is only valid if one neglects energy losses, such as Coulomb losses and bremsstrahlung, and considers only inverse Compton scattering for electrons with relatively low energy -Thomson scattering regime), then the energy loss rate can be parametrized as In more detailed treatments the spatial dependence associated with the energy loss rate b( r, E) would be considered and a more general energy dependence relationship for the energy loss rate would be obtained.Additionally Coulomb losses (dE/dt ∼ const) and bremsstrahlung losses (dE/dt ∼ bE) could also be taken into account.These losses can be calculated using functions dependent on position and energy as well as gas, interstellar radiation and magnetic field distributions [31].
Finally, the diffusive re-acceleration coefficient K EE (E) is usually parametrized as where v A is the Alfvén speed.
A propagator, or Green's function G, is used to describe the evolution of the cosmic ray that originates from a source Q at r S with energy E S through the diffusive halo and reaches the Earth at point r with energy E. This allows the general solution for Eq. ( 2) to be written as The differential flux is related to the solution in Eq. ( 10) via For the propagation of protons or anti-protons in the Galactic halo, additional terms in Eq.
(2) should be introduced to account for spallations on the gas in the disk.
Statistical framework
We use standard Bayesian parameter inference to determine the statistically favored regions of the propagation parameters P = {p 1 , ..., p N } that the electron and positron cosmic ray fluxes are the most sensitive to.For full mathematical details we refer the reader to [21].
Here we only highlight the main concepts used.
Using the experimental data D = {d 1 , ..., d M } and their corresponding theoretical predictions T = {t 1 (P), ..., t M (P)}, as the fuction of the parameters, we construct the likelihood function Here and σ i are the corresponding combined theoretical and experimental uncertainties.Assuming flat priors P (P), we then construct the posterior probability distribution At this stage the value of the evidence E (D) is unknown.After integrating over the whole parameter space its value can be recovered as More relevant to our purpose is the adaptive scan of the likelihood function during this integration which will give us the shape of the posterior distribution over the relevant part of the parameter space (where the likelihood is the highest).Having this shape we can calculate the probability density of a certain theoretical parameter p i acquiring a given value by marginalization Here the integral is carried out over the full range of the parameters.We can also determine Bayesian credibility regions R x for each of the parameters: The above relation expresses the fact that x % of the total posterior probability lies within the region R x .
After examining the electron and positron fluxes for parameter sensitivity we found that the relevant parameters are: These parameters enter into the diffusion calculation as follows; γ e − and γ nucleus are the primary electron and nucleus injection indices parameterizing an injection spectrum without a break; δ 1 and δ 2 are spatial diffusion coefficients below and above a reference rigidity ρ 0 ; and D 0xx determines the normalization of the spatial diffusion coefficient.
We treat the normalizations of all charged cosmic ray fluxes as theoretical nuisance parameters: We discuss other statistical and numerical issues, such as the choice of priors, the systematic uncertainties, sampling and convergence, in detail in [21].
Experimental data used in this analysis
In our statistical analysis we use 219 of the most recent data points corresponding to five different types of cosmic ray experiment.A majority of these data points (114 in total) come from electron-positron related experiments while the other 105 are made up of Boron/Carbon, anti-proton/proton and (Sc+Ti+V)/Fe and Be-10/Be-9 cosmic ray flux measurements.If any of the energy ranges of the experiments overlap, the most recent experimental data point was chosen in that energy range.
There are three main experiments that have measured electrons and positrons over different decades of energy.These experiments include AMS by [12], Fermi-LAT by [17] and HESS by [14,15].The AMS collaboration reported an excess in positrons with energies greater than 10 GeV [12], while the HESS collaboration measured a significant steepening of the electron plus photon spectrum above one TeV as measured by HESS's atmospheric Cherenkov telescope (ACT).Using the Large Area Telescope (LAT) on the Fermi Satellite, the Fermi-LAT collaboration released a high precision measurement of the e + + e − spectrum for energies from 7 GeV to 1 TeV [17], extending the energy range of their previously published results.
Measured flux
How primary and secondary cosmic rays are produced and transported throughout the Galaxy can be studied by using cosmic-ray particles such as anti-protons.One requires a large number of measurements with good statistics over a large energy range to produce detailed anti-proton spectra for study.Anti-proton spectra obtained from previous balloon borne experiments such as CAPRISE98 [3] and HEAT [48] had very low statistics, but recently the PAMELA satellite experiment [33] released a high-quality measurement of the anti-proton/proton flux ratio for an energy range of 1-100 GeV.This spectrum confirmed the behaviour of the anti-proton/proton ratio as observed by previous experiments.
Additionally one can use stable secondary to primary cosmic ray ratios such as Boron/Carbon and (Sc+Ti+V)/Fe ratio to study the variation experienced by cosmic rays as they propagate through the Galaxy.These ratios are particularly sensitive to the properties of cosmic ray propagation as the element in the numerator is produced by a different mechanism to the element that defines the denominator.Primary cosmic rays are produced by the original source of the cosmic rays such as a supernova remnant, while the secondary cosmic rays are generated by the interaction of their primaries with the interstellar medium [49].Ratios that are defined by a numerator and denominator which are produced by the same mechanism, such as a primary/primary or a secondary/secondary cosmic ray ratio, have a low sensitivity to any variation in the propagation parameters.Analysing Galactic Boron/Carbon and (Sc+Ti+V)/Fe ratios allows one to determine the amount of interstellar material transversed by the primary cosmic ray and its energy dependence [49].
Unstable isotopes such as Beryllium-10/Beryllium-9 are also beneficial to analyse as they produce a constraint on the time it takes for a cosmic ray to propagate through the Galaxy [50].Various experiments such as ISOMAX98 [44], ACE-CRIS [45], ACE [46] and AMS-02 [47] have all measured Be-10/Be-9 data with varying statistics.
In Table 1 we state over what energy range and from which experiment we selected the data points that define our spectrum of anti-proton/proton, Boron/Carbon, (Sc+Ti+V)/Fe and Be-10/Be-9 ratio.
For energies below E < 10 GeV, solar magnetic and coronal activities perturb the low energy part of the cosmic ray spectrum.This is called solar modulation and has an important role in determining the observed spectral shape(s) of cosmic rays measured at earth [51,52].Solar modulation is accounted for in GALPROP by using a force field approximation.It should be noted that this is an approximation and does not include important influences such as the structure of the heliospheric magnetic field.To incorporate these effects into our analysis we vary the value of the modulation potential in GALPROP.Following Gast & Schael (2009), we also assume that the positively and negatively charged cosmic rays are modulated differently by solar activities (charge-sign dependent modulation).This charge dependent modulation has a significant effect on positrons and its effect on the anti-proton/proton ratio can be comparable to the experiment's statistical uncertainties.The modulation effect on heavy nuclei such as B, C, Sc, Ti, V, Fe and Be is mild even though these nuclei have a higher positive charge compared to the proton.The reason for this is that the modulation potential is proportional to the charge-to-mass ratio and these heavy nuclei have a much lower charge-to-mass ratio than the proton therefore the effect is minute.Regardless, as we use the ratio of their fluxes, most of the effect of solar modulation on these nuclei is cancelled, thus we can safely absorb this modulation effect into the systematic uncertainties of the experiment.To be able to compare experimental data we set the modulation potential in GALPROP for positrons (electrons solar modulation potential was not substantial over the period of PAMELA's data taking, and approximately the same average value for the potential can be used for Fermi-LAT.
The presence or absence of a cosmic ray anomaly
There has been a plethora of experiments which have hinted at the existence of an anomaly in the electron-positron spectrum.The most notable measurements are the Fermi-LAT electron positron sum and the PAMELA positron fraction.Ref. [53] and [24] have all questioned the reality of the anomaly in the PAMELA and Fermi-LAT data as well as the absence of an anomaly in the anti-proton flux.Ref. [24] suggested that one requires only to readjust the diffusion parameters that define the propagation model as encoded in GALPROP to reproduce the Fermi-LAT data.This conclusion is highlighted clearly in figure 8 of [24], where their propagation model obtained from the best fit to 76 cosmic ray spectral data points agrees well with the Fermi-LAT data.Interestingly, the corresponding positron fraction obtained from their best fit does not agree with the PAMELA data, indicating that one cannot fit the PAMELA data by simply adjusting the parameters of the propagation model.This indicates to us that the anomaly observed in cosmic electron-positron data is real and rather than adjusting the propagation parameters, one has to perform a detailed investigation of its existence and characteristics.
One of our important results is that just by adjusting the parameters in Eq.( 18) and ( 19) it is possible to generate a theoretical prediction which is fully consistent with the Fermi-LAT e − + e + flux measurement.Similarly, we found that both the PAMELA positron fraction and the electron flux can be reproduced by theory.This means that the theory has enough flexibility to accommodate the experimentally measured fluxes.
If one assumes that all types of cosmic ray data (electron-positron related and non-electron-positron related) can be described well using a single set of propagation parameters it quickly becomes obvious that one cannot fit the data simultaneously.To analyse this behaviour, we first divided the cosmic ray data listed in table 1 into two groups: electron and/or positron fluxes as measured by AMS, Fermi, HESS and PAMELA and non-electron and/or positron fluxes such as anti-proton/proton, Boron/Carbon, (Sc+Ti+V)/Fe, Be-10/Be-9.We then attempted to fit only the second group of cosmic ray data, that is, excluding the electron-positron related data.We obtain a χ 2 per degree of freedom of 0.34 from this fit and as a consequence the corresponding best fit curves each pass through all the estimated systematic error bands shown in grey in Fig. 1.When we apply the best-fit parameters to the electron and/or positron flux data, however, we obtain a χ 2 per degree of freedom of 24.Similarly, when we do the converse, i.e. find the best-fit propagation parameters using only the electron-positron related flux, we obtain an excellent χ 2 per degree of freedom of 1.0, but the best-fit parameters gave a larger χ 2 per degree of freedom (3.1) for the non-electron-positron data.As we have a large number (105) of data points the deviations observed between these two sets of data is significant which signals statistically significant tension between electron-positron and non-electron-positron measurements.These results support the conclusion highlighted in Fig. 7 and 8 of [24], that one requires something more than simply adjusting the propagation parameters to accommodate for the cosmic ray anomaly.
This tension between the electron-positron and non-electron-positron measurements was further investigated by performing an independent Bayesian analysis on the two groups of data.This allows us to extract the values of the propagation parameters as preferred by the different sets of data.Interestingly, one can derive information about the propagation parameters of the electron-positron related data from the non-electron-positron related data.This arises due to a number of reasons.Firstly, the value of some propagation parameters such as D 0xx is highly dependent on the species that one is modelling the propagation of.Secondly, to model the propagation of cosmic rays one uses the transport equation (equation 1).In this equation a large number of processes, including nuclear fragmentation and decay, are incorporated, which directly affects the predicted secondary electron-positron flux.Thirdly, as the energy density of cosmic rays is comparable to the energy density of the interstellar radiation field and the local magnetic field, different cosmic ray species will influence the dynamics of each species non-negligibly.
As a consequence, even if no electron-positron related data is used in our fit one can still constrain some of the propagation parameters of the electron-positron data.Unfortunately this method does not constrain the value of injection indices sufficiently, so in order to fix these parameters we have to include a minimal amount of information about the electron-positron related fluxes in our analysis.We selected data points from the e − + e + for four reasons: (1) these points cover the largest energy range; (2) before setting out to find the optimal parameter value, within uncertainties the end points of the e − + e + spectrum agree with theoretical predictions; (3) for low energies the effect of Open Questions in Cosmology Figure 2. Marginalized posterior probability distributions of propagation parameters listed in Eq.( 18).The likelihood functions containing electron and/or positron flux data are plotted as blue dashed lines while the likelihood functions for the rest of the comic ray data are plotted as solid red lines.The 68 % credibility regions are highlighted by the shaded areas of the posteriors.In the lower three frames it is evident that there exists a statistically significant tension between the electron-positron data and the rest.solar modulation on this data is minor; and (4) the theoretical prediction for the e − + e + is insensitive to the value of the propagation parameter for mid-range energies (this is highlighted by the distinct bow-tie shape of the theoretical uncertainty band).
In addition to using non-e ± related data points (i.e.p/p, B/C, (Sc+Ti+V)/Fe and Be data), we also selected four e ± related data points to use in our analysis.This included the lowest energy data point from AMS, the highest energy data point from HESS and the 19.40 GeV and 29.20 GeV data points of Fermi-LAT.We checked that this selection of e ± related data points does not bias the final conclusion and the results that we obtained with this selection are robust.
In figure 2 we have plotted the marginalized posterior probability densities of our selected propagation parameters as obtained by completing a Bayesian analysis on the two sets of data.The blue dashed curve represents the likelihood functions generated for the electron-positron related data (AMS, Fermi-LAT, HESS, and PAMELA), while the red solid curves represent the likelihood functions obtained for the rest of the cosmic ray data (anti-proton/proton, Boron/Carbon, (Sc+Ti+V)/Fe, Be-10/Be-9) listed in table 1.The 68% credibility regions of the likelihood functions are highlighted by the shaded areas of figure 2 and table 2 lists the numerical values of these credibility regions as well as the best-fit values of each propagation parameter.By looking at figure 2 it is obvious that the electron-positron related data and the non-electron-positron related data are inconsistent with the hypothesis that the model of cosmic ray propagation and/or the sources encoded in GALPROP provide a sufficient theoretical description.For the posterior densities of the electron and nucleus injection indices γ e − and γ nucleus shown in the first two frames of figure 2, there is a mild but tolerable tension between the two data sets.In the final three frames of figure 2 the posterior densities for δ 1 , δ 2 and D 0xx are shown.These frames indicate a statistically significant tension between the two sets of data as the 68 % credibility regions of each set for the two spatial diffusion coefficients δ 1 and δ 2 , as well as D 0xx do not overlap each other.Although not shown it is easily extrapolated that not even the 99% credibility regions of these posteriors will overlap.As a consequence we can conclude that if one adjusts the values of the cosmic ray parameters, one can indeed obtain a good fit for either the electron-positron related data or to the rest of the data individually, however, you cannot obtain a good fit for both sets of data simultaneously.
This tension can be interpreted to mean that the data measured by the PAMELA and Fermi-LAT collaborations is affected by new physics that is unaccounted for by the propagation model and/or cosmic ray sources encoded in GALPROP.Based on simple theoretical arguments, the observed behaviour of the PAMELA positron fraction is unexpected.If one attempts to fit this data by simply adjusting the value of the propagation parameters this will lead to a bad fit of the non-electron-positron related data.One also expects that the anomaly in the PAMELA e + /(e + + e − ) would also produce an observable anomaly in other electron-positron related data such as the Fermi-LAT e + + e − and the PAMELA e − spectra.This conclusion agrees with the argument of [24] that "secondary positron production in the general ISM is not capable of producing an abundance that rises with energy".
The observed tension in our data is dramatically increased when one incorporates the recently released PAMELA e − flux [18] in our electron-positron related data.For consistency we checked the result that we would obtain if we excluded this new electron flux data into our Open Questions in Cosmology analysis.We noticed that the tension we observe is significantly milder if it is not included.This, and the effect of using a larger amount of data compared to previous studies, suggests why the tension we observe was not detected by authors such as [24].
The size of the anomaly
As we conclude that new physics is buried within the electron-positron fluxes, we now attempt to extract from the data the size of this new physics signal.Assuming that the new physics affects only the electron-positron fluxes but its influence on the rest of the cosmic ray data is negligible, we can determine the central value and credibility regions of the cosmic ray propagation parameters from the unbiased data: anti-proton/proton, Boron/Carbon, (Sc+Ti+V)/Fe, Be-10/Be-9 to generate a background prediction for all cosmic ray data including the electron-positron fluxes.Once we calculate the theoretical background prediction we can subtract this background from the electron-positron data and determine if a statistically significant signal can be extracted.
To do this we calculate the prediction for the PAMELA and Fermi-LAT electron-positron fluxes by using the central values of the propagation parameters determined using p/p, B/C, (Sc+Ti+V)/Fe, Be-10/Be-9.Then using all the scanned values of all five propagation parameters lying within the 68 % credibility region we generate a 1-σ uncertainty band for the background around this central value.In figure 3 we overlay the uncertainty background in gray over the Fermi-LAT electron+positron and the PAMELA electron and positron fraction fluxes.For the Fermi-LAT and PAMELA e − the statistical and systematic uncertainties were combined in quadrature, while as the PAMELA e + /(e + + e − ) only had statistical uncertainties, we scaled these uncertainties using τ = 0.2 to produce the experimental error bands.The magenta bands correspond to our background predictions, while the green dashed lines and band correspond respectively to the central value and the 1-σ uncertainty of the calculated anomaly.
In figure 3 one can see that our background prediction deviates from the data at energies below ≈ 10 GeV and above 100 GeV.For this analysis we focus on the deviation between the background and the data for energies greater than 100 GeV, while for the deviation observed at low energies we leave this to future research, but we note that this deviation could arise from inadequacies of the propagation model.Based on our background prediction we obtain a weak but statistically significant anomaly signal which we interpret as the presence of a new physics in the Fermi-LAT electron+positron flux.A similar conclusion can be drawn about the PAMELA positron fraction when taking the difference between the central values of the data and the background, but due to sizeable uncertainties of the PAMELA measurement we cannot claim a statistically significant deviation. 1o determine the size of the new physics signal in the electron positron data we subtract the central value of the corresponding background prediction from the central value of the data.The 1-σ uncertainty band of the signal is obtained by combining the experimental and background uncertainties quadratically.In Fig. 3 the results for the electron-positron anomaly are shown.Based on our background predictions, we obtain a non-vanishing anomalous signal for the Fermi-LAT e + + e − flux, while for the PAMELA data we cannot claim the presence of a statistically significant anomaly due to the large uncertainties of the data.
The source of the anomaly
Since the publication of the PAMELA positron fraction [7], there have been numerous publications that have speculated on the origin of the discrepancy between the theoretical prediction of electron-positron spectra and the experimental data.Based on the available evidence we can only postulate on the origin of this deviation.An obvious guess would be that the model used to describe the propagation of electrons and positrons in our Galaxy is insufficient in some respect, which if correct would mean that there exists no anomalous signal in the data.One such reasonable effect which is not incorporated in the two dimensional GALPROP calculation is the spectral hardening of cosmic ray spectra due to the presence of non-steady sources.To confirm these possibilities it would be an interesting exercise to repeat our analysis using different calculation tools such as DRAGON by [55], USINE by [56], PPPC4DMID by [57] or the code of [58].
If one assumes that the propagation model satisfactorily describes the propagation of cosmic rays through our Galaxy it is only natural to suspect that local effects are modifying the distribution of electrons and positrons.The lack of sources included in the GALPROP calculation seem to confirm this suspicion.There have been a plethora of papers that account for this anomaly by proposing various new sources of cosmic rays.There are two major categories of new cosmic ray sources that have been proposed.The first involves known astrophysical objects with uncertain parameters such as supernova remnants, pulsars, or various other objects in the Galactic centre, while the second involves more exotic astronomical and/or particle physics phenomena such as dark matter.Literature discussing these cases is extensively cited by [21].
For energies greater than 100 GeV, energy losses such as inverse Compton scattering of interstellar dust and cosmic microwave background light or synchrotron radiation become important.These effects result in a relatively short lifetime of the electron and positron while simultaneously this causes a decrease in the intensity of these particles as energy increases.
As a result it is hypothesised that a large number of the electrons and positrons detected at Earth with an energy above 100 GeV come from individual sources within a few kilo-parsecs of Earth [51,52].Random fluctuations in the injection spectrum and spatial distribution of these nearby sources can produce detectable differences between the predicted background and the most energetic part of the observed electron and positron spectrum.This deviation could indicate the presence of new physics arising from either an astrophysical object(s) or dark matter.
If the size of the anomalous signal can be isolated from the experimental data then, regardless of the origin of the anomaly, the source will have to produce a signal with those characteristics.In Fig. 4 we compare our extracted signal to a few randomly selected attempts from the literature to match this anomaly.The first frame features the spectrum of electrons and positrons unaccounted for from local supernovae as calculated by [59] .The top right frame shows the contribution from additional primary cosmic ray sources such as pulsars or annihilation of particle dark matter as calculated by [52].The bottom left frame contains the predictions of [60] for anomalous electron-positron sources from dark matter annihilations, while the last frame shows the dark matter annihilation contributions calculated by [61].
If the theoretical uncertainty of a new cosmic ray source and its contribution to cosmic ray measurements at Earth is unknown it can be difficult to draw any conclusion about its contribution to our isolated signal.In the case where the theoretical uncertainty of a new cosmic ray source is known it usually tends to be of significant size that it can prevent us from judging whether it is a valid explanation of our signal.Regardless, we can select a few scenarios that are more likely to be favoured than some others based on the present amount of information we have obtained from our analysis.With more data it will be possible to reduce the size of the uncertainty of our signal, while with more detailed calculations we can produce a more precise prediction of the cosmic ray spectrum as measured at Earth.This may enable the various suggestions of the source of the electron-positron anomaly to be confirmed or ruled out.The various theoretical predictions come from [59], [52], [60] and [61].Currently the comparison is fairly inconclusive but with more data it will be possible to shrink the uncertainty in the determination of the signal.Then various suggestions can be confirmed or ruled out.
Conclusions
Motivated by the possibility of new physics contributing to the measurements of PAMELA and Fermi-LAT, we subjected a wide range of cosmic ray observations to a Bayesian likelihood analysis.In the context of the propagation model coded in GALPROP, we found a significant tension between the e − \e + related data and the rest of the cosmic ray fluxes.This tension can be interpreted as the failure of the model to describe all the data simultaneously or as the effect of a missing source component.
Since the PAMELA and Fermi-LAT data are suspected to contain a component unaccounted for in GALPROP, we extracted the preferred values of the cosmic ray propagation parameters from the non-electron-positron related measurements.Based on these parameter values we calculated background predictions, with uncertainties, for PAMELA and Fermi-LAT.We found a deviation between the PAMELA and Fermi-LAT data and the predicted background even when uncertainties, including systematics, were taken into account.Interpreting this as an indication of new physics we subtracted the background from the data isolating the size of the anomalous component.
The signal of new physics in the electron+positron spectrum was found to be non-vanishing within the calculated uncertainties.Thus the use of 219 cosmic ray spectral data points within the Bayesian framework allowed us to confirm the existence of new physics effects in the electron+positron flux in a model independent fashion.Using the statistical techniques we were able to extract the size, shape and uncertainty of the anomalous contribution in the e − + e + cosmic ray spectrum.We briefly compared the extracted signal to some theoretical results predicting such an anomaly.
Figure 1 .
Figure 1.Best fit curves plotted against non-electron-positron related data.These curves were calculated using the most probable parameter values obtained from the peak values of the posterior probabilities (inferred from p/p, B/C, (Sc+Ti+V)/Fe and Be data) shown in red in Fig. 2. The best fit curves pass through the estimated systematic error bands, shown in gray.
Figure 3 .
Figure 3.The anomalous signal we extracted for various electron-positron fluxes.The green dotted curves (marking the central values) and bands (showing 68 % credible intervals) correspond to the extracted size of the anomaly.The data points correspond to the spectra measured by Fermi-LAT and PAMELA.Combined statistical and systematic uncertainties are shown (by the gray bands) for Fermi-LAT and PAMELA e − , while (τ = 0.2) scaled statistical uncertainties are shown for PAMELA e + /(e + + e − ).Overlaid in magenta is our background prediction (central value curve and 68 % credible intervals).
Figure 4 .
Figure 4. Comparing the detected cosmic ray flux (data points and gray band) and the signal extracted in this work (green dotted curve and band) to potential explanations of the electron-positron cosmic ray anomaly (solid curves).The various theoretical predictions come from[59],[52],[60] and[61].Currently the comparison is fairly inconclusive but with more data it will be possible to shrink the uncertainty in the determination of the signal.Then various suggestions can be confirmed or ruled out.
Table 1 .
In this table we have listed the cosmic ray experiments and the energy ranges of the corresponding data points that we selected for our analysis.There are two sets of cosmic ray data listed in this table.Electron positron flux related experiments make up the first five lines of the table, while all other experiments make up the rest of the table.On these two sets of data, we perform a Bayesian analysis to highlight the tension between the two sets of data.
Table 2 .
Best fit values of the propagation parameters and their 68 % credibility ranges.Numerical values are shown for both fits: including the electron-positron related cosmic ray data only, and including the rest of the data. | 10,260.6 | 2012-11-12T00:00:00.000 | [
"Physics"
] |
Kilohertz serial crystallography with the JUNGFRAU detector at a fourth-generation synchrotron source
The first demonstration of 2 kHz time-resolved serial crystallography data acquisition at a fourth-generation synchrotron, using the JUNGFRAU 4M pixel detector.
Introduction
The development of X-ray free-electron lasers (XFELs) has initiated a renaissance of time-resolved macromolecular crystallography (MX) experiments at physiological conditions (Chapman et al., 2011;Boutet et al., 2012;Orville, 2020;Pearson & Mehrabi, 2020;Barends et al., 2022).While their ultra-short intense burst of X-rays pushed the time resolution to the femtosecond domain (Milne et al., 2017), it also created new challenges in terms of sample delivery and consumption, since thousands of crystals need to be delivered to the XFEL pulse, sparking many novel ideas for serial sample delivery to the X-ray beam.These sample-delivery systems include the high-viscosity extruder (HVE) (Weierstall et al., 2014), fixed target chips (Martiel et al., 2019) and tape drives (Beyerlein et al., 2017).Another major bottleneck of the technique is the sparse availability of beam time at XFEL facilities.To mitigate these challenges, serial synchrotron crystallography (SSX) was actively developed at many synchrotron facilities to make use of the novel sample-delivery systems and experiment techniques, while offering significantly fewer hurdles for users to access the technique (Weinert et al., 2017;Botha et al., 2015).
However, time-resolved experiments at synchrotron sources are essentially limited by the photon flux, radiation damage and the detector readout rate.The Paul Scherrer Institut (PSI) has been at the forefront of state-of-the-art hybrid pixel-array detector development for nearly two decades now.The PILATUS and EIGER photon-counting detectors have revolutionized diffraction data collection at synchrotrons worldwide, and the JUNGFRAU integrating pixel detector (Mozzanica et al., 2018) has applied the same key technologies in the field of XFELs.MAX IV, on the other hand, is the pioneer in the next generation of synchrotron light sources, providing the most brilliant beams in a micrometre focus.Bringing the two competencies together is a great opportunity to bridge the gap between XFELs and third-generation synchrotrons, allowing a new chapter of structural biology, including ultra-high-throughput screening, serial crystallography structure determination of true microcrystals, and especially time-resolved MX, even in the microsecond regime.
Fourth-generation synchrotrons
The MAX IV 3 GeV storage ring (Tavares et al., 2014) was the first of the new fourth-generation synchrotron light sources based on multi-bend achromat designs (Borland et al., 2014) when it went into operation in 2016.These diffractionlimited sources are characterized by a low emittance, high brilliance and a high degree of coherence.
The MAX IV Laboratory currently operates two MX beamlines: BioMAX (Ursby et al., 2020) and MicroMAX.Sirius (Liu et al., 2014) and ESRF EBS (Raimondi, 2016) are other fourth-generation sources in operation with many others, in different stages of design and construction.The Swiss Light Source (SLS) at the PSI is scheduled to start the upgrade to SLS 2.0 in 2023 (Streun et al., 2018).
The high degree of coherence is revolutionizing X-ray imaging (Thibault et al., 2014) but the high brilliance is also opening up new possibilities in MX.By careful beamline design, the high brilliance of the source results in high brilliance at the sample allowing a high flux in a small beam focus of a highly parallel beam.At BioMAX, a photon flux of 10 13 photons s À1 can be focused into a 20 Â 5 mm full width at half-maximum (FWHM) spot with a divergence of 0.1 mrad.At MicroMAX, it will soon even be possible to focus >10 12 photons s À1 (or >10 14 photons s À1 with its multilayer monochromator) into a 1 Â 1 mm spot with a divergence below 1 mrad, allowing higher time resolution and smaller samples to be measured.
JUNGFRAU detector
The extreme brilliance at XFELs triggered the renaissance of integrating detectors (Mozzanica et al., 2012;Hart et al., 2012;Hatsui & Graafsma, 2015).One of the major breakthroughs, allowing for practical use of the integrating technology, was the in-pixel adaptive gain.This technology enables the detector to operate with multiple dynamic ranges, which are dynamically switched from the highest gain to the lowest by individual pixels during exposure, allowing single-photon sensitivity for pixels with low incoming flux and high-dynamic range for pixels with high illumination.This development was essential for the megahertz pulse trains at the European XFEL and has been successfully introduced with the AGIPD detector (Henrich et al., 2011;Allahgholi et al., 2019).However, the low pixel depth limited its usability for slower applications, thus a new generation of integrating detectors, JUNGFRAU, was developed at the PSI (Mozzanica et al., 2014).
JUNGFRAU was proven to be not only an excellent XFEL detector (Nass et al., 2020) but also a promising system for synchrotron-based MX (Leonarski et al., 2018), showing superiority over photon-counting systems at kilohertz frame rates.Yet, getting a detector excellent in acquiring kilohertz data is only the first step, as a new challenge is created -the data volume.A 4 MP sized detector, standard size for MX, with a 16-bit pixel counter depth, operating at a 2 kHz frame rate, produces a steady stream of 17 GB of data per second, which has to be handled by the downstream IT infrastructure, including data storage and analysis.
Jungfraujoch data-acquisition system
The JUNGFRAU data stream is a challenge for traditional CPU-only IT architecture (Leonarski et al., 2020), since it would require a massive parallel readout system with multiple servers handling the incoming data.This not only leads to significant infrastructure and support cost but also calls for a sophisticated control and synchronization layer.To solve the issue, the PSI has therefore developed a control and readout system called Jungfraujoch, integrating a CPU, generalpurpose graphical processing units (GPGPUs) and fieldprogrammable gate array (FPGA) technologies, with a single server handling the full JUNGFRAU 4M 2 kHz data stream of 17 GB s À1 (Leonarski et al., 2023).While developed at the PSI, we transferred and used the Jungfraujoch at MAX IV, which was relatively easy due to the compact design.
Sample preparation
Lysozyme (Sigma-Aldrich) was dissolved in 100 mM sodium acetate, pH 3.0, to a final concentration of 25 mg ml À1 .To obtain microcrystals, the lysozyme solution was mixed 1:1 with precipitant solution (22% NaCl, 6.4% PEG 6000 in 80 mM sodium acetate, pH 3.0) and incubated overnight.The resulting crystals had an average size distribution of 20 Â 15 Â 15 mm and were harvested by centrifugation.Cellulose matrix was prepared by dissolving 22%(w/v) 2hydroxyethyl-cellulose in H 2 O, and left to swell overnight.For data collection, the crystals were embedded 1:4 in the cellulose matrix.
Beamline setup and data acquisition
The beamline setup at the BioMAX beamline was modified by replacing the standard EIGER 16M detector with the JUNGFRAU 4M detector, using the existing detector stage.The detector was integrated through the Jungfraujoch server (IBM IC922) to the MAX IV infrastructure (Fig. 1).
The HVE injector (Max Planck Institute for Medical Research, Heidelberg, Germany) used in this experiment was mounted vertically to the BioMAX micro-diffractometer (MD3, ARINAX, France).Protein crystals loaded into the HVE reservoir were extruded through tipped silica capillaries with an inner diameter of 75 mm by a high-performance liquid chromatography pump (Shimadzu, Japan, LC-20AD), and the sample jet was stabilized with helium as sheath gas (Shilova et al., 2020).The X-ray beam was focused to a 20 Â 5 mm FWHM spot at the sample position, and the beam energy was set to 11 and 15 keV at fluxes of 1.2 Â 10 13 and 6 Â 10 12 photons s À1 , respectively, for optimal photon yield.At these values, a crystal receives a dose of less than 63 and 21 kGy, respectively, per millisecond exposure, well below the assumed roomtemperature radiation limit of 300 kGy (Holton, 2009).For a comparison of different data-collection speeds, lysozyme microcrystals were extruded at speeds of 2.5 and 0.22 mm s À1 for data collection at 1 kHz and 100 Hz (0.1 kHz), respectively.The X-ray beam was attenuated for the 100 Hz acquisition by a factor of ten in order to have a similar dose per frame as at 1 kHz.For the time-resolved measurement of KR2 photo-dynamics, the sample was extruded at 1.5 mm s À1 .As a pump trigger, a 530 nm laser diode (Roithner Lasertechnik GmbH, Austria) was mounted close to the sample area and focused onto the extruded sample to a 100 Â 80 mm FWHM spot, offset with respect to the X-ray beam by $30 .Since the laser spot was larger than the X-ray spot, its position was slightly offset vertically so that the X-rays were co-aligned with the lower part of the laser.The fluence of the laser at the sample position was measured to be 12.7 W cm À2 .The detector and laser were synchronized using two digital delay generators (DDGs) (DG645, Stanford Research Systems, USA).One DDG was used for generating pulses with a repetition rate of 7.5 Hz, defining the total probe length of 133.3 ms.This master clock was then connected to the second DDG, whose first output triggered the detector using a risingedge transistor-to-transistor logic (TTL) pulse and, after a delay of 10 ms, whose second output triggered the pump laser for 10 ms using a rectangular pulse.This delay was added to compensate for any possible lag during triggering of the detector.The remaining 3.3 ms was a safety margin to ensure the system was ready for the next cycle.Correct timings were confirmed by measuring the TTL signals and the actual pumplaser output using a fast photo diode on an oscilloscope.
Sample centering and data acquisition were carried out with the beamline-control software MXCuBE3 (Mueller et al., 2017).Using the representational state transfer interface compatible with DECTRIS EIGER systems, the JUNGFRAU detector was integrated smoothly into the beamline-control system by adapting the existing EIGER control infrastructure.For the time-resolved experiments, pulse duration for the pump laser as well as the exposure time per image and number of images per trigger for the JUNGFRAU detector were easily configured via the MXCuBE3 user interface.
JUNGFRAU and Jungfraujoch
The diffraction data were collected with the PSI-developed adaptive-gain charge-integrating JUNGFRAU detector (Leonarski et al., 2018;Mozzanica et al., 2018).This detector is composed of eight modules, comprising roughly 4 million pixels in total, with a single pixel size of 75 Â 75 mm.The detector was operated at two different frame rates: (a) at 2 kHz, with 500 ms frame time and 480 ms integration time; and (b) at 1 kHz, with 1 ms frame time and 980 ms integration time.At these settings, the detector was streaming raw data at rates of 17 and 8 GB s À1 , respectively (Leonarski et al., 2020).Acquisition at 100 Hz was achieved by summing every ten frames with the detector operating at 1 kHz, similar to the intrinsic frame summation inside JUNGFRAU in standard operating mode.
Detector-gain calibration and pedestal-factor collection were performed using a procedure outlined previously (Redford et al., 2018;Leonarski et al., 2020), including pedestal-tracking correction to account for drift of the dark current.Dark images were collected before each measurement: a pedestal for high gain (G0) was calculated using 3000 dark frames collected at the same integration time and frame time as the actual measurement, while pedestals for medium gain (G1) and low gain (G2) were calculated based on 200 frames collected at the same integration time as the measurement, but with a reduced fixed frame rate of 100 Hz.To reduce the dark current, the detector was cooled to À10 C.
Detector control and data readout were performed with the Jungfraujoch server (Leonarski et al., 2023).Here, network packets arriving from the detector are received by an FPGA board, which plays the role of a smart network-interface card.The FPGA board implements network-protocol decoding as well as conversion of JUNGFRAU raw frames to photon counts with pedestal and gain corrections.Converted images are written to CPU memory, and the CPU handles the assembly of full images, optional frame summation to reduce frame rate, and compression with the Bitshuffle/LZ4 algorithm (Masui et al., 2015).Additionally, assembled images are also sent, at a highly reduced rate, to the beamline consoles via a messaging queue, allowing the display of a live preview.Spot finding was implemented on the GPGPU, but the functionality was not mature enough during the beam time and was not used for live data processing.
Data processing and analysis
The CrystFEL 0.10 application suite (White et al., 2012) was used for offline data analysis.Spot finding was performed using the Peakfinder8 algorithm (Barty et al., 2014) with spots of one or more pixels allowed, while signal-to-noise (SNR) and photon-counting thresholds were optimized separately for each crystal (Table 1).Indexing of the data was performed with the XGANDALF algorithm (Gevorkov et al., 2019).Diffraction-geometry parameters, including beam center and detector distance, were iteratively optimized with detectordistance and geoptimiser tools included in the CrystFEL package.Scaling and post-processing were executed in partialator with the xsphere algorithm.Time-resolved data were saved with an additional hierarchical data format, version 5 (HDF5) virtual dataset, that pointed to images belonging to a particular time point.KR2 data were additionally treated with STARANISO (Tickle et al., 2016) to account for anisotropic diffraction.Difference maps were calculated using Phenix (Liebschner et al., 2019) and figures were generated with PyMOL (Schro ¨dinger, LLC, 2015).
Detector integration
The 17 GB s À1 stream of data from the JUNGFRAU detector was served to a single edge server for data acquisition and preliminary analysis.The server was an IBM IC922 system, consisting of two POWER9 CPUs, two Alpha Data 9H3 boards with Xilinx Virtex Ultrascale+ high bandwidth memory (HBM) FPGAs running the Jungfraujoch firmware (Leonarski et al., 2023), a single Nvidia T4 GPGPU and two Mellanox Connect-X 5 Ex InfiniBand host channel adapters.The connection between the beamline switch and the two server FPGAs was patched with a long-range, 100 Gbit s À1 , optical connection.The detector modules, the detector switch and the receiving FPGAs formed a dedicated data network between the beamline network and the MAX IV server hall using 2 out of 16 existing fiber optic cables.Hosting the edge server in the central MAX IV server room allows a shortdistance InfiniBand enhanced data rate (EDR) (100 Gbit s À1 ) network connection to the standard MAX IV x86_64 computing infrastructure (Fig. 2).This allows data processed and compressed with the Jungfraujoch system to be streamed to a single x86_64 server hosting the Jungfraujoch file-writer application, writing data following the NXmx gold standard (Bernstein et al., 2020) to the central MAX IV GPFS storage.The bandwidth of the compressed data to the MAX IV storage was only limited by the server-to-storage network connection (FDR InfiniBand, 56 Gbit s À1 ) and the capabilities of the HDF5 file writer.With a performance test, we established that a simple single-threaded HDF5 writer can reach a throughput of higher than 4 GB s À1 with the MAX IV infrastructure.This allowed for continuous data acquisition at 2 kHz, with a compression rate slightly above 4.The MAX IV edge cloud infrastructure is being upgraded to 100 Gbit s À1 Ethernet at the time of writing of this article.Together with ongoing development of the Jungfraujoch system, this will allow the file writing of a compressed JUNGFRAU detector data stream on standard MAX IV data-acquisition nodes.
Faster SSX data acquisition
To investigate if an increased frame rate could have a detrimental effect on data quality, we compared lysozyme data acquired at 1 kHz (PDB ID 8p1c) with data acquired at a more 'standard' 100 Hz rate (PDB ID 8p1d).For the purpose of the comparison, we adjusted the X-ray beam transmission and jet speed, so that both X-ray dose and illuminated sample area were comparable.As summarized in Table 1, resolution, indexing rate, CC 1/2 and SNR were comparable for both datacollection modes.This result is in line with the previous comparison on rotational crystallography data quality with the JUNGFRAU detector, which demonstrated that increasing data-collection speed with higher photon flux is not detrimental to protein-crystal data quality (Leonarski et al., 2018).Though the kilohertz data collection shows a minor advantage in the statistics, we believe that the difference is not significant, as some experimental parameters, for example jet speed, cannot scale exactly by a factor of ten.
Kilohertz continuous data acquisition
Next, to assess the full capabilities of the detector and dataacquisition system, we collected diffraction data of lysozyme crystals at 2 kHz frame rate.In the first experiment, we collected raw data without any compression, expecting that the resulting data rate of 17 GB s À1 is beyond the capability of the network and storage infrastructure.In this mode, we were able to collect roughly 22 000 frames before running out of intermediate storage space, which marks the burst capability of the system.Subsequently, we enabled the conversion and compression mode.To ensure this mode allows for continuous measurement, we aimed to collect 500 000 images, which is an order of magnitude higher than the burst capability.The resulting 500 000 image lysozyme dataset was collected in 4 min and 10 s, without any lost frames (PDB ID 8p1b).The compression factor of the data was roughly sevenfold, resulting in a proven decrease in detector data rate from 17 GB s À1 in raw mode to a 2.4 GB s À1 compressed data rate.
To evaluate the quality of data acquired in a short time, a subset of the data of 10 000 images, collected in 5 s, was randomly selected (frames 40 001-50 000).This subset was processed using the same parameters as the full dataset and a 2.05 A ˚resolution structure could be obtained (PDB ID 8p1a).A summary of the datasets is found in Table 1 and the raw data are accessible at https://doi.org/10.48391/b0c36bb8-a00c-4519-8dcc-08d5ca60a313.
Kilohertz time-resolved data acquisition
In marine bacteria, light-driven sodium pumps maintain a low intracellular sodium-ion concentration and membrane potential (Inoue et al., 2013).These proteins are members of Components and connection of the Jungfraujoch data-acquisition system used at MAX IV.The following networks were used for the experiment: (gray/ black) a network specifically installed for the experiment, (red) InfiniBand fabric for file-system access, (purple) a fast (40 Gbit s À1 ) Ethernet network for streaming and (blue) a slow (1 Gbit s À1 ) control network.
Using the pump-probe setup described, we collected $29 000 diffraction patterns for each of the one-millisecondresolution time points, covering the range 1-130 ms.This time range was chosen deliberately, as we needed to ensure that exposed crystals were completely clearing the interaction area between pump-probe events.With the sample jet speed at 1.5 mm s À1 , the average crystal travels 200 mm in 130 ms, ensuring clearance of the 100 mm diameter laser spot.The pump laser was turned on 10 ms after the detector trigger to ensure complete initialization.The laser was on for 10 ms, and the crystal diffraction was probed for an additional 110 ms after the pump event.At a 10% indexing rate, the complete dataset, comprising 120 time points with 29 000 indexed frames each, was collected in roughly 10 h, allowing for a very efficient data collection [Fig.3(a)].
Comparison of the dark structure obtained here with those previously measured (Skopintsev et al., 2020) showed no significant difference at the achieved resolution of $2.3 A ˚.The example in Fig. 3(b) uses the time bin #21, which corresponds to 10-11 ms after laser illumination was initiated and 0-1 ms after illumination ended.The F o probe-F o dark difference map [Fig.3(b)] clearly shows the retinal isomerization, the valine flip and the movement of the -helix, which are characteristic features occurring in the millisecond time range of the KR2 photocycle, as observed previously (Skopintsev et al., 2020).
The comparison shows that the 1 kHz synchrotron difference map is comparable to the 1 ms SwissFEL result (Skopintsev et al., 2020) [Fig. 3(c)].However, because of limitations in dose and due to radiation damage when collecting synchrotron data at room temperature, the XFEL dataset benefits from a more powerful X-ray beam and therefore contains information at higher resolution.The synchrotron map shows less defined densities than the XFEL map, which allows one to resolve small amino acid movements more precisely, as can be seen around tyrosine 218 [Figs. 3(b) and 3(c)].However, most of the features observed at 1 ms are present in both maps.This shows that kilohertz data acquisition at a fourth-generation synchrotron, combined with a fast detector like the JUNGFRAU, is perfectly suitable for timeresolved studies on dynamics in the millisecond time range and possibly even below.
Conclusions and outlook
With the advent of a new era of protein crystallography, focusing on more dynamic and biologically relevant experiments, accessibility to beam time and methodology has become a bottleneck.The measurement times at XFELs are very limited, so enabling the use of the more readily available synchrotron beam time is crucial.For this to work, the most brilliant fourth-generation sources are needed and measurement time needs to be utilized as efficiently as possible.Collecting time-resolved data as fast as possible at the available X-ray sources is a major step towards infusing the field by effectively providing more experiment time.Here, we demonstrated the ability of the Jungfraujoch setup to collect a whole serial crystallography dataset with 5000 indexed frames in less than 5 s, without any significant loss in data quality.Furthermore, we showed that it is possible to push the time resolution for synchrotron-based time-resolved experiments to the low millisecond regime, all the while collecting multiple time points simultaneously, making these beam times highly efficient.With improvements on the detector side already pointing towards even faster speeds, and in combination with the advent of more fourth-generation sources and the Jungfraujoch data-acquisition architecture, it is thinkable to get into the microsecond domain in the near future, closing the gap to the ultra-fast XFEL-based measurements.The first beamline to push this frontier will be the upcoming MicroMAX beamline, which is tailored towards these kinds of experiments.Also, after the upcoming upgrade to SLS 2.0, the new PXI-VESPA beamline will focus on serial crystallography experiments, with its combination of new detectors, X-ray chopper and optional pink X-ray beam, targeting the 10 kHz data rate.Acquiring data at multiple kilohertz frame rates places a significant challenge on data infrastructure, but we were able to show that these challenges can be overcome.Since the kilohertz data are not noticeably worse compared with lower acquisition rates, it is suggested to always collect as fast as possible -here, faster is better.
Figure 1
Figure 1The setup at the BioMAX beamline.Blue represents the JUNGFRAU 4M prototype detector, yellow is the existing MD3 diffractometer, orange is the transient laser triggering setup and green is the mounted HVE.
Figure 3 (
Figure 3 (a) A schematic explanation of the data-collection pattern used to record the millisecond datasets.(b) A difference map of F o from bin #21 (probe, 1 ms after illumination) minus F o of bin #8 (dark).(c) A difference map of 1 ms F o (PDB ID 6tk2) minus dark F o (PDB ID 6tk6) recorded at SwissFEL [data from Skopintsev et al. (2020)].All the maps are shown at AE3 and resolution was cut at 2.38 A ˚.
Table 1
Data collection statistics and parameters used for spot finding.Values in parentheses refer to the highest resolution bin. | 5,317.2 | 2023-10-14T00:00:00.000 | [
"Physics",
"Engineering"
] |
X-ray scattering by many-particle systems
This paper reviews the treatment of high-frequency Thomson scattering in the non-relativistic and near-relativistic regimes with the primary purpose of understanding the nature of the frequency redistribution correction to the differential cross-section. This correction is generally represented by a factor involving the ratio ω α /ω β of the scattered (α) to primary (β) frequencies of the radiation. In some formulae given in the literature, the ratio appears squared, in others it does not. In Compton scattering, the frequency change is generally understood to be due to the recoil of the particle as a result of energy and momentum conservation in the photon–electron system. In this case, the Klein–Nishina formula gives the redistribution factor as . In the case of scattering by a many-particle system, however, the frequency and momentum changes are no longer directly interdependent but depend also upon the properties of the medium, which are encoded in the dynamic structure factor. We show that the redistribution factor explicit in the quantum cross-section (that seen by a photon) is ω α /ω β, which is not squared. Formulae for the many-body cross-section given in the literature, in which the factor is squared, can often be attributed to a different (classical) definition of the cross-section, though not all authors are explicit about which definition they are using. What is shown not to be true is that the structure factor simply gives the ratio of the many-electron to one-electron differential cross-sections, as is sometimes supposed. Mixing up the cross-section definitions can lead to errors when describing x-ray scattering. We illustrate the nature of the discrepancy by deriving the energy-integrated angular distributions, with first-order relativistic corrections, for classical and quantum scattering measurements, as well as the radiative opacity for photon diffusion in a Thomson-scattering medium, which is generally considered to be governed by quantum processes.
Introduction
The scattering of electromagnetic radiation in complex many-electron systems is of great interest both for understanding and modelling the transport of high frequency radiation in plasmas [1][2][3][4][5][6] and as a diagnostic tool for probing states of matter [7][8][9][10][11]. In this paper, we address, in some detail, the formal derivation of the general formulae for the scattering of high frequency x-ray photons by non-relativistic electrons. We show that generalizing from the single-particle formula to the many-body formula, is not straightforward and can readily lead to an incorrect result. In the following, we derive the general formula for the many-body Thomson cross-section from first principles and then deduce the one particle cross-section deduced from it. In this way it is possible to gain an understanding of the cause(s) of the apparent discrepancies in the literature.
The regime considered is one in which the frequency of the scattered radiation is well above the (electron) plasma frequency, e , and the electron motions are non-relativistic both before and after scattering. For scattering from systems in near-thermodynamic-equilibrium, the prevailing assumptions are therefore that e ω mc 2 , where ω is the frequency of the radiation, m is the mass of an electron, c is the velocity of light and T e is the electron temperature, and where units used are such that k B = 1,h = 1. The theory presented, in which first-order relativistic terms are retained, would be expected to be applicable, for example, to scattering of x-rays in the energy range 0.1-50 keV in matter at solid densities and below and temperatures 10 keV.
While the electrons are assumed to remain non-relativistic, the Compton recoil is generally not negligible being readily measurable and of importance in respect of the residual effects on the scatterer for which the recoil energy can be very significant, as well as the coherence and interference between temporally and spatially separated scattering events. It is important to bear in mind that the energy scales for the probe radiation (keV) and the scatterer system may be quite different, making it necessary to keep track of (at least) first-order recoil corrections. The underlying process is therefore Compton scattering by electrons, but in the non-relativistic regime. However, the interactions between the component particles of the scatterer system can mean that the recoil is taken up, not by a single electron, but by many particles, such as whole atoms (Rayleigh scattering) or even whole crystals (Bragg scattering, Mossbauer effect). In these regimes, the distinction between Compton and Thomson scattering is somewhat blurred. Nowadays, the term Thomson scattering is commonly used to describe scattering of electromagnetic radiation in the non-relativistic limit when many-body correlations between the particles in the scatterer play a significant role, while Compton scattering is generally reserved for a fully relativistic description of incoherent scattering by individual uncorrelated electrons. Another feature of Thomson scattering is that the polarization of the scattered radiation is completely determined by the polarization of the primary radiation and the scattering geometry while Compton scattering features an unpolarized component in the relativistic regime. According to these definitions, the treatment that follows is of Thomson scattering.
Low-energy Compton scattering
The Klein-Nishina differential cross-section dσ β for the Compton scattering of a photon into the solid angle element d α by an electron that is initially at rest is [12,13] in which α and β denote the final and initial photon states respectively; e α,β is the photon polarization unit vector, which, for transverse waves, is orthogonal to the photon wavevector, k α,β ; ω α,β is the photon frequency and r e = e 2 /4π 0 mc 2 is the classical electron radius; and where ω α and ω β are related by the Compton condition where m is the electron mass and µ is the cosine of the scattering angle. Combining (2) and (3) yields the Klein-Nishina equation in the form At low photon energies, that is when ω α , ω β m c 2 , this can be approximated by in which is the quantum Thomson one-electron cross-section as we define it here. Note that only terms of second and higher order in ω/mc 2 have been neglected. By quantum, we mean that it is the cross-section seen by a quantum of the radiation field, i.e. a photon. Equation (5), in which ω α is given by the Compton condition (3), provides a reasonably accurate description of Compton-scattering into all angles at sub ∼50 keV photon energies, and at considerably higher energies in the case of small angle scattering. The approximation (5) picks out only the polarization-dependent component of the crosssection, which then vanishes if e α · e β = 0. Since k α · e α = k β · e β = 0, it follows that e α · e β = 0 only if e β ×k α = 0 and (e α × e β ) ·k α = 0, which are the classical non-relativistic selection rules, and which are applicable to the polarized scattering component in the relativistic regime. These relations are sufficient to determine the outgoing polarization If the scattering angle is θ , and φ is the angle between the initial plane of polarization and the scattering plane, then (k α ×k β ) · (e β ×k β ) ≡ e β ·k α = cos φ sin θ and hence which is the Thomson scattering angular distribution, and in terms of which d α = sin θ dθdφ. One might naïvely generalize this formula to a system containing N e electrons according to ∂ 2 σ β /∂ α ∂ω α N e r 2 e (ω α /ω β ) 2 (e α · e β ) 2 S(k α − k β , ω α − ω β ), where S(q, ω) is the Van Hove dynamic structure factor, which encodes the correlations between the electrons, and k denotes the photon wavevector. However, despite formulae apparently supporting this generalization appearing in the literature e.g. [9], this would be wrong. In the following, we resolve the issue of differing powers of ω α /ω β appearing in various published cross-section formulae, which although largely attributable to differing definitions of the cross-section, can sometimes be due to, or result in, misunderstanding.
At this stage it is worth remarking that equation (2) applies specifically to Dirac particles. In the case of Compton scattering by a Klein-Gordon particle (meson) initially at rest, the corresponding (fully relativistic) cross-section [13] is given exactly by the quantum Thomson cross-section (5) in conjunction with (3) without the need for any further approximation. The neglected polarization-independent term in the Klein-Nishina cross-section is therefore peculiar to Dirac (fermion) particles.
Thomson scattering by a system of many electrons
We proceed by analysing the problem of Thomson scattering by a non-relativistic many-electron system from first principles. The interaction of electromagnetic radiation with a non-relativistic particle of mass m and charge e is described, in the first instance, by the classical Hamiltonian where A is the vector potential. The terms e 2 A 2 /2m and − (e/2m) (p · A + A · p) comprise the principal perturbation terms that give rise to scattering of the radiation. We consider only the term proportional to A 2 , which is the principal source of high frequency scattering. The remaining A · p term is the Kramers-Heisenberg polarization contribution [12], which contributes to scattering in second order, and gives rise to Raman scattering, by bound electrons for example. In general, this term makes a relatively small contribution to the scattering of photons whose frequencies are much greater than the plasma frequency and which are not subject to significant dispersion [12,14]. It is neglected for the purpose of the discussion presented here. To describe the interaction of radiation with a many-body system, we move to a second-quantized formulation. The effective interaction term in the Hamiltonian operator for the scattering of high-frequency radiation, by a many-body system confined to a volume V , is then which is given at an arbitrary time t = 0, where ρ (r) is the particle density operator and the electromagnetic vector potential operator at is represented by the modal expansion [15] A where 0 is the vacuum permittivity, a k,e is a boson annihilation operator for a photon in state k, e where k is the wavenumber, e is a unit vector in the direction of polarization (such that e · k = 0); and ω = k c is the corresponding frequency. Note that, although (9) describes a nonrelativistic particle, for Klein-Gordon particles, the interaction (10) represents the lowest order perturbation, even in the relativistic regime. For electrons, when the elementary scattering is described by the Klein-Nishina formula, the additional Dirac contribution is second order in ω α,β /mc 2 . This means that, despite (9) being non-relativistic, we can expect the following to have validity in the near-relativistic regime, as least as far as first-order relativistic corrections. Substituting (11) into (10) yields ω ω a † k ,e a k,e ρ k −k , where is the density operator in reciprocal (k) space. Taking matrix elements between the states (i, β), denoting the initial state of the system, and ( f, α), denoting a possible final state, in which the labels α, β denote the states of the scattered photon and i, f denote the states of the scatterer system, yields where q = k β − k α and e α is given by (6). The matrix element given by (14) does not depend upon time and, for a system in a steady state, gives the matrix element at an arbitrary time t = 0. Even for scattering from systems in equilibrium, the density exhibits time-dependent fluctuations, which influence the scattering. For other times, t = 0, the matrix element, in the interaction picture, is where H int is the perturbation Hamiltonian in the interaction picture, i.e. as given by (A. 5), (15) yields the t-matrix in the Born approximation, equation (A.42), in terms of which, according (A.43), the corresponding differential cross section for scattering into the photon channel d α = dω α d α is in which . . . denotes the average over the initial states of the scatterer and where the density of final states is defined by We choose to give (16) in the 'time-symmetric' form [15], which is generalizable to nonequilibrium systems (see the appendix) rather than in the more usual time-asymmetric form, (A.36), to which it is entirely equivalent for systems in equilibrium.
Substituting (15) into (16) yields where is the Van Hove dynamic structure factor [16][17][18][19], which is a real quantity for real q, ω. The density of final states g α follows from whereby the definition (17) yields Combining (18) and (21) yields finally where σ T = 8πr 2 e /3 is the Thomson cross-section, which is presented as the general quantummechanical formula for Thomson scattering of photons from a many-particle system. It is relativistically accurate as far as terms of order ω/mc 2 . The formula for the scattering by a single particle follows from this and is considered below. An important feature of this formula is that it explicitly allows for an exchange of energy between the radiation and the particles with an associated change in the photon frequency. As well as the argument of the structure factor, this energy exchange also appears in the factor ω α /ω β , which, note, does not appear squared, as it does in the single-particle formula (5).
As an aside, before proceeding further, we should caution against interpreting k β − k α as a momentum exchange, since, as the k s actually represent reciprocal vectors, this is, strictly speaking, a pseudomomentum [20]. This is related to the fact that, in an extended system, the continuous translational symmetry, which gives rise to momentum conservation, is replaced by a discrete topological symmetry, that of a three-torus, resulting from the imposition of cyclic boundary conditions. For electromagnetic radiation, this becomes an issue only when the refractive index differs significantly from unity. However, for x-rays, the refractive index is typically sufficiently close to unity for k β − k α to be considered as a real momentum exchange due to the scattering. In the following, we shall be concerned only with regimes in which the refractive index is effectively unity.
Thomson scattering by individual electrons
Equation (22) gives the double differential cross-section for Thomson scattering of photons by a many-particle system. The details of the individual scatterings, and the correlations on which they depend, are hidden in the structure factor, which is a property of the scattering system. By reversing (18), and substituting for g α from (21), we can rewrite the differential cross section in the form which we apply to the scattering of high energy photons from free electrons in the non-collective regime, i.e. to incoherent scattering. Integrating both sides over ω α , noting that, for a given scattering direction, ω α and ε f are not independent, but are related through the Compton condition, yields the differential cross-section for the angular distribution in which E i = ε i + ω β and E f = ε f + ω α denote the initial and final state energies respectively and where the derivative ∂ω α /∂ E f is calculated for fixed initial conditions and fixed scattering geometry, and subsequently evaluated on the energy shell. The density operator for a single free particle in a finite volume is ρ q (0) = e −iq·r , which leads to in which P i = p i + k β and P f = p f + k α now denote the initial and final total momenta respectively, and where Now Differentiation of (27) with respect to ω α , for fixed p i , k β ,k α , and subsequently eliminating P · k α c using (28), yields and hence where which is the relativistically exact result. Combining this with (25) now yields (in which the stray ε f /ε i factor is due to not using relativistically normalized wavefunctions). If the electron is initially stationary, then, neglecting terms of order (ω/mc 2 ) 2 as previously, this becomes which is in agreement with one-electron Thomson formula (5) which also applies to an electron initially at rest. Recoil and quantum-relativistic corrections are included as far as O(ω/mc 2 ) as represented by the factor (ω α /ω β ) 2 1 + 2 ω α −ω β ω β . As has thus been shown, this is consistent with the more general many-body formula (22) involving the dynamic structure factor where the correction factor appears as ω α /ω β = 1 + ω α −ω β ω β , the missing factor of ω α /ω β having effectively been subsumed into the structure factor.
Photon angular distribution for scattering by a many-electron system
The energy-integrated photon angular distribution for scattering from a many-electron system that follows from (22) is where q = k β − k α . There are two things to note about the integral in the rightmost expression in (34): firstly, the integral over ω does not extend all the way to ∞; and secondly, the integration must be carried out for fixed scattering geometry, as expressed by α = (θ, φ) in which case q is not independent of ω = ω β − ω α . since, by definition, qc = ω βkβ − ω αkα which yields, neglecting terms of order ω 2 /c 2 , where and µ =k α ·k β is the cosine of the scattering angle. Using that S (q, ω) is generally a function of q 2 , then, by means of Taylor series expansions around q 2 = q 2 0 , one obtains where ) j . The next thing we need to consider is the residual integration over ω β < ω < ∞. The high-frequency part of the structure factor is considered to be dominated by quasi-free particle motions, with resonant and collective behaviour confined to much lower frequencies. We therefore deem it appropriate to use the high-frequency limit of the random phase approximation (RPA), which, for arbitrary electron degeneracy, is where η = µ e /T e is the degeneracy parameter, I j (x) = ∞ 0 y j (1 + exp (y − x)) −1 d y is the standard Fermi integral, and ν q = q 2 /2m. Using (38) together with the asymptotic form [21] ∞ →∞ it follows straightforwardly that, for large in which, ignoring the minor difference between q and q 0 , and 2 1.
The exponential factor in (40) is therefore exp(−mc 2 /8T e ) which means that, at low enough temperatures, certainly those below ∼ 10 keV, the residual contribution to the integral over ω is negligible, allowing the limit to be extended to infinity. Combining the above results then yields the photon angular distribution according to Now, from the elastic and f-sum rules [17][18][19] +∞ −∞ S(q, ω)dω = S(q), where S(q) is the static structure factor, while, for hot (classical) plasmas, those for which T e e , q 2 /2m, the second moment is given by [22] +∞ −∞ The higher moments depend upon more detailed properties of the scatterer. Explicit formulae for the fourth moment, for example, are given in [22,23]. The odd moments vanish in the classical limit. For electrons in a hot plasma, for which T e e , we assume the general semiclassical forms 1 where ω n q denotes the nth frequency moment of the dynamic structure factor and where, for n 0, the functions F n (q) are relatively slowly varying O(1) functions of q whose derivatives will be ignored. For the first few values of n we find F 0 (q) = 1, F 1 (q) = 1 and for small-q and large-q respectively, where D e = √ T e /m e is the electron screening length. Equations (47) yield the leading order dependences on q 2 T e /m and e /T e . Carrying out the integrals in (43) according to these formulae and making reference to (36), yields the angular distribution dσ β d α N e r 2 e (e α · e β ) 2 S (q 0 ) − in which only the lowest order recoil correction terms of order ω β /mc 2 and T e /mc 2 have been retained. In the classical limit, (48) becomes dσ β d α N e r 2 e (e α · e β ) 2 S (q 0 ) + where q 0 is defined by (36).
Scattering of energy-the classical cross-section
The above defines the differential cross-section ∂ 2 σ β /∂ω α ∂ α as being the ratio of the number of scattered photons per unit time in dω α , d α to the photon flux in a collimated monochromatic incident beam (the incident channel). This is the quantum cross-section. It describes scattering in terms of discrete processes involving a quantized electromagnetic field in which the number of energy quanta (photons) is conserved. However, in classical systems, the concept of a photon is not recognized and the differential cross-section ∂ 2 β /∂ω α ∂ α is defined differently to be the ratio of the scattered energy (or power) in (dω α , d α ) to the energy (or power) incident per unit area in the form of a collimated monochromatic beam. Since the energy of a photon is proportional to the frequency, the relationship between the classical and quantum cross-sections is readily found to be given by This yields, from (22), in which the factor of ω α /ω β now does appear squared. Equation (51) agrees with formulae in the literature that are derived classically in accordance with this definition, e.g. [10]. Moreover, in the same way, referring to (5) and applying (50), the classical one-electron Thomson crosssection is The relationship between the double differential cross section of a many electron system and the corresponding one-electron Thomson cross-section is thus expressed by [24] ∂σ which holds generally for both the classical,σ = β , and quantum,σ = σ β , cross-sections. The angular distribution of the scattered energy is given by which is the formula for the classical angular distribution that replaces (43). Evaluating (54) in the same manner as that leading to (48) then yields in which the term proportional to ω 3 q 0 /ω 3 β contributes only in the small-q regime as defined above. Equation (55) is still a fully quantal expression for the angular distribution of the scattered energy. It is similar, but not identical, to the photon angular distribution (48). Which of these two forms may be applicable depends upon the particular experimental setup, whether envisaged or actual. If the detectors measuring the scattered radiation at each angle respond proportionally to the number of photons, then (48) is the appropriate formula, while, if, as is more widely the case, they respond proportionally to the energy, then (55) is the appropriate form to use. Equation (55) also defines the classical angular distribution. However, a fully classical description generally requires ω β T e (which does not hold for x-ray scattering in cold and warm matter) while only the even moments of the structure factor are classically finite quantities. Eliminating O(h) terms from (55) with reference to (47) yields (cf (48)). Equation (56) gives the angular distribution of the scattered energy in a classical system. We observe that, in both (49) and (56), the retained terms involving q 2 0 T e /mω 2 β = (2T e /mc 2 )(1 − µ) are first-order relativistic corrections involving the temperature that vanish in the non-relativistic limit when T e mc 2 .
The Rosseland scattering opacity
An example of a process that is governed, at the microscopic level, by quantum processes is the transport of thermal radiation in dense matter. Thermal radiation transport involves the absorption, emission and scattering of photons by the constitutive atoms and electrons. In the case of transport in systems in local thermodynamic equilibrium (LTE), these processes can be represented by the frequency-dependent or monochromatic opacity κ(ω) which is essentially the effective photon absorption cross-section per unit mass, taking account of all relevant processes and their inverses. In the diffusion limit, radiative energy transport is governed by the Rosseland opacity, which is related to an appropriate average over frequency of the monochromatic mean free path λ(ω) = 1/ρκ(ω) taken over a black body distribution. In this context, the scattering contribution to κ(ω) is quite complicated, as it not only needs to take account of the change of direction of the photon, but also the effect of any frequency change of the emergent photon on its subsequent transport, as well as stimulated scattering due to the presence of a background (e.g. black-body) radiation field. These corrections are likely to be of the same order as the above corrections to the angular distribution and therefore need to be considered at the same time.
The scattering contribution to the monochromatic opacity governing the diffusion of radiative energy, in matter that is in LTE at temperature T , is given by [4,5] 1 As well as the differential cross-section, the integrand contains two additional factors. The factor 1−exp(−ω β /T ) 1−exp(−ω α /T ) represents the effect of stimulated scattering in the presence of a background black-body radiation field. The other factor represents the effect of the change in photon frequency on its ability to transport energy in the exit (α) channel and involves the total photon mean free path λ(ω) due to all processes. In a purely scattering medium, λ = λ s and, in general, λ(ω) depends on λ s , via λ −1 = λ −1 s + λ −1 ns where λ ns is the mean free path due to non-scattering processes, such as absorption and stimulated emission, so equation (57) is not closed and needs to be solved iteratively. Two circumstances in which this is not necessary are when the scattering contribution to λ(ω) is small, i.e. λ s λ ns , or when λ does not depend (strongly) on the frequency. The first case is unlikely to be of much interest simply because scattering is then of low importance. In the case of a system dominated by Thomson scattering, a frequency independent mean free path is a reasonable first approximation, which, using (22) and (8) and extending the integration over ω to infinity, becomes, for non-relativistic electrons, where ω α = ω β − ω. Expanding about ω = 0 yields where whereupon (58) becomes Using the expansion (37) and carrying out the integrations in accordance with (44)-(46) yields, after some algebra, where q 0 is given by (36) and in which relativistic corrections only as far as O ω β /mc 2 or O T /mc 2 have been retained. In the fully non-relativistic low-photon-energy limit we recover the Thomson opacity [6, 25] which incorporates, through the static structure factor S(q 0 ), the effect of electron correlations, including exchange and the direct effect of degeneracy, which can be important at very high densities such as occur in stellar cores [26]. The correction terms in (62) become important at the 10% level at temperatures T 6 keV . In the highly relativistic regime, in addition to including the additional relativistic terms from the Klein-Nishina formula (2), it is necessary to account for extra electron-positron pairs in the equilibrium state. A somewhat different approach [5] is then required.
Conclusions
We have presented derivations of the high-frequency Thomson differential scattering crosssections for the scattering of photons by a many-electron system (22) and by a single electron (33). In the case of a many-electron system, the direction and energy of the outgoing photon are treatable as independent parameters and the result is expressed as a double differential crosssection. In the case of scattering by one electron, however, momentum and energy conservation mean that the change in energy is determined by the scattering angle. The key feature to note is that the index ν of the overall factor (ω α /ω β ) ν differs between the two formulae, with ν = 1 for the many-electron formula and ν = 2 for one electron.
In the calculation of the angular distribution of photons scattered from a many-body system, it is necessary to take account of the lack of independence between ω and q in the argument of the dynamic structure factor when integrating over the final state energy for fixed initial energy and scattering geometry. This yields a formula (48) that contains additional O(ω β /mc 2 ) and O(T e /mc 2 ) terms compared with the result when this interdependence is ignored.
However, when the cross-section is defined classically in terms of the energy in each channel, then the cross-section formula acquires an extra factor of ω α /ω β . It is important to be aware of the difference between these two forms of the cross-section. Which choice of cross-section is employed will depend on how the scattering is envisaged as being measured, in particular whether the detectors respond to photon number (quantum detectors) or to the energy (classical detectors). The formulae for the angular distributions differ in respect of the coefficients of higher order relativistic correction terms of O(ω β /mc 2 ) and O(T e /mc 2 ) as well as, in the small-q/ forward-angle regime, terms involving the fourth moment of the dynamic structure factor, which we estimate to be O( 4 e /ω 3 β T e ). The differences between the classical and quantum formulae can lead to confusion if the reader is not at pains to establish which cross-section a particular author is referring to. Authors do not help by simply referring to the Compton or Thomson cross section as if this were uniquely defined. However the cited texts [8][9][10] generally give correct formulae in their respective contexts, with the exception of reference [9], where the one-electron Thomson crosssection, dσ β /d α T is incorrectly deduced from a correct many-electron formula, thus yielding the cross-section as r 2 e ( ω α ω β )(e α · e β ) 2 instead of as given by (5). A guiding principle is that, whichever way the cross-section is defined, the relationship, between the one-electron crosssection and the double differential cross-section for scattering by a many-body system, is always expressed by equation (53).
We also consider the implications for the radiative opacity as used in modelling the diffusion of thermal radiation. The scattering opacity is generally formulated in terms of the photon cross-sections with additional corrections due to photon transport and stimulated scattering. These corrections lead to additional relativistic corrections that are of the same order as those arising from the ω α /ω β factor and which therefore need to be considered at the same time.
the second of which expresses the matrix S † (t) = S † αβ (t) as the Hermitian adjoint of S(t) = S αβ (t) .
The property α S † γ α (t)S αβ (t) = δ γβ : ∀t (A.9) follows from the unitary property of (t) given at (A.2). S(t) is a time-dependent generalization of the standard S-matrix, S 0 αβ , to which it reduces when t = ∞. Using (A.6), the time derivative of the generalized S-matrix is where T αβ (t) is the time-dependent generalization of the t-matrix. Integrating equations (A.10), using that S αβ (−∞) = δ αβ , yields The time-dependence of the t-matrix can be expressed, in the interaction picture, by where T 0 (t) is the transition operator in the Schrödinger picture, which, from (A.6), is given by is the wave operator in the Schrödinger picture. Using (A.12), the t-matrix becomes Substitution of (A.15) into (A.11) yields and where use has been made of the following representation of the step function: For t → ∞, the first of equations (A.17) yields For finite t, by making use of the Cauchy identity, in conjunction with the third of equations (A.17), leads to Furthermore, by Fourier transforming (A.17) whereS αβ (ω) is defined in terms of S αβ (t) analogously to (A.18).
A.2. Stationary scattering theory
The special case when the perturbation H does not depend on time, and energy is conserved, is known as stationary scattering theory. However we still have the prevailing requirement that H (−∞) = H (∞) = 0. This is formally dealt with by allowing H to depend on time for t < −t 1 and t > t 2 during which times, the time dependent processes are adiabatic so as to maintain the system in a pure state, and then taking the limit t 1 , t 2 → ∞. This is sometimes described as adiabatic switching of the perturbation on and off in the distant past and the distant future respectively. In the special case when H is, for all finite times, independent of time, energy is conserved and so The S-matrix, from (A.11), is then the limit of which for t → ∞ is given, using (A.20) and (A.8), by which is the standard relationship between the S-matrix and the t-matrix for conservative systems.
A.3. Transition rate
The S-matrix element S αβ , as defined by (A.8), expresses by how much the state (A.1) has evolved into the 'final' state |α . The total transition probability at time t is therefore S αβ 2 = S † βα (t) S αβ (t). The instantaneous transition rate is therefore ν αβ = ∂ ∂t S † βα (t)S αβ (t) (A.31) which, using (A.10) and (A.11), becomes, for α = β, ν αβ (t) = − i(T † βα (t)S αβ (t) − S † βα (t)T αβ (t)) = t −τ/2 (T † βα (t)T αβ (t ) + T † βα (t )T αβ (t))dt = +τ/2 −τ/2 θ (t − t )(T † βα (t)T αβ (t ) + T † βα (t )T αβ (t))dt , (A.32) where H (t), and hence T αβ (t) etc, are presumed to vanish for t < −τ/2 and t > +τ/2. (The formula (A.32) is therefore unchanged by taking the limit τ → ∞. However, for the moment, we retain the assumption that τ is finite.) The transition rate defined by (A.32) includes the effect of the time-dependent fluctuations of the system. Real measurements however are taken over finite time intervals, and what one generally wants is a time-averaged transition rate. Any average over a finite time interval will itself remain subject to fluctuations. However, in the case of a system that is undergoing steadystate fluctuations, by which it is meant that, in the limit of τ → ∞, the functions T βα (t) are non-square-integrable functions of time, for which the expectations For systems in equilibrium or in LTE, the time average can generally replaced by the ensemble average, in accordance with the ergodic hypothesis, and where the ensemble average is effectively taken over a subset of initial states β around β, comprising the entrance channel, also labelled by β. i.e. f = trace(ρ β f ), ρ β = P β ρ/trace(P β ρ), in which ρ denotes the statistical operator. This formulation is appropriate when the projectile, in this case the photon, is in a definite state, while the scatterer, in this case a many-body system of electrons, is in a statistical state. | 8,299.2 | 2013-01-01T00:00:00.000 | [
"Physics"
] |
Synthesis of prolate gold nanoparticles for use in plasmon-enhanced overtone near-infrared spectroscopy
Gold nanoparticles were obtained by the method of three-stage growth from a seed solution. Scanning electron microscopy images as well as comparison of extinction spectra to the results of numerical simulations prove the formation of prolate nanoparticles. Such particles with localized plasmon resonance in the NIR region are much-needed for the plasmon-enhanced overtone spectroscopy.
Introduction
Near-infrared (NIR) spectroscopy is a type of vibrational spectroscopy that studies the interaction of light with matter in the wavelength range from 750 nm to 2500 nm. As a practical application of NIR spectroscopy, it is possible to conduct quantitative and qualitative analyses of organic molecules in a non-invasive and non-destructive way. This method can successfully detect substances containing functional groups such as CH, NH, SH or OH. Moreover, the spectra in the NIR region can be used to distinguish compounds from each other, since the overtones of the valence vibrations of the hydrogen atom that is part of these groups depend on which atom the hydrogen atom is bound to. NIR spectroscopy has received a new impetus due to the emergence of intensive sources and reliable radiation receivers, which have been intensively developed in recent years for applications in telecommunications based on optical fiber. A significant limitation in the use of NIR spectroscopy is the relatively low intensity of the absorption bands of dipole vibrational transitions in this spectral region compared to those in the mid-infrared region. One of the promising ways to increase the probabilities of vibrational transitions is associated with their enhancement in the near field of metal nanoparticles with plasmon resonances. To do this, it is necessary to fulfill the resonant condition of the spectral overlap between the absorption band of the plasmon resonance and the absorption band of the overtone vibrational transitions [1,2]. Since the plasmon resonance in gold nanospheres is located in the visible range of the spectrum, to create chemical sensors that work on the basis of registering of optical transitions characteristic of organic substances in the near infrared region gold nanoparticles of a different shape, such as gold nanorods, are needed.
Currently, insufficient attention is paid to the synthesis of nanoparticles with plasmon resonances in the near-infrared region of the spectrum. Most of the papers is devoted to the synthesis and study of nanoparticles with plasmon resonances at wavelengths smaller than 1000 nm. Therefore, the creation of 2 such nanoparticles, whose plasmon resonance is located at wavelengths of about 1500 nm, which are characteristic of the first overtone of the valence vibrations of the hydrogen atom, is an urgent task. The position of the plasmon resonance of a nanorod depends on the ratio of its length to its diameter, which is called the aspect ratio. Depending on the magnitude of the aspect ratio, the position of the plasmon resonance in the gold nanorods can vary from the visible wavelength range to the infrared. Therefore, the aim of this work is to select a method for the synthesis of gold nanorods, in which the peak of their absorption will fall in the near infrared range.
Experimental section
The method we used for the synthesis of gold nanorods was that small spherical nanoparticles were first grown, which served as a seed in the subsequent stages of synthesis, in which the length of the nanorods increased without changing their diameter [3,4]. Growth control was achieved by using HAuCl4•3H2O (hydrogen tetrachloroaurate (III)) and CTAB (cetyltrimethylammonium bromide).
To synthesize the gold nanorods a gold seed solution from hydrogen tetrachloroaurate, trisodium citrate, and sodium borohydride were prepared. To achieve the nanoparticles with an aspect ratio greater than 10, a three-stage method of nanoparticle growth was employed. In this case, solutions of HAuCl4, CTAB, and ascorbic acid were added to three test-tubes. Further, these test-tubes were designated as "A", "B" and "C", respectively. Next, a seed solution was added to each test-tube. From the resulting solutions, 1 ml of solution A was added to solution B, and then 1 ml of solution B was added to solution C.
The purification of nanoparticles from the synthesis by-products was carried out in several stages. To begin with, 1 ml of the solution was taken from the test-tube C and centrifuged to deposit the nanoparticles on the bottom of the test-tube. The supernatant was selected and added to the sediment 500 ml of polyethylene glycol (PEG). The procedure was repeated 2-3 times. Figure 1 shows scanning electron microscopy (SEM) images of nanoparticles with the desired aspect ratio of 13, produced by the three-stage seeding method. Figure 1(a) clearly shows that nanoparticles of different geometries were prepared from simple spheres to nanowires of several hundred nm in length. The nanorods of interest are also present. The size and shape distribution of the prepared gold nanoparticles were determined from SEMimages using ImageJ software. Particularly, 100 nanowires, 100 nanorods, and 100 spheres were chosen to plot the histogram. The analysis of the histograms shows that the average length of the nanowires is 589 nm, the have the average length of the nanorods is 128 nm, and the average diameter of the nanospheres is 38 nm. Using these geometrical parameters, the absorption (ACS), scattering (SCS), and extinction (ECS) cross-sections of nanowires, nanorods, and nanospheres in water were numerically After the first synthesis, the nanoparticles were selected and purified from solutions B and C, and then the absorption spectra were measured using a Shimadzu UV Probe-3600 spectrophotometer. The measuring of the absorption spectra of the gold nanoparticles were done in deuterium water (D2O) ( Figure 5). Figure 5. Absorption spectra of gold nanoparticles from solutions B and C. The spectra have the pronounced plasmon resonance absorption band around 530 nm, which indicates the presence of spherical nanoparticles in solutions. It is worthy to notice, the appeared absorption band about 1700 nm, indicating the presence of the prolonged nanoparticles in the shape of nanorods, are located within the vibrational frequencies of CH and NH functional groups of many organic molecules.
Conclusion
The synthesized gold nanoparticles were examined using scanning electron microscopy. It turned out that the morphology of the nanoparticles is different, as there are both the desired nanorods and nanowires, as well as nanoparticles of other shapes, in particular, triangles and spheres. The average size of the obtained nanoparticles was determined. The obtained absorption spectra indicate that prolate gold nanoparticles with the plasmon absorption band in the near-infrared region have been synthesised. Thus, the results obtained make it possible to tune the plasmon resonance band, by tailoring the size and shape of the nanoparticles, for the development of label-free sensor based on surface-enhanced near-infrared absorption effect.
Acknowledgements
This work has been funded by Scholarship of the President of the Russian Federation SP-1654.2021.1 | 1,570.8 | 2021-11-01T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Volatile transport on inhomogeneous surfaces: II. Numerical calculations (VT3D)
Several distant icy worlds have atmospheres that are in vapor-pressure equilibrium with their surface volatiles, including Pluto, Triton, and, probably, several large KBOs near perihelion. Studies of the volatile and thermal evolution of these have been limited by computational speed, especially for models that treat surfaces that vary with both latitude and longitude. In order to expedite such work, I present a new numerical model for the seasonal behavior of Pluto and Triton which (i) uses initial conditions that improve convergence, (ii) uses an expedient method for handling the transition between global and non-global atmospheres, (iii) includes local conservation of energy and global conservation of mass to partition energy between heating, conduction, and sublimation or condensation, (iv) uses time-stepping algorithms that ensure stability while allowing larger timesteps, and (v) can include longitudinal variability. This model, called VT3D, has been used in Young (2012), Young (2013), Olkin et al. (2015), Young and McKinnon (2013), and French et al. (2015).
Introduction
Pluto and Triton have atmospheres whose pressures have been measured by stellar occultations (e.g., Young et al. 2008a, Olkin et al. 1997) and spacecraft (Gurrola 1995, Krasnopolsky et al. 1993, Stern et al. 2015, Gladstone et al. 2016). These measurements reveal atmospheres for Pluto and Triton that are global in extent, almost certainly controlled by vapor-pressure equilibrium of the surface N 2 ice, (Spencer et al. 1997, Yelle et al. 1995, similar to the role of CO 2 on Mars (Leighton and Murray 1996).
Vapor pressure is an exceedingly sensitive function of temperature, and early models predicted that the surface pressures of Pluto and Triton would vary by orders of magnitude over their years (e.g., Paige 1992, 1996;Moore and Spencer 1990;Spencer and Moore 1992). Those early models were based on a single observation of the atmospheric pressure, either the Triton flyby in 1989 or the definitive discovery Pluto occultation in 1988 (e.g., Elliot and Young 1992). Since that time, further occultations have shown a large increase in the atmospheric pressures of both Pluto and Triton since the late 1980's (Elliot et al. 1998;Elliot et al. 2003). Other advances in the past decade include an improved understanding of the surface compositions of Pluto and Triton (Grundy andBuie 2001, Grundy et al. 2010). It is time for new models (Young 2012a, Young 2013, Olkin et al. 2015. This work describes the model used by Young (2012aYoung ( , 2013 and Olkin et al. (2015).
Since the rash of models in the 1990's, the large, volatile-covered ice worlds Pluto and Triton have been joined by other large, volatile-covered bodies in the outer solar system, including the large Kuiper Belt Objects (KBOs) Eris, Sedna, Makemake, Haumea, Quaoar (Schaller and Brown 2007), and 2007 OR10 (Brown et al. 2011). Some of these should have atmospheres at some time in their orbit (Stern & Trafton 2008). In particular, the 98% albedo of Eris argues for a perihelion atmosphere that collapses near aphelion, freshening Eris's surface (Sicardy et al. 2011). Hansen and Paige 1996). Locally, we balance absorbed insolation, S, emitted thermal energy εσT 4 , and latent heat of sublimation or condensation, L S dm V /dt, where m V is the mass per area of the volatile slab and L S is the latent heat of sublimation. Additionally we balance (i) heat to and from the substrate, k dT/dz, where k is the thermal conductivity and dT/dz is the vertical gradient of temperature, and (ii) the heat capacity of the isothermal ice layer, m V dH V /dt ≈ m V c V dT V /dt, where H V is the enthalpy and c V is the specific heat of the volatile slab (subscript V for volatile). At the lower boundary, there is a heat flow of F. All variables except T V are free to vary with latitude and longitude. Compared with Young (2012a; Paper I), this figure illustrates (i) heating within the substrate for vertically varying k, and (ii) enthalpy of the ice slab, H V , to allow the treatment of solid-phase transitions.
The latent heat of sublimation term of the energy equation depends on the mass flux (dm V /dt in Fig. 2-1). For extremely thin atmospheres, such as on Io or possibly currently on Eris, some atmospheric flow occurs, but is ineffective in changing local surface temperatures (Fig 2-2A). In this case, the volatile slab temperature is controlled by local conditions only. The volatile slab temperature and the local atmospheric pressure are generally higher in areas of high insolation. For thin atmospheres, we assume no atmosphere over the bare areas. This approach allows efficient calculation of surface and subsurface temperatures. Once I end this section with a few words about the modularity of the techniques described here.
• Anyone interested in this model should read Section 2 (this section) because it is short, and provides an overview.
• If your object has no volatiles, you do not need to read past Section 3. Volatile Transport II (VT3D) • If you want to characterize which processes are important in controlling surface temperatures, you can stop at the calculation of the thermal parameters, or Eq. 3.2-11 for volatile-free bodies, plus Eq. 4.2-7 and 4.2-8 for isolated volatile-covered areas, or Eq. 5.2-2a-d for volatile-covered interacting areas.
• If you want to very quickly approximate a temperature field based on the solar forcing, read Section 3.1-3.2, plus Section 4.1-4.2 if you have isolated volatiles, and Section 5.1-5.2 if you have interacting volatile covered areas. The critical equations are • If you are calculating temperatures at one volatile-free location at a time, you can stop at Section 3.3. If you are calculating one isolated volatile-covered location at a time, read through Section 3.3, then skip ahead to Sections 4.1-4.3.
• If you are calculating roughly several hundred timesteps per period (e.g., to gain insights at short timescales or to make smooth plots), then the explicit equations will be stable, and the implicit equations will not save much computation time. In that case, you can skip those equations in Section 3.3, 4.3, and 5.3 that are described as implicit (roughly half of them), and all of sections 3.4b and 5.4b. Table 1 recap the definitions of Areas I and III and their interaction with the atmosphere. For Area I (bare, no mass exchange, no atmosphere), the physics in VT3D is identical to the well-known thermophysical model (TPM) used to interpret thermal emission from airless bodies (e.g., Thomas et al. 2000;Spencer et al. 1989;Harris 1988). Heating in the top-most layer is balanced by thermal emission, insolation, and conduction; heating in interior layers is balanced by conduction only; heating in the lower layer is balanced by conduction and a flux condition at the lower boundary.
VT3D for bare locations (Areas I and III)
Area III (bare, no mass exchange, isobaric atmosphere) represents, for example, the "bedrock" H 2 O on current-day Triton. There is no volatile slab and no sublimation. The difference between the two bare area types are (i) Area III is a potential deposition site, and (ii) an increase in the volatile temperature for Area IV (volatile-covered, isobaric) also increases the pressure over Area III, so the atmosphere above Area III needs to be included in mass balance equation for Area IV. As long as there is no condensation (which will alter the state from bare to volatile-covered), the energy balance for Area III is the same as for Area I. Therefore, both bare areas, I and III, are treated in this section. Volatile Transport II (VT3D) These equations demonstrate several aspects of the numerical power of VT3D. In Section 3.1 and 3.2, I show the analytic expressions for the initial conditions, and show that a simple calculation can approximate the numerical solution. In Section 3.3, I present the explicit and implicit (Crank-Nicholson) numerical solutions for a single bare location, showing that solutions spin up in less than a quarter period. In Section 3.4, I show a compact representation of the linearized, discretized equations. In Section 3.5, I present a worked example of Mimas's diurnal temperatures, with code and output in the supplementary materials.
Areas I and III : Continuous expressions for bare areas
At the lower boundary, there may be positive (or negative) heat flow, F, which is balanced by upward (or downward) thermal conduction from a negative (or positive) thermal gradient: where k is the thermal conductivity, and T is the temperature. As with Paper I, z is a height coordinate, defined to be zero at the top of the substrate and decreasing downward. Thus, z = 0 at the substrate-volatile interface for locations where there is a volatile slab, or at the surface on volatile-free areas.
Within the substrate, I assume there are no heating sources, so net conductive heat flux is balanced by changes in the temperature of the substrate: where ρ is the density, c is the specific heat at constant pressure for the substrate, t is time, and T is the temperature.
The energy balance at the surface balances net heating with absorbed sunlight, thermal emission, and thermal conduction. There is no latent heat of sublimation or condensation. The total equation is where S is the absorbed solar energy, and ε is the emissivity, and σ is the Stefan-Boltzmann constant.
The first term of Eq. (3.1-3) describes the solar energy absorbed by the volatile slab, in power per area. For Triton, Pluto, Eris and other large KBOs, the fraction of sunlight absorbed by the atmosphere is small, and we do not need to alter S to account for atmospheric absorption. The absorbed solar energy at a particular location and time of day depends on the solar flux at 1 AU, S 1AU , the heliocentric distance, r, the hemispheric albedo, A h , and the cosine of the solar incidence angle, µ 0 (where µ 0 is 0 when the sun is below the horizon).
where S SS is the absorbed insolation at the sub-solar point. µ 0 depends on latitude, λ, subsolar latitude, λ 0 , and the hour angle, h (where h is the difference between the location's longitude and the subsolar longitude, defined to increase with time at any given location).
€ µ 0 = max 0,sin λ sin λ 0 + cos λ cos λ 0 cos h ( ) The hemispheric albedo, A h , is a local quantity, also known as the directional-hemispherical reflectance, hemispherical reflectance, or plane albedo (Hapke 1993). It is defined as the ratio of the total scattered power to the incident collimated power, € (S 1AU /r 2 )µ 0 , and depends on the location on the surface and the incidence angle. It is useful to approximate the hemispheric albedo by its average over all incidence angles, or where A S is known as the spherical reflectance, spherical albedo, or the Bond albedo (note, however, that Bond albedo is strictly defined for an entire surface). For typical phase functions in the outer solar system, substituting A S for A h tends to slightly underestimate solar heating for direct illumination and overestimate solar heating for large incidence angles. Since there is typically large uncertainty in the values of A S or A h due to uncertain phase functions, this distinction is usually ignored. The remainder of the paper uses A for A h , and does not distinguish between A h and A S .
The second term of Eq. (3.1-3) represents thermal energy emitted by the substrate. For a physical surface, this term might include such effects as self-heating from crater sides (Spencer 1990;Rozitis and Green 2011). In VT3D the emissivity, ε, is treated as a parameter Volatile Transport II (VT3D) that defines the power per area lost by thermal emission. Since ε can vary with location and time, it can be used to encompass these more subtle physical effects.
The final term of Eq. (3.1-3) represents thermal conduction from the substrate. If the substrate just below the interface is warmer than the surface temperature (dT/dz < 0), then conduction expressed by this term warms the surface.
Areas I and III : Analytic approximation and initialization bare areas
This section expands on key results of Paper I. The purpose is to introduce variables that will be used later, and to show the equations that will be used to initialize numerical calculations. For more discussion of the derivation, see Paper I.
If the solar insolation, S, at latitude λ and longitude φ, is a known function of time, t, with period P, then it can be approximated as a sum of M+1 sinusoidal terms where ω = 2π / P is the frequency of the diurnal or seasonal forcing, and Ŝ m λ,φ ( ) are the complex sinusoidal coefficients, with the hat indicating complex quantities (note, however, that S 0 is real). The coefficients are derived from the insolation in an expression closely related to the Fourier transform: A common application is diurnal forcing. For areas in permanent darkness, the solution is trivially € ˆ S m (λ,φ) = 0. For others, the diurnally averaged insolation can be expressed analytically (e.g., Levine et al. 1977). One first finds the maximum hour angle of illumination, h max , (h max = π for areas of constant illumination) where λ is the latitude and λ 0 is the sub-solar latitude, as before. The average insolation is a real quantity, so written without the hat, and is given by where the ratio S 0 /S SS is the longitudinal average of µ 0 . The decomposition of the solar forcing can also be written analytically. For a location that has hour angle h 0 at time t = 0, the first term is and, for m > 1, If the latitude of the surface element or the sub-solar latitude are near equatorial, then the solar terms are dominated by the first two terms, then diminish quickly with higher order; at the equator, the magnitudes of the terms are proportional to 1, π/2, 2/3, 0, -2/15, etc., (Paper I). Fig. 3-1 shows an example of the decomposition of insolation for a body with a sub-solar longitude of 2.24° and at a latitude of +30° into a constant plus one term (dashed) or seven terms (dot-dashed).
Insert fig 3-1 here.
1 mu0 = vt3d_solar_mu(lat, lon, lat0, lon0, /lonavg) 2 sol_terms = vt3d_sol_terms_diurnal(dist_sol_au, albedo, lat, h_phase0, lat_sol, n_terms) Fig 3-1. 1 Solid gray: numerical calculation of insolation on a bare spot at 9.5 AU with A = 0.6, at latitude 30°, a sub-solar latitude of 2.24°, and an hour angle at zero phase of -6 hours (-90°). The off-center maximum heating was chosen to force complex coefficients of the sinusoidal expansion. Solid: sinusoidal approximation with M=1, which captures the approximate phase and amplitude of the solar forcing. Dashed: sinusoidal approximation, with M=7, which is hard to see except at the "corners" near dawn and dusk because of the accuracy of this approximation.
As discussed in more detail in Paper I, the temperature can be written in terms of sinusoidal terms as well. If the density, specific heat, and thermal conductivity are constant with depth, then the solution to the diffusion equation (Eq. 3.1-2) with flux specifying the lower boundary condition (Eq. 3.1-1) is the sum of damped waves with wavelength € 2π 2mZ and e-folding distance of € 2mZ ( Fig. 3-2), where Γ = kρc (3.2-5) 2 1 vty16_fig3_1, phase, flux_sol, flux_sol_1, flux_sol_7 2 therminertia = vt3d_thermalinertia(dens, specheat, thermcond) Volatile Transport II (VT3D) is the thermal inertia (in cgs units of erg cm -2 K -1 s -1/2 , or MKS units of tiu = J m -2 K -1 s -1/2 , where tiu, or thermal inertia units, is the SI unit proposed in Putzig 2006), and is the skin depth, as defined by Spencer et al. (1989) and HP96. Other authors use definitions of the skin depths that differ by a constant from Eq. 3.2-6 (e.g., Mellon et al. 2008).
The solution to the conduction equation (3.1-2) can be written as (3.2-7) 2 where € ζ = z /Z is identical to the unitless scaled depth introduced by Spencer et al. (1989).
Temperatures for cases where the thermal-physical properties are variable with depth are treated elsewhere (Fivez & Thoen 1996;Grossel & Depasse 1998;Karam 2000 The goal is to use Eq. (3.2-7) to create initial conditions for numerical calculations in the three dimensions of latitude, longitude, and depth, given the coefficients for the temperature. There are three ways to do this. The simplest is to expand Eq. (3.1-3) to get the Fourier terms of the temperature directly, as described in Paper I and recapped here. The second is to follow this step by an adjustment of the average temperature, to ensure time-averaged energy balance. The third is to expand into Fourier terms of (T 4 ).
The average temperature, € T 0 , is found by substituting the sinusoidal forms of S and T into Eq. (3.1-3) and taking the first-order, time-averaged component, resulting in . This simply states that the mean temperature balances the mean solar insolation and the flux at the lower boundary condition.
The temperature coefficients, € ˆ T m , are found for each m by also substituting S and T into Eq. (3.1-3), taking the appropriate derivatives ( € d / dt →imω , € d /dz → imZ −1 ), and taking only those terms proportional to exp(imωt). The results are most simply expressed by defining the following variables, which represent the derivative of energy flux or heating with respect to temperature (in cgs units of erg cm -2 s -1 K -1 ) for the fundamental frequency: As described in Paper I, a system where Φ S is zero has temperatures that track the solar forcing, while positive Φ S serves to dampen the amplitude of the temperature variation and introduce a lag. The temperature variation (T m ) as a function of the solar variation for bare or volatile-covered area is found from: The temperature is then calculated from Eq. (3.2-8) overestimates mean temperatures, with the discrepancy being worse with larger peak-to-peak temperature variations, because the time average of T 4 is larger than Once an estimate of the peak-to-peak variation is found, the value of € T 0 can be adjusted downward so that the time-average thermal emission equals the sum of the insolation and internal heat flux, iterating over Eq. (3.2-9b) and (3.2-10) until the mean thermal emission converges on the mean absorbed insolation.
As described in Paper I, the time lag and smaller temperature variation can be described by a dimensionless parameter, Θ S .
(3.2-11) 1 phi_s = vt3d_dfluxdtemp_substrate(freq, therminertia) 2 phi_e = vt3d_dfluxdtemp_emit(emis, temp) 3 temp_terms = vt3d_temp_terms_bare(sol_terms,flux_int,emis,freq, therminertia) temp_terms= vt3d_temp_terms_bare_iter(sol_terms,flux_int,emis,freq, therminertia,thermcond) Volatile Transport II (VT3D) The ratio Θ S =4Φ S /Φ E is essentially the thermal parameter of Spencer et al. (1989), but defined at the time-averaged local temperature, rather than at the subsolar temperature. As in Paper I, Θ S can be used to quantify the shift and decrease in amplitude of the response to solar forcing . See Paper I for more discussion on the interpretation of Eq. (3.2-12) in terms of real quantities. Fig 3-1, with unit emissivity. The result of the numerical integration is shown as solid, thick gray. Initial approximations to the temperature are shown for M = 1, without an adjustment of ! to balance energy fluxes (solid), M = 7, without Volatile Transport II (VT3D) an adjustment of T 0 (triple-dot-dash), and M = 7 with an adjusted value of T 0 (dashed), for a period of 22.6 hours and a thermal inertia of 16 tiu (for a thermal parameter Θ S = 6.0).
In some situations of large variation in the solar forcing and small values of Θ S , the linearization of T 4 is poor, and it is better to expand in the emitted flux instead.
The mean term is found from Eq. (3.2-8): F 0 E = S 0 + F . Before, we expanded the emitted flux in terms of temperature, but now we expand temperature in terms of emitted flux: The conduction is a now a small correction to the thermal emission, so the error in the linearization is confined to the second-order term. Substituting Eq. 3.2-14 into the original equation for energy balance, Eq. (3.1-3), gives: which, with some manipulation, gives an expression similar to Eq. (3.2-10): and one similar to Eq. (3.2-12): From F E , calculate the surface temperature and its Fourier terms from T = (F E / εσ ) 1/4 . In some cases, more Fourier terms (M = 30 to 100) need to be used than when calculating the temperature terms directly, to avoid ringing at sharp transitions in the solar forcing. The analysis in this section can be used for more complex insolation patterns as well. Any insolation pattern, no matter how complex, can be decomposed with a Fast Fourier Transform (FFT) algorithm. If the forcing happens on two different frequencies, such as seasonal and diurnal, then the sums (in e.g., 3.2-7) can be performed over a discrete set of m, not necessarily contiguous. For the specific case of combined seasonal and diurnal variation, we can often decouple the two timescales (Young 2012a). First, calculate the seasonal thermal wave as a function of time, using longitudinally averaged insolation. If the seasonal and diurnal skin depths are sufficiently different, then the diurnal wave is superimposed on the uppermost portion of the seasonal one, and the seasonal wave can be treated as a linear contribution to the diurnal wave. This is mathematically identical to an internal heat flux term, F, already introduced. In other words, the seasonal thermal heat flow to and from deeper layers affects the diurnal temperatures by affecting the energy flux at the lower boundary. This works because the orbital periods in the outer solar system are orders of magnitude longer than the rotational periods. Pluto, for example, has an orbital period of 248 years and a rotational period of only 6.4 days. The seasonal scale height is larger than the diurnal one by (248 years/6.4 days) 1/2 , or a factor of 119. A typical depth for the lower boundary is 6 diurnal skin depths. This is only 0.05 times the seasonal skin depth, or a tenth of a tick in Fig 3-2, clearly in the linear regime of the seasonal wave.
Numerical solution for a single bare location (Area I or III)
For some applications, the results of the analytic calculations may be adequate. For others, higher accuracy is needed. Even for these applications, the analytic solution provides an initial condition that improves convergence.
The continuous equations of Section 3.1 are converted to a form suitable for computation. This is done by discretizing the variables into L locations on the surface (indexed by l), J + 1 layers within the substrate (indexed by j), and choosing time step schemes that take the state from time n to time n + 1 (i.e., no leap-frogging time step schemes). The general approach is to treat the time step as a finite-difference diffusion problem, with flux conditions at both the upper and lower boundaries (Press et al., 2007;Haltiner and Williams 1984). Figure 3-4 represents the discretization of the numerical model. The substrate is divided into J+1 layers, indexed with j = 0 for the top-most layer to j = J for the lowest layer, and defined by a depth z j and a thickness Δ j . Depths (z) are less than or equal to zero, and become more negative with increasing index. Thicknesses of the layers (Δ j ) are positive. Thickness can vary with index j to speed computation ( Table 2). All layers except the top layer extend from € z j − Δ j /2 to € z j + Δ j /2, with temperature T l,j,n defined at the center of the layer. The Volatile Transport II (VT3D) top layer extends from z = 0 to z = -Δ 0 , with the temperature T l,0,n defined at the top of the layer. If the layering is the same across the globe, then The use of layers that are free to vary their thickness 1 with depth improves efficiency, since the computational time is proportional to the number of layers, requiring only a little additional computation at the beginning of a calculation. A common layering approach uses a geometrically increasing thickness, where the thickness of each layer is some factor larger than the layer above (typically a factor of 1.1 to 1.5, e.g., Hansen and Paige 1996;Keiffer 2013). When modeling a diurnal wave, this allows modest computational savings, since geometrically thickening layers can span down to six skin depths with 2-3 times fewer layers than for layers of equal thickness. Unevenly spaced layers is even more important for practical modeling of the diurnal and seasonal wave simultaneously. Because the skin depth is proportional to ω -1/2 , the ratio of diurnal and seasonal skin depths equals the square root of the ratio of their periods, if thermophysical properties are constant with depth. This is important even for Mars, where the orbital period is roughly 669 times the rotation period, so the seasonal skin depth is roughly 25 times the diurnal skin depth (if thermophysical properties are constant with depth). In the outer solar system, the orbital periods can be quite long, so that the equivalent ratio of seasonal to diurnal skin depths is 88 for Enceledus, 118 for Pluto, and 700 for Eris. If the thermal conductivity is greater at depth, these ratios can be even larger. Here the savings for geometrically thickening layers is dramatic, allowing calculation to 100, 1000, or even 10,000 diurnal skin depths with computational savings of ~20, ~100, or ~1000 respectively. For example, layers that begin with a thickness of 0.25 diurnal skin depths can reach 10,000 diurnal skin depths with only 41 layers for a thickening factor of 1.5, or with 87 layers for a thickening factor of 1.2. The goal is to cast the equations as matrix operations to take advantage of the fast array operations that are available in many modern computer languages. The continuous equations of Section 3.1 can be cast as explicit equations (Fig 3-5), where the new temperature depends explicitly only on the previous temperature (Press et al. 2007;Haltiner and Williams 1984). The explicit expressions for diffusion equations are only accurate to first order in the time step, Δt, and require small time steps for stability. For explicit equations, the timesteps must satisfy (Δt / P) ≤ (Δz / Z ) 2 / 4π , or slightly more than 200 steps per period for a vertical sampling of 4 layers per skin depth.
The explicit linearized problem can be described with a (J +1) × (J +1) tridiagonal matrix ( Fig. 3-5). The new temperatures depend on the current temperatures in the layer above (with matrix element α, mnemonically "a for Above"), the current temperature in that layer (with matrix element η, mnemonically "h for Here"), and the current temperatures in the layer below (with matrix element β, mnemonically "b for Below"). Accuracy and stability can be improved by using implicit (Crank-Nicholson) methods, which solve equations involving both the current and the next temperatures ( Fig. 3-7), at the cost of computational complexity (Press et al. 2007;Haltiner and Williams 1984). The Crank-Nicholson scheme results in an equation that is accurate to second order in the time step, and satisfies von Neumann stability criteria for all sizes of time step. The implicit (Crank-Nicholson) problem uses two (J +1) × (J +1) tridiagonal matrices, with primed elements on the right-hand side of the equation and double-primed elements on the left. The goal of this section is to derive the matrix elements, which are summarized in Tables 3 to 5.
Matrix equation Matrix elements Explicit
To find the energy balance in layer 0, integrate the conduction equation (Eq. 3.1-2) over the top layer, from z = -Δ 0 to z = 0. Add this to the energy balance equation (Eq. 3.1-3) to get: where the overbar indicates the time-averaged value over the time step t n to t n+1 . The subscript for time in the insolation and emission terms is n' to indicate that it varies over the time interval from n to n+1. The change in enthalpy over layer 0 can be approximated as a function of the temperature sampled at the top of the layer: where € Φ l, j H has units of erg cm -2 s -1 K -1 , with the superscript H representing heat or enthalpy.
Eq. 3.3-2 samples the temperature of layer 0 at the top of the layer. If the temperature is integrated over layer 0 instead, then the slope of the temperature through layer 0 needs to be included; this depends on € T l,1,n , and is a second-order effect that I ignore here.
Defining a unitless measure of the time step, τ, (radians per timestep): and a unitless measure of the thickness of layer j expressed as a fraction of the skin depth (c.f., Spencer et al. 1989) (3.2-9a) and only depends on the physical properties of the problem. In contrast, H additionally includes non-dimensional factors that depend on the numerical choices of τ and € δ l, j . In general, I represent the fluxes-per-temperature that depend only on the physical properties with a single subscript for the physical process (e.g., S or H), and the ones that are discretized and depend on τ and € δ l, j with the superscript for the process and a subscript for the indices of location and time.
The average solar insolation between t n and t n+1 , S l,n' , depends on the geometry (heliocentric distance, and subsolar latitude and longitude) and the albedo. If the insolation is evaluated at the start of the timestep ( S l,n' ≈ S l,n ), then the results will be skewed in time by half a timestep, which is acceptable when timesteps are small (e.g., Spencer et al. 1989), but not at the larger timesteps allowed by the Crank-Nicholson method. A simple correction is to average the insolation at the start and end of the timestep Volatile Transport II (VT3D) The average thermal emission at the midpoint of the time interval is found by evaluating the first-order Taylor expansion of € T 4 at the average temperature for the time interval, € (T l,0,n +1 − T l,0,n ) /2, assuming that the emissivity is constant over the time interval.
where € Φ l,n E has units cgs of erg cm -2 s -1 K -1 , with the superscript E representing emission: where Φ E is defined in Eq. (3.2-9b). As with the enthalpy term, € Φ E only depends on the physical properties of the problem, and € Φ i, j E is the value used in the descretized calculation.
Unlike the enthalpy term, The next term in Eq. (3.3-1) is the thermal conduction. For explicit equations, the derivative is evaluated at the start of the time interval: where € Φ l, j K ,B has cgs units of erg cm -2 s -1 K -1 . The superscript K represents thermal conduction, and the superscript B represents conduction from the layer below. The expression for is essentially the distance to the middle of the layer below, modified to ensure continuity of fluxes at layer boundaries: and the unitless distances used for calculating thermal gradients from the layer below is Volatile Transport II (VT3D) Even if z j and Δ j are constant from one location to the next, the dependence on k means that € Δ l, j A and € Δ l, j B may vary with location. Again, € Φ S only depends on the physical properties of the problem, and € Φ i, j K ,B additionally includes non-dimensional factors that depend on the numerical implementation.
The more accurate and more stable Crank-Nicholson scheme (Press et al. 2007) replaces the derivative in with the average of the derivatives calculated at the start and end of the time step (at time t n and time t n+1 ): The explicit discretized equation for energy balance of layer 0 becomes while the implicit equation is Collecting terms for the explicit equation (only € T l,0,n +1 on the left-hand side) results in: and for the implicit equation ( € T l,0,n +1 and € T l,1,n +1 on the left-hand side) results in: The goal is to now turn Eq. (3.3-16a) and (3.3-16b) into equations that express the matrix multiplication shown in Fig. 3-5 and Fig. 3-6, respectively. For the top layer, the matrix equations are T l,0,n+1 = η l,0,n T l,0,n + β l,0,n T l,1,n +γ l,0,n (3.3-17a) for explicit and for implicit time step schemes. Divide Eq. (3.3-15) by the total flux per temperature with units erg cm -2 s -1 K -1 , where the superscript T represents total, to get the matrix elements for j = 0, Areas I and III (Table 3). The forcing is a function of time, and is subscripted n.
Because the derivative of the thermal emission depends on time, the matrix elements β l,0,n and η l,0,n also depend on time.
For interior layers, the integral form of the diffusion equations (Eq. 3.1-2), averaged over time step n is € ρc ∂T ∂t dz where the overbar indicates the time-averaged value over the time step t n to t n+1 .
In the lowest layer, as in the interior layers, the net change in enthalpy of the layer is balanced by the difference between the flux entering from below and leaving from above ( Fig. 3-4). For layer J, unlike for layers j = 1... J-1, the flux from below is specified as a lower boundary condition. The energy balance equation for the lowest layer is: where F l is the heat flux at the lower boundary for location l.
The change in enthalpy over layer j (j = 1.. J) can be approximated as a function of the temperature sampled at the middle of the layer: Volatile Transport II (VT3D) The expressions for conduction into the layer above for from the layer j are similar to those into layer 0 from layer 1 (Eq. 3.3-9 and 3.3-13). For the explicit scheme, it is: and for the Crank-Nicholson implicit scheme it is: and substituting Eq.
Collect terms and divide by € Φ l, j H , to get the matrix elements (Tables 4 and 5). For the interior layers, the matrix elements α l,j , β l,j and η l,j are independent of time.
Fig. 3-7 compares the sinusoidal, explicit, and implicit calculations (at large and small timesteps) for a bare spot at 9.5 AU with A = 0.6, ε=1, and Γ = 16000 erg cm -2 s -1/2 K -1 = 16 tiu, at latitude of 30°, a sub-solar latitude of 2.24°, P = 22.6 hours, and an hour angle at zero phase of -6 hours (-90°). Calculations were performed on a vertical grid with For the simplest, the single frequency sine-wave (M = 1) with no adjustment to the mean temperature, the numerical answer agrees well with the converged answer within 60° rotational phase (Fig 3-8, top); the other initial conditions agree with the converged answer even more quickly (within one time step, for the M=7 case with adjusted mean temperature). The calculated temperatures for both M = 1 and M=7 are too warm at the end of one period if the mean temperature for the initial condition was not adjusted (Fig 3-7, top and middle), but reaches the proper temperature with adjustment (Fig 3-7, bottom). All three cases shown have a similar convergence rate. Most of the gain is in the first period, with subsequent periods improving the solution by 12-20% per period.
3.4a. Overview and explicit timesteps
In this section, I present notes on how to solve the matrix equations in Figs. 3-5 and 3-6 in a way that takes advantage of the fact that for many problems substrate properties are often constant with time and location. I show how the implicit and explicit equations can be computed as a single matrix operation for those locations which share common substrate properties. This speeds calculation because it avoids "for-loop" constructions, with a speed savings that depends on the computer language involved. This section also shows how to 1 vty16_fig3_7a vty16_fig3_7b vty16_fig3_7c precompute the matrices associated with the substrate: both the elements for explicit calculations (the light gray elements in Fig 3-5, and the light-gray single-primed elements on the right-hand side of Fig 3-6), and the Lower-Upper (LU) decomposition of the matrix needed for implicit calculation (the light gray double-primed elements on the left-hand side of the equation in Fig 3-6.). Since LU decomposition is the first of the two steps needed in solving a tridiagonal matrix (Press et al. 1997), precomputing the LU decompositon of the substrate portion of the tridiagonal matrix cuts computation time roughly in half.
The key to these efficiencies is to separate the calculations for the uppermost layer (j = 0) from the lower layers (j = 1 to J). In addition to helping with the bare calculations, some of the notions introduced here will be required for implicit calculations of the interacting surfaces.
We separate the temperatures at a given location into a scalar describing the temperature of layer 0, T l,0,n , and a row vector of length J describing the temperatures of interior layers, T l,1.. J,n : With this separation, the timestep for Areas I and III can be written for the explicit timestep (Fig 3-5 € a l is a Jelement column vector with one non-zero element. 1 vt3d_step_expl_1loc, alpha_i, beta_0, beta_i, gamma_0, gamma_J, temp_0, temp_i Volatile Transport II (VT3D) S l is a tridiagonal matrix whose J-1 lower elements are vector with one non-zero element: To simplify the graphic, the time and location subscripts are dropped (e.g., η 0 for η l,0,n ). The temperature array is divided into the uppermost layer, T 0 , the next lower layer, T 1 , and remaining layers for j = 2..J, T j . The elements of the substrate matrix S consist of the three arrays α 2..J , η 1..J , and β 1..J-1 . Darker elements with white lettering correspond to the dark gray elements in Fig. 3-5, and change with each time step. Lighter elements with black lettering correspond to the light gray elements in Fig. 3-5, and are independent of time. White elements are zero.
Computation of Eq. (3.4-2) is displayed graphically in Fig 3-9. The uppermost temperature, € T l,0,n , is calculated by simple scalar arithmetic. The interior temperatures are calculated by matrix multiplication using a matrix that is likely to be time-independent, with additional terms added for T 1 and T J . In many applications, the substrate properties and internal heat flux are assumed to be constant over much of the body. In that case, in Eq. 3.4-2, the substrate arrays, and the temperatures in the interior layers 1 .. J as a J × L matrix with J rows and L columns formed by the concatenation of L temperature arrays of length J: It is admittedly awkward that T {L},0,n is an array, while α {L},1 is a scalar. I hope that context and Appendix A can help.
The surface temperatures are listed as a single 1-D array covering all the locations, rather than as a rectangular matrix of longitude and latitude. This is to simplify the matrix expressions of multiple locations. In addition, this allows for other divisions of the surface rather than simply a rectangular division, which tends to have needlessly small surface elements near the poles. Tiling schemes that maintain similar areas per tile need π/2 fewer tiles than equirectangular tiling schemes.
The new temperatures can be calculated in a way that takes advantage of array arithmetic: The computation represented by Eq. 1 vt3d_step_expl_nloc, alpha_i, beta_0, beta_i, gamma_0, gamma_J, temp_0, temp_i Elements are labeled as in Fig 3-8. . "*" indicates element-by-element multiplication of two arrays (above the dotted line) or the multiplication of each row by a scalar (below the dotted line). "X" indicates matrix multiplication.
3.4b. Implicit timesteps
With the division of temperatures into layer 0 and layer 1 .. J in Eq. (3.4-1), the implicit timestep for a single location for Areas I and III (Fig 3-6) can be written as We treat Eq. (3.4-9b) as a banded tridiagonal matrix to take advantage of the fact that the terms € " " a l and € " " S l are constant with time. This is a special case of inversion by partitioning, whose solution is presented in Press et al. (2007;section 2.7.4). A similar problem was addressed by Xing-Bo (2009). This allows us to precompute the lower-upper (LU) decomposition of € " " S l . The solution to can be written by defining two column vectors € y l and z l,n of length J, and two scalars c l,n and d l,n : The solution is shown graphically in Fig 3-12. Note that the only the time-independent substrate matrix needs to be inverted, and this can be done at the start of the computation, rather than for each time step. Furthermore, the array y is also independent of time. The new temperatures are then Volatile Transport II (VT3D) is an outer product of a J-length column vector and an L-length row vector, yielding a J × L matrix obtained by The graphical schematic is shown in Fig 3- (equivalent to the outer product of two arrays for the multiplication below the lowest dotted line).
Example: Mimas
As a worked example, Fig 3-
VT3D for local volatile-covered locations (Area II)
In this section, I consider locations that have volatiles on their surfaces, but for which the energy balance is essentially local. For worlds where the surface pressure is too low to effectively transport volatiles over the surface, the transport of energy, through latent heat of sublimation and deposition, does not effectively influence on the surface temperatures. This is the case on Io, and almost certainly the case on the large volatile-covered Kuiper-belt objects when far from perihelion. These are the isolated, volatile-covered areas (Area II) in Fig 2-2.
Within the substrate, the physics of thermal conduction and the lower boundary condition for the volatile covered locations (Area II) is identical as for the bare locations (Areas I and III, Section 3), and will not be repeated here. At the surface, on the other hand, the energy equation contains two new terms, one related to the energy needed to heat the volatile slab, and another related to latent heat exchange between the surface and the local gas column via deposition and sublimation. The continuous form is discussed in Section 4.1 and analytic expression for an initial condition is discussed in Section 4.2. Because the energy equations are strictly local, the form of the numerical implementation is very similar to that in Section 3. Only the form of the matrix elements € η 0 and € β 0 change, as discussed in Section 4.3.
Analytic expressions for isolated volatile-covered locations (Area II)
The energy equation at the surface balances net heating or crystalline phase changes with absorbed sunlight, thermal emission, thermal conduction, and latent heat of sublimation/condensation. The total energy equation is where m V is the mass per area of the volatile slab, € ∂H V /∂t is the time derivative of the enthalpy of the volatile slab in energy per mass (equal to € c V ∂T /∂t if there is no phase change, see Eq. 4.1-2, where c V is the specific heat of the volatile slab. Note c V is subscripted V for volatile, not V for constant volume), and L S is the latent heat of sublimation. L S is subscripted with S to distinguish it from the latent heat of crystalline phase change (L C ) and or the number of discrete locations on the surface (L, Section 3.3).
At the surface, a volatile slab is assumed to be isothermal within its vertical extent (See Fig 4.1), with a temperature equal to that at the top of the substrate. As described in Paper I, the isothermal slab was assumed in Hansen and Paige (1992) and Hansen and Paige (1996). This has been justified (David Paige, personal communication) by assuming that if the slab porous, it is in contact with the local atmosphere and the gas can isothermalize the solid; conversely, if the slab is not porous (e.g., from annealing, Eluszkiewicz et al., 1998) then its conductivity will be high, helping to isothermalize a thin enough slab. For very thick deposits, such as the suspected N 2 reservoir seen on Pluto, one approach is to keep track of mass per area of the volatiles available for sublimation as a separate quantity from the mass per area that is isothermalized (Young et al., 2016). Layering within the volatile slab will be treated in a later paper.
Energy Fluxes Thickness or Separation
Temperature temperature of a crystalline phase transition, the derivative of H V with respect to T at constant pressure equals c V , the specific heat of the volatile slab, Eq. (4.1-2a). Adding energy to the slab raises its temperature. At the temperature of a crystalline ice phase transition, the latent heat equals the difference in H V between two phases (L C ); adding energy to the slab converts ice from the low-temperature to the high-temperature phase without changing the temperature. This gives: where T C is the temperature of a crystalline phase transition, L C is the latent heat of crystalline phase change, and X is the mass fraction of the high-temperature phase. If c V is treated as a constant, then we can write € H V = c V T +L C X , which is proportional to the "pseudo temperature" used by John Spencer (personal communication).
Tracking the enthalpy of the slab, rather than its temperature, was introduced because N 2 has a reversible transition between the α and β phase at 35.6 K (e.g., Scott 1976), a relevant temperature for Pluto, Triton, and elsewhere in the outer solar system. Some volatile ices have no solid-state phase transitions at relevant temperatures, which simplifies matters. Others have multiple transitions, or non-reversible transitions. In all cases, the enthalpy is the general quantity that can account for phases as well as temperatures, and Eq. 4.1-2b represents the "special case" of enthalpy change at a phase transition temperature.
Area II satisfies local energy and mass balance. Assuming negligible horizontal transport of mass, any mass lost by the atmosphere either condenses or escapes.
where m A is the mass per area of the atmosphere, and E is the escape rate in mass per area per time. Negative values of E can be used to account for injection into the atmosphere from non-sublimation sources such as geysers (see Paper I). If the atmosphere is in vapor-pressure equilibrium with the surface, then the mass of the atmosphere is a function only of the surface pressure and effective gravity (defined by g = p S /m A , which is smaller than the surface gravity for extended atmospheres by a factor of 1 -2 H/R, where H is the scale height and R is the surface radius, see Paper I): where € p S (T) is the equilibrium vapor pressure at temperature T. The pressure derivative in Eq. (4.1-4) can be evaluated using the Clausius-Clapeyron relation, where m molec is the mass of one molecule and k B is Bolzmann's constant. Substituting Eqs.
(4.1-2a), (4.1-2b), (4.1-3), and (4.1-4) into Eq. (4.1-1) and collecting like terms yields: Eq (4.1-6a) is strikingly similar to the equivalent equation for the bare areas (3.1-3), differing only by the inclusion of the enthalpy and latent heat terms on the left-hand side, and the latent heat of the escaping atmosphere on the right side. The enthalpy and latent heat of sublimation introduce terms proportional to the frequency, ω, in the analytic equations (Section 4.2). They also introduce two additional terms to the total expression for the change in energy flux per temperature for the upper-most layer ( € Φ l,n T ) in the numeric solutions (Section 4.3), but the form of the matrix equations is unchanged. When there is a phase change, (4.1-6b), the analytic and numeric forms are both simpler, as the temperature does not change with time.
Analytic approximation and initialization for isolated volatile-covered areas (Area II)
As in Section 3.2, an analytic form of the continuous equations (Eq. 4.1-6a, b) can be found by decomposing the solar insolation and temperature into a sum of sinusoidal terms of frequency ω (Eqs. 3.2-1, 3.2-7). Additionally, we specify that the temperature of the volatile slab equals the substrate temperature at the substrate-slab interface € T V (λ,φ,t) = T(λ,φ,z = 0,t) (4.2-1) The escape rate is decomposed into a sum of sinusoidal terms in an analogous manner to the solar forcing where is the frequency of the diurnal or seasonal forcing, and € ˆ E m is the complex sinusoidal coefficient (the complexity is indicated by the hat).
As in Section 3.2, the average temperature is found by substituting the sinusoidal forms of S and T into Eqs. (4.1-6a, b) and taking the first-order, time-averaged component.
Φ V is simply related the to specific heat per area of the volatile slab, being the energy per degree per area. Φ A is related to the energy needed for the atmosphere to vary its column mass (atmospheric "breathing"). If the surface temperature rises, the equilibrium pressure rises too. The column mass of the equilibrium atmosphere increases due to sublimation from the surface. This takes energy, through the latent heat of sublimation. The result is that the specific heat of the volatile slab and the atmospheric "breathing" delay and decrease the thermal response (Paper I). The resulting expansion of 4.1-6a is: 1 temp_0 = vt3d_temp_term0_local(sol_0, flux_int, emis, latheat, mflux_esc) 2 phi_v = vt3d_dfluxdtemp_slab(freq, mass_0, specheat) 3 phi_a = vt3d_dfluxdtemp_atm(freq, temp_v, frac_varea, gravacc, name_species) € ω = 2π /P Volatile Transport II (VT3D) If the equilibrium temperature is at a crystalline phase boundary, then the corresponding equation for the change in the slab's state is Latent heat of escaping gas , T = T C (4.2-6) As described in Paper I, we can define non-dimensional thermal parameters, analogous to the thermal parameter of Spencer et al. (1989), to quantify the importance of heating of the volatile slab and atmospheric breathing. The substrate thermal parameter, Θ S , is defined in Eq. 3.2-11. Two new parameters are: Substituting into Eq. (4.2-5) shows how the amplitude and phase of the thermal response depends on the thermal inertia, the specific heat and depth of the volatile slab, and the extent of the atmospheric "breathing." As for the bare areas (Areas I and III), the expansion can be written in terms of the emitted thermal flux in the case of large temperature variations, giving Volatile Transport II (VT3D)
Numerical solution for isolated volatile-covered areas (Area II)
The discretization for the interior layers (j = 1..J-1) and the lowest layer (j = J) is the same for the isolated, volatile-covered locations (Area II) as it is for the bare locations (Areas I and III). The discretization for the volatile slab and the upper two layers are shown in Fig 4-1. Although the physics is different in the presence of a volatile, the numerics are nearly identical for all calculations on a local level, whether volatiles are present or not.
First consider usual case where the volatile slab is not at a crystalline phase transition temperature. As with Areas I and III, to find the energy balance in layer 0, integrate the conduction equation (Eq. 3.1-2) over the top layer, from z = -Δ 0 to z = 0. Add this to the energy balance equation (Eq. 4.1-6a) to get Eq. (4.3-1), the volatile-covered equivalent to Eq. where the overbar indicates the time-averaged value over the time step t n to t n+1 .
The enthalpy of layer 0, insolation, emission, and conduction are the same as for Areas I and III (Section 3.3).
The second term in Eq (4.3-1) reflects the change in the enthalpy of the volatile slab with temperature. The volatile slab mass, m l,n V , can change over the time interval. However, this change is going to be small unless the slab is about to completely sublime, in which case this term contributes little. Ignoring the change in volatile slab mass during the time interval, this term becomes: Volatile Transport II (VT3D) where c l V is the specific heat of the volatile slab at location l, € Φ l,n V has units of erg cm -2 s -1 K -1 , and the superscript V stands for volatile slab The third term in Eq (4.3-1) is related to the amount of latent heat required sublime the atmospheric mass needed to maintain vapor-pressure equilibrium with a higher surface temperature. Linearizing the change in surface pressure with respect to time gives where € Φ l,n A has units of erg cm -2 s -1 K -1 , and the superscript A stands for atmosphere.
The temperature dependence of pressure is highly non-linear. If this is a dominant source of error, then one either chooses a small τ, or iterates from an initial guess at a temperature € T l,0,n +1 approx to an improved temperature € T l,0,n +1 . In the latter case, by Taylor expansion of p around This can be cast in a form parallel to that of Eq. (4.3-4) by where the derivative in € Φ l,n A is evaluated at the current guess at a temperature € T l,0,n +1 approx . The term € F l,n A has units of erg cm -2 s -1 , and combines mathematically with the solar forcing.
By writing in terms of the change in temperature relative to the previous time step (i.e., € T l,0,n +1 − T l,0,n ), rather than in terms of the smaller change in temperature relative to the current guess (i.e., € T l,0,n +1 − T l,0,n +1 approx ), can be simply combined with the other terms in the discretized energy equation. On the first iteration, € T l,0,n +1 approx = T l,0,n , and Eq. (4.3-7) reduces to , so Eq. (4.3-7) can be used with very little added computational complexity.
The escape rate, E, if present, can be calculated at the start or mid time, similarly to the insolation.
Substituting the expressions for the explicit equations gives an equation similar to Eq. 3.3-14: Collecting terms for the explicit equation gives As in Section 3.3, divide by € Φ l,n T , with units erg cm -2 s -1 K -1 , where the superscript T represents total, and the total "flux-per-temperature" now includes terms for enthalpy of the slab and interaction with the atmosphere The explicit equations for Area II can be written in a form that is identical to the explicit equation for the bare areas, Areas I and II (See Fig 3-6), with the resulting matrix elements given in the first row of Table 6. Volatile Transport II (VT3D)
The implicit form of the energy balance equation for Area II away from a crystalline transition temperature is found by substituting the Crank-Nicholson expression for the conduction term into Eq. 4.3-1. The energy balance for the implicit equation is Collecting terms for the implicit equation gives the volatile-covered equivalent to 3.3-16b: Again, divide by € Φ l,n T , with the resulting matrix elements given the second row in Table 5.
The matrix elements for j = 1 to J are identical as for the bare areas, Areas I and III (Section Volatile Transport II (VT3D) Tables 3 and 4). The methods for solving the matrix equations are identical as for the bare areas, Areas I and III (Section 3.4).
Matrix operations for single or multiple isolated volatile-covered locations (Area II)
As with Areas I and III, computation can be sped up considerably by taking advantage of matrix operations to calculate the temperature evolution on multiple locations with a single operation. The form of the matrices for isolated volatile-covered locations (Area II) is the same as for bare locations (Areas I and III). Therefore, once the matrix elements are found, the calculations can proceed identically to Section 3.4.
Example: KBOs with bare areas or locally-supported atmospheres
As an example, consider a point on the equator of a generic KBO with A = 0.7, ε = 0.9, no internal heat flux or mass loss, 50 cm s -2 surface gravity, and an equatorial sub-solar latitude, at a range of heliocentric distances (r) from 30 to 80 AU (Fig 4.2). The thermal parameter for the substrate (Θ S ) ranges from ~4-17 for 5 tiu (similar to those found by Lellouch et al. 2013), and ~1600 to ~7000 for 2100 tiu (pure, compact water ice). The thermal parameter for heating one g cm -2 of a volatile slab (Θ V ) is 7 times larger than Θ S for the 5-tiu case, or 27 to117, so it is not insignificant. Both Θ S and Θ V increase with heliocentric distance, since their numerators stay constant and their denominators (proportional to T 3 ) decrease. Thus, the same object can be a slow rotator at perihelion and a fast rotator at aphelion. The atmospheric thermal parameter (Θ A ), which has equilibrium pressure in the numerator, varies by 5-7 orders of magnitude over the range of r from 5.2 x 10 3 to 8.6 x 10 -2 for N 2 and 1.0 to 1.5 x 10 -7 for CH 4 .
For simplicity, the remainder of Fig 4.2 only contrasts a bare substrate with thermal inertia of 5 tiu, with a surface that is either N 2 -covered or CH 4 -covered. The effect of the decreasing temperature and increasing Θ S with r is clear in the progression for the substrate temperatures in the second panel, which plots the temperature for a bare substrate as a black solid line. The N 2 atmospheric "breathing" (green dashed line) has little effect at 70 AU, modifies the temperatures at 60 AU; by 40 and 30 AU, it nearly flattens out the temperature variation. The atmospheric breathing shifts the maximum by 90° phase, while the thermal conduction into the substrate shifts it by 45°; this is most evident when Θ A is comparable to Θ S , such as for N 2 near 60 AU or CH 4 (red triple-dot-dashed line) near 30 AU. This has the effect of decreasing the peak temperature, and increasing the temperature at both the dawn and dusk limbs.
The third panel of Fig. 4.2 shows the increase in the mean and amplitude of the temperature for a bare substrate (gray fill) with decreasing heliocentric distance. For N 2covered areas (green slanted fill), the temperatures are similar to the bare temperatures beyond ~70 AU. Closer than that, first the maximum temperature decreases while the dusk temperature rises, then the minimum and dawn temperature rise in tandem, until finally the maximum, dusk, dawn, and minimum temperatures all converge inward of 40 AU. For CH 4covered areas (red vertical fill), the temperatures match the bare temperatures beyond ~40 AU; inward of 40 AU, as with the N 2 , the maximum temperature decreases while the dusk temperature rises, with the slight rise in the minimum and dawn temperatures. The corresponding minimum, dawn, dusk, and maximum pressures are shown in the final panel. values of thermal inertia, for 1 g of slab at 1.3e7 erg g -1 K -1 , and for atmospheric "breathing" by N 2 (green dashed) and CH 4 (red triple-dotdashed). Second: Temperature for Γ=5 tiu over a single day for bare (solid, black), N 2 -covered (green dashed, indistinguishable from bare at 70 AU), and CH 4 -covered (red triple-dot-dashed, indistinguishable from bare at 40 AU and farther) at selected distances. Third: Minimum, dawn, dusk, and maximum temperatures over a range of distances for bare (gray), N 2 -covered (green), and CH 4 -covered (red) areas. Fourth: Minimum, dawn, dusk, and maximum pressures over a range of distances for N 2 -covered (green) and CH 4 -covered (red) areas.
VT3D for interacting volatile-covered areas (Area IV)
Currently, Pluto and Triton are expected to have similar surface pressures over the entire globe, independent of local insolation (Trafton & Stern 1983, Trafton 1984, Spencer et al. 1987. N 2 sublimes from areas of high insolation, with latent heat loss balancing the excess insolation. Sublimation winds carry this mass to areas of low insolation, where N 2 is deposited, adding latent heat as well as solid N 2 (Fig 2-2B). As long as the atmosphere is dense enough, transport of mass and latent heat will keep the volatile ice temperatures nearly constant over the globe. Through vapor-pressure equilibrium, the surface pressures will also be nearly constant. If the atmosphere is thin enough so that the sublimation winds are a significant fraction of the sound speed, then the surface pressures will vary over the globe. This case can be handled efficiently by treating the surface as a "splice" between the interaction regions or isobaric regions, which share the same surface pressure, and local regions, for which the surface pressure varies with location (Fig 2-2C In this section, I consider areas that have volatiles on their surfaces and which interact to share the same volatile ice temperature and surface pressure. This includes the entire globe for dense atmospheres, or the interacting portions of the splice for intermediate atmospheres ( See Fig 2-2B, 2-2C). I will discuss the continuous equations in Section 5.1, analytic equations in Section 5.2, the discrete equations in Section 5.3, and efficient solutions to the matrix equations in Section 5.4. In Section 5.5, I present a worked example of Pluto's seasonal activity, with code and output in the supplementary materials.
Continuous expressions for interacting volatile-covered locations (Area IV)
For interacting volatile-covered locations, Area IV, energy is transported between locations through mass transport of volatiles through the atmosphere and the latent heat of sublimation. What ties the multiple locations together is (1) a common volatile-ice temperature, € T V , and (2) conservation of mass over the interacting regions. This latter includes the atmosphere over all areas that share a single surface pressure, whether bare (Area III) or volatile-covered (Area IV), because raising the surface pressure increases the atmospheric mass over all locations that share a common surface pressure. That is, if the surface pressure of the atmosphere increases in the region of effective transport, the mass of the atmosphere will increase above both the volatile-covered areas (Area IV) and the bare areas (Area III). The expression for mass balance in the area of effective transport is found by integrating Eq. 4.1-4 over both Area III and Area IV: where Ω III and Ω IV represent the solid angle of areas III and IV. Both the surface pressure and the temperature of the volatile slab are constant over Areas III and IV; the terms involving gravity, pressure, and temperature can be factored out of the middle integral. Futhermore, the mass flux for Area III is zero, so that the first integral can be evaluated over just Area IV. With these changes, the mass balance equation becomes The areal average of the mass flux over Area IV is: where brackets represent an areal average over Area IV. The atmosphere escapes from above both bare and volatile-covered areas, so the areal average of E is taken over Areas III and IV: where primed brackets represent an areal average over Area III and Area IV.
f V is the fraction of the interacting areas (III and IV) covered with volatiles. In Paper I, which only treated a global atmosphere, this was fraction of the surface covered by volatiles. Here, with the possibility of a spliced atmosphere, the expression is written more generally.
With these definitions, the equation for mass balance over the areas of isobaric surface pressure becomes Eq. 5.1-5 illustrates the significance of the fraction of the surface covered by volatiles, f V . If the volatile ices are confined to a small patch, then that patch has to lose a lot of mass to supply an increase of the entire atmosphere in the isobaric area.
The local energy balance is the same as for localized volatile-covered areas, Eq. (4.1-1). Integrating Eq. 4.1-1 over Area IV, and substituting the equation for conservation of mass over isobaric areas, yields an equation for energy balance over all of Area IV, using the same notation for spatial averages as in Eq. 5.1-3a.
While the temperature of isolated volatile-covered areas depend only on local conditions b), the volatile ice temperature in the interacting areas depends on the spatial average of energy sources and sinks.
Analytic approximation and initialization for interacting volatile-covered locations (Area IV)
The analytic form of the continuous equations (Eq. 5.1-6) is very similar to that for the isolated volatile-covered areas, Area II, described in Section 4. As in Section 4, the solar forcing, the atmospheric escape, and the thermal wave are (1) decomposed into sinusoidal terms (3.2-1 for absorbed insolation, 3.2-7 for temperature, and 4.2-2 for escape), (2) substituted into Eq 5.1-6, and (3) isolated term-by-term. The m=0 term gives the expression for the time-averaged temperature: To find the variation in the temperature (the terms with m = 1 and higher), substitute the expressions for solar forcing, temperature, and escape into Eq. 5.1-6, expand the thermal emission term to first order in T m , and take only those terms proportional to exp(imωt). This expression is simpler with the spatially averaged versions of the "flux-per-temperature" expressions: If the substrate under all of the volatile ices has the same thermophysical properties, then the first two terms reduce to their local equivalents: Eq. 3.2-9a, b. Likewise, if the specific heat of the volatile ices are the same over Area IV, then the third equation (Eq. 5.2-2c) differs from its local equivalent (4.2-4a) simply by replacing the local volatile slab mass with the areal average. If there is no bare ground in the isobaric area (that is, if no locations are Area III), then f V = 1, and the last expression is identical to its local equivalent (4.2-4b). However, if only part of the isobaric area is volatile-covered, then Φ A (T ) > Φ A (T ) . A change in volatile temperature increases the atmosphere above both bare and volatile-covered locations in the isobaric areas, so more mass is exchanged between the surface and atmosphere, and more latent heat of sublimation is required. This means that the latent heat term is more effective at suppressing the temperature variation when there is a smaller fraction of surface volatiles. For temperatures away from a crystalline phase, with these substitutions, the spatially averaged energy equation is: If the equilibrium temperature is at a crystalline phase transition, then the corresponding equation for the change in the slab's state is
Numerical solution for interacting volatile-covered locations (Area IV)
Fig 5-1 shows the interaction between different locations in Area IV. There is no horizontal heat flow within the substrate. However, the volatile slabs exchange energy through latent heat of sublimation and condensation, and share a single temperature, T V . The temperature of the volatile ice slab therefore depends on the insolation over the entire volatile-covered interacting region, and the conduction from each of the substrate layers (layer 0) that immediately underlie the volatile ice slab. The temperatures of each of the topmost substrate layers depend, in turn, on the single volatile slab temperature, through thermal conduction. Because there is no horizontal heat flow within the substrate, the discretization for layers j = 2 .. J is the same as the other areas, so that much of the matrix form for the explicit equations is tridiagonal (Fig. 5-2). However, because volatile slabs of the areas interact ( Fig 5-1), the explicit discretized equation for the new T V has non-zero coefficients accounting for the conduction upward from each of the j = 1 layers (the upper row of the matrix). Similarly, the explicit discretized equation for each new T 1 has non-zero coefficients accounting for the conduction downward from each of the j = 0 slabs, all assumed to be at T V . The resulting The elements of the substrate arrays are derived from the discretation of the conductivity equation, Eq. (3.1-2), as before. The matrix elements for the substrate-the light gray elements in Figs 5-2 and 5-3-are unchanged from the previous cases. This holds even for the first layer, j = 1. The dependence of the temperature of the first layer at location l, T l,1 , depends only on the temperature below (T l,2 ) and above (T l,0 ). For Area IV, the assumption is that T l,0 = T V (that is, the upper surface of the substrate equals the volatile slab temperature, Fig 5-1). While this changes the format of the matrices (the line of α's in the left-most column in Fig 5-2 and 5-3), it does not change the value for the α's themselves. To find the elements for the implicit arrays α l, j , η l, j , β l, j (j = 1 .. J) and the lower-boundary element γ l,J , or their explicit counterparts (primed for the right-hand side and double-primed for the left) consult Tables 3 and 4.
The elements for the volatile slab-the dark gray elements on the top row of Figs 5-2 and 5-3-are related to, but different than, the corresponding elements for Area II (volatile- covered, non-interacting). As before, I first solve for temperatures away from the solid phase transition (T ≠ T C ). For Area IV, I integrate the conduction equation (Eq. 3.1-2) over the top layer, average that over Area IV, and add the result to Eq. 5.1-6a to replace the term with conduction at z = 0 (at the slab-substrate interface) with one at −Δ 0 (at the bottom of the first substrate layer). Taking the time average from time n to n+1 (indicated by overbars) yields Eq. 5.3-1, the areal averaged equivalent to Eq. 5.3-1 has areal averages for the thermophysical parameters (density, specific heat, mass of a slab, thermal conduction, emissivity), areal averages for the solar gain and heat lost by escape, and the inclusion of f V , the fraction of the interacting area that is covered by volatiles, in the latent heat and escape terms.
where ρ 0 c 0 is the areal average of the product of density and specific heat in layer 0, with cgs units of erg K -1 cm -3 , and m V c V is the areal average of the product of volatile slab mass and specific heat in the volatile slab, with cgs units of erg K -1 cm -2 .
The treatment of the first term is similar to that in the bare case; see the discussion near Eq. 3.3-2. As before, the temperature of layer 0 is sampled at the top of the layer. Because this is the slab-substrate interface, the temperature of layer 0 equals the volatile slab temperature within Area IV: T l,0,n = T n V . With the assumption that we can sample the temperature at the top of layer 0, the enthalpy term depends only on the change in the volatile slab temperature: , has units of erg cm -2 s -1 K -1 , with the superscript H representing heat or enthalpy. The discrete form for the areal average (cf. is simply the weighted average of the local values, summed over the locations within Area IV, The weights are simply the ratio of the solid angle of each location ( Ω l ) to the total solid angle of Area IV: Continuing to treat Eq. 5.3-1 term-by-term, the change enthalpy of the volatile slab also depends on the change in volatile slab temperature; see discussion near .
The latent heat term is the same over all locations, but differs from the local equivalents by the factor of f V : The insolation terms is simply the areal average of the insolation at each location in Area IV: The thermal emission depends on the areal average of the emissivity: For explicit equations, the expression for the areal average of thermal conduction is found by taking the areal average of Eq. 3.3-9, and making the substitution that T l,0,n = T n V : where Φ l,0 K,B is given by . Similarly, the expression for the implicit (Crank-Nicholson) equations takes the areal average of Eq. 3.3-13: Finally, the escape rate is calculated by the average over all the interacting regions, Area III and Area IV: Substituting the expressions for the explicit equations gives Collecting terms for the explicit equation gives As in Section 3.3 and 4.3, divide by Φ n T , with cgs units erg cm -2 s -1 K -1 , where the superscript T represents total. The total "flux-per-temperature" includes terms for enthalpy of the slab and interaction with the atmosphere The resulting of dividing Eq. 5.3-13 by 5.3-14, and the resulting matrix elemens, are given in Table 8.
The implicit (Crank-Nicholson) equation (5.3-15) differs from equation (5.3-12) only with the substitution of the conduction term: Collecting terms for the implicit equation gives 5.3-16) This equation is used to derive the elements for the matrix elements in Figs. 5-2 and 5-3, given in Table 8.
Matrix equation Matrix elements Explicit
is given by 5.3-14.
The discrete form of the equation for the change in temperature at a crystalline phase is trivial, since the volatile slab temperature does not change from time n to time n+1 in Eq. 5.1-6b.
5.4a. Overview and explicit timesteps
In Section 3.4, I divided the temperature into the upper layer and the interior layers (Eq. 3.4-1), as a means to speeding up calculations in Areas I, II and III. In Area IV, this division is required, as the temperature of each of the upper layers ( The matrix elements € η n V and € γ n V are defined in Table 8 or 9. The b arrays are similar to Eq. (3.4-3), except that the weighting factor is included: The new temperature of the volatile slab depends on the substrate ; this is similar to Eq. 3.4-8a, but slightly simpler. The multi-location matrix equation for the temperatures of the substrate is also similar to the non-interacting equivalent , differing only in that the topmost substrate temperature equals the temperature of the volatile slab.
Graphically, this is represented by Fig 5-5. . Elements are labeled as in Fig 5-5. "*" indicates scalar multiplication (above the dotted line) or element-by-element multiplication of two arrays (above and below the dotted line). "X" indicates matrix multiplication.
5.4b. Implicit timesteps
For the implicit case, it is most straight-forward to write the Crank-Nicholson scheme in terms of intermediate temperatures for the volatile slab T n V and substrate, T l,1.. J,n .
For the other areas, the banded tridiagonal matrix was a computational convenience. For Area IV, it is the most direct way of solving . The solution to can be written by defining two column vectors y l and z l of length J (defined as in 3.4-16a, 16b), and two scalars c n and d n : with which the temperatures at the next time step for location l are This solution can be confirmed by direct substitution into . The solution is shown graphically in Fig 5-6. Note that only the time-independent substrate matrix needs to be inverted, and this can be done at the start of the computation, rather than for each time step. Furthermore, the array y is also independent of time.
y 0 Fig 5-6. Graphical schematic of the implementation of an implicit time-step from time n to time n+1 for multiple non-interacting locations .
Elements are labeled as in Fig 3-9. "*" indicates element-by-element multiplication of two arrays (above the dotted line). "X" indicates matrix multiplication (equivalent to the outer product of two arrays for the multiplication below the lowest dotted line).
Example: PNV9 from Young 2013
As a worked example, Fig 5-7 shows the results of the calculations used for case PNV9 (permanent northern volatile #9) from Young 2013. This example was illustrated in Fig 1 of Young (2013) and Fig 3 of Olkin et al. (2015). The format of the figure is a still from the movies that show the seasonal evolution, as shown in various talks (e.g., Young 2012a). The code is included in the supplemental materials as vty16_fig5_7. The code included here is taken from the code actually run for Young (2013), with only superficial changes, to allow myself or others to reproduce the results of Young (2013) and Olkin et al. (2015). ε V ), the Bond albedo and emissivity of the substrate (A S and ε S ), the thermal inertia of the substrate (Γ), and the globally averaged N 2 inventory (N 2 ). Top right: Pluto's temperature and volatile mass for the listed year (2014.6). The subsolar latitude and heliocentric radius are listed (48° and 32.7 AU). The purple line gives an indication of the direction and magnitude of the sublimation winds, running from the North to the South. Blue indicates the volatile mass, where volatiles are present; the thickness of the bars are proportional to the mass, and the maximum mass is indicated (66.6 g/cm 2 ). The thin solid line indicates the surface temperature, which is a uniform 39.0 K for volatile covered areas, and is just above 40 K for bare areas (south of ~20°). Top left: Pluto as seen from the sun. Volatiles and substrate are shaded by their respective albedos. Pluto is tilted by the subsolar latitude. When plotted as a movie, the size of Pluto varies with the inverse of the heliocentric distance. Bottom left: graphical depiction of the seasonal volatile evolution. The shape of the orbit is in scale with Pluto's eccentricity. The 12 light and dark gray "pie pieces" mark out equal durations in the orbit, with the sun at the vertex of the pie pieces. The circles represent Pluto as seen with a zero sub-observer latitude. The pole is a squat bar running behind the circles. The circles and the pole bar are oriented so that the pole is perpendicular to the Pluto-sun line at the two equinoxes and so the summer hemisphere is oriented toward the sun. The red line and the circle outlined in red represent Pluto's position and state at the listed year (2014.6). Within the circles, lighter gray shows the location of volatiles, and darker gray shows substrate. Lower right: 1 vty16_fig5_7, which calls res = pluto_mssearch_func(run, av, ev, as, es, ti, mvtot, n_off, res_all) vty16_plutostill_mssearch_func, run, res, yr_still vty16_pluto_mssearch_resub_mat, flag_frostslab,time_delta,n_loc,n_z,emis,temp_surf,eflux_sol,mass_slab,specheat_frost,z_delta,z_delta_bot,dens,specheat,thermcond,eflux_int,beta,alpha_bot,alpha_mid vty16_pluto_mssearch_resub_timestep,flag_atm,freq,time_delta,gravacc,name_species,flag_stepscheme,n_loc,lat,n_z,z_delta,z_delta_bot,is_xport,angarea_delta,emis,temp_surf,eflux_sol,mass_slab,specheat_volatile,dens,specheat,thermcond,alpha_top,alpha_mid,alpha_bot,beta,denom,eflux_net,temp,temp_volatile,temp_next,temp_volatile_next,angarea_atm,mflux vty16_plutowrite_mssearch_func,run,res_all Volatile Transport II (VT3D) Surface pressure (log scale) and geometric albedo (linear) as a function of year, with the listed year marked by a small red circle. The pressure at the listed year is indicated (33 µbar), and the years of the equinoxes and solstices are marked.
In order to relate skin depth to depth with physical units, the substrate is assumed to have a density, ρ, of 0.93 g cm -3 and the skin depth, Z, is assumed to be 15 m; from the specified thermal inertia, Γ, Eq. 3.2-5 and 3.2-6 define the specific heat, c, and the thermal conductivity, k. The specific heat of the volatile, c V , (I remind the reader that the V is for volatile, not volume) is assumed to be that of N 2 (β), or 1.3e7 erg g -1 K -1 (Spencer & Moore 1992).
The run is initialized with the entire surface of Pluto covered with N 2 at aphelion, and the initial surface and subsurface temperatures are calculated assuming that the entire surface was volatile-covered and interacting over the previous Pluto year. The solar forcing is calculated assuming orbital elements of eccentricity of 0.254, inclination of 23.439°, Longitude of Ascending Node of 43.960°, argument of perifocus of 183.994°, last periapsis at Julian date 2447899.597, mean motion of 0.00392581°/day, a semi-major axis of 39.79700 AU, and a pole with right ascension 132.993° and declination -6.163° (see code for full precision). The diurnally averaged absorbed solar flux was calculated at 240 time steps over Pluto's orbit, at each of 60 latitude bands, and expanded to M = 2 (constant and two sinusoidal terms). The initial temperature field is calculated from the sinusoidal expansion of the absorbed solar flux, assuming a flux from the interior of 6 erg cm -2 s -1 . This follows the prescription of Section 5.2, except that the atmospheric "breathing" term is ignored (it is small on the seasonal timescales, Young 2013). The substrate uses a "medium" grid, with 19 layers of width 0.4 Z, where Z is the skin depth. The top layer is half that, or 0.2 Z.
Conclusions
A variety of mathematical techniques for speeding up thermophysical models or volatile transport models have been presented. They include an improved initial condition, an implicit time-step step scheme, and a matrix formulation that allows for the calculation of several locations at once. These can be used separately or in combination.
This formulation described here has been previously applied to Pluto's diurnal cycle with volatile distributions and albedos that vary with both latitude and longitude (Young 2012a). The speed gains allowed me to perform a wide parameter-space search of Pluto's seasonal cycle in anticipation of New Horizons (Young 2013). This work has also been used to study KBO seasons (Young and McKinnon 2013) and the first Pluto volatile transport models to include an N 2 reservoir .
Other routines are taken from elsewhere in the layoung IDL library, and from the astron library. | 19,206.6 | 2015-11-18T00:00:00.000 | [
"Physics",
"Environmental Science"
] |
On particle number fluctuations in an interacting pion gas with dynamically fixed number of particles
We consider a hot isospin-symmetric pion gas with the dynamically fixed number of particles in the model with a λφ4 interaction. In the thermodynamic limit, for temperature above the critical value for the Bose-Einstein condensation we calculate the effective pion mass, the chemical potential and the normalized variance. In contrast to the ideal gas, the normalized variance remains finite in the critical point of the Bose-Einstein condensation.
Introduction
More than a thousand of pions can be produced in central heavy-ion collisions at RHIC and LHC energies [1]. The measured pion spectra show an enhancement at low transverse momenta, p T m π , where m π is the pion mass, whereas spectra of harder pions are approximately exponential, cf. [2]. The latter circumstance allows to assume that pions form a hot expanding fireball characterized by the temperature T ( r, t) and density n( r, t). The kinetic freeze-out temperature is estimated to be T kin ∼ 100−120 MeV. An estimate [3] shows that at T ∼ 120−140 MeV processes of the pion absorption are rather suppressed, whereas re-scattering processes are still important. Thus, one may assume that the total pion number is dynamically fixed on a time scale from a chemical freeze out, when the total pion number becomes frozen, till the thermal freeze-out, when the momentum distributions stop changing [4]. The existing estimates for the typical pion density in the pion fireball are contradictory [5,6] yielding values varying in a broad interval, for LHC from n ∼ 0.8 to 2.5n 0 , where n 0 is the nuclear saturation density, whereas from measured volume in HBT correlations and pion yield [1] follow much higher values.
Already the first estimates [7,8] have demonstrated that experimental low-p T pion yields in collisions at 200 AGeV can be fitted if the pion chemical potential ∼ 120−130 MeV is introduced. Recent work [5] evaluated the non-equilibrium chemical potential of pions as µ ≃ 134.9 MeV, being very close to the critical one (µ = m π ≃ 135 − 140 MeV) for a Bose-Einstein condensation (BEC) in an ideal pion gas. Non-equilibrium supercooling effects [9,10,11] as well as decays of resonances [12] may provide effective mechanisms to drive the pion system to the Bose-Einstein condensation (BEC). Reference [9] studied a possibility of the pion BEC in the interacting relativistic pion gas with a λϕ 4 interaction under the assumption that the number of pions created in the course of ultrarelativistic heavy-ion collisions is (dynamically) fixed in the time interval between the chemical and thermal freeze-outs. The most central collisions with high pion multiplicity were proposed as the most preferable to observe effects of the pion BEC. Utilizing the Weinberg Lagrangian, the investigation of the pion BEC was continued in [13,14] Reference [14] incorporated a possibility of a change of the initial isotopic composition due to π + π − ↔ π 0 π 0 reactions. Reference [11] demonstrated that before the formation of the BEC the interacting pion gas in ultrarelativistic heavy-ion collisions may pass several stages, including a wave-turbulence stage, effects studied in [15] in relation to the BEC in application to nonrelativistic gaseous systems. A growth of fluctuations near the critical point of the second-order and first-order phase transitions is a general property manifested in various critical opalescence phenomena, cf. [16]. References [17,18] returned to consideration of the ideal pion gas and argued for the divergence of the normalized variance in the critical point of the BEC in the thermodynamic limit. The result was supported by the analysis in the micro-canonical approach. An enhancement of the normalized variance was observed in the high pion multiplicity events in pp collisions in the energy range 50-70 GeV [19]. Here, a care should be taken, when one compares naive theoretical expectations for thermal fluctuation characteristics with actual measurements, which incorporate background contributions, the dependence on center-of-mass energy, other dynamical effects, collision centrality, kinematic cuts, etc.
In this contribution we will compute normalized variance in the non-ideal hot pion gas with the dynamically fixed number of particles considered in thermodynamical limit.
The 4-point interaction among the fields ϕ included in the Lagrangian (1) modifies the pion properties and the pion-pion interaction in medium. The resulting in-medium retarded pion propagator is determined by the Dyson equation where the thin line is the free pion retarded propagator G 0 (ω, k) = 1/(ω 2 − k 2 − m 2 π + i0), and ω and k stand for the frequency and momentum of the pion, and Π is the full pion retarded polarization operator determined by the diagrams shown in (2). The first diagram in (2) is the "tadpole" diagram and the second one is the "sandwich" diagram. Now we will consider the pion system with the dynamically fixed particle number. To do this we turn in the Lagrangian (1) to the new complex fields ϕ − , ϕ + , ϕ 0 corresponding to pions with the positive frequencies π − = ϕ − + ϕ + † , π + = ϕ + + ϕ − † , and π 0 = ϕ 0 + ϕ 0 † . In a system with the fixed and, in general, different numbers of pions of each species the particle-anti-particle symmetry is lost, and therefore, it is possible that ϕ − = ϕ † + , and neutral pions are described by a complex field ϕ 0 . In the second quantization new fields of π − , π + , π 0 mesons possess the following operator representations in the quasi-particle The operatorsâ,b andĉ define now the numbers of pions of the corresponding denotes averaging over the vector of state of the quantum system with the Gibbs weight factor. The mean valueâ , is determined by the Bose distribution with frequencies ω i and chemical potentials µ i fixing the particle numbers N i .
Replacing new fields in the Lagrangian (1), we are able to separate terms, which correspond to the particle number conservation. In terms of the new fields the Lagrangian density (1) is Here the terms L π i = |∂ϕ i | 2 − m 2 π |ϕ i | 2 − λ|ϕ i | 4 describe the interaction of pions of a certain sort and we neglected a difference in pion masses for various species, the term describes the interaction of the different pion species with each other via reactions π + π − ↔ π + π − and π ± π 0 ↔ π ± π 0 . These processes conserve the total number of pions and also the number of pions of each sort. Note that the pion self-interaction contained in the terms L fix gives a contribution to the pionic polarization operator already in a first-order in the coupling constant λ. Such a contribution is depicted by the tadpole diagram in (2). The part of the Lagrangian density corresponds to the processes π 0 π 0 ↔ π + π − , which, while keeping the total number of pions fixed, change the relative fractions of the pion species so that the chemical potentials obey the relation 2µ 0 = µ + + µ − . For the case of the isospin-symmetric pion gas, that we will focus on in this contribution, µ 0 = µ + = µ − ≡ µ . The term containing non-equal numbers of the pion creation and annihilation operators, L 3↔1 = L(ϕ † i ϕ j ϕ k ϕ l ; ϕ i ϕ † j ϕ † k ϕ † l ), i, j, k, l = +, −, 0, corresponds to the processes with a change of the number of pions in the system, π ↔ πππ. These processes bring the system to the chemical equilibrium. The kinetic equation for the pion gas allowing for these reactions has the Bose distribution with vanishing chemical potential as an equilibrium solution.
The terms L 2↔2 and L 3↔1 give contributions to the pion polarization operator only in the second order of the coupling constant. They are represented by a sandwich diagram in (2) with different pion species in the internal lines allowed by charge conservation. The direction of the internal lines takes also into account the distinct processes: ππ ↔ ππ (two lines are directed to the one side and one line to the other side) and π ↔ πππ (all lines go from the left to the right). For T m π of our interest the real part of the sandwich diagram proves to be small in comparison with the tadpole contribution. Thereby we omit the former contribution. The imaginary part of the sandwich diagram determines the rates of rescattering and absorption/production reactions. As we have discussed, the processes ππ ↔ ππ responsible for the thermal equilibration occur essentially faster then processes π ↔ πππ responsible for the chemical equilibration at T m π . Therefore, we drop furthermore the term L 3↔1 of the Lagrangian, introducing instead the non-vanishing pionic chemical potentials fixing the pion numbers determined by reactions π + π − ↔ π 0 π 0 . Doing so we assume that these reactions are operative on the time scale of the pion fireball expansion up to its break up. Being armed by the effective Lagrangian (3) describing the system with the fixed total particle number and an arbitrary electric charge, we can study its properties.
Pion spectrum in self-consistent Hartree approximation
Varying the Lagrangian (3) with respect to ϕ † + , ϕ † − , ϕ † 0 fields we obtain a system of coupled nonlinear equations of motion. For isospin-symmetric system under consideration ϕ † + = ϕ † − = ϕ † 0 ≡ ϕ and we have only one equation of motion. We solve this equation within the self-consistent Hartree approximation, which is rather appropriate at temperatures of our interest, T m π . Within this approximation the behaviour of a certain particle is determined by an averaged interaction with surrounding particles, which form a thermal bath. The properties of particles in the bath are, in turn, determined by the same equation of motion as that for the given pion (cf. bold line in (2)).
Formally, we represent the field ϕ as a superposition of some picked-out fieldφ and an environmental field ξ, ϕ →φ + ξ . Then we keep in the equation of motion only the terms that are linear in theφ and quadratic in ξ (other terms vanish after averaging). As the result we find equation of motion for the fieldφ in the Hartree approximation with the spectrum where the polarization operator and the total pion density n = 3 f k . The non-commutativity of the creation-annihilation operators produces divergent contributions of quantum fluctuations to the macroscopic characteristics of the system, e.g. in (7), which have to be renormalized by subtraction of the corresponding vacuum values. The remaining finite contributions from quantum fluctuations are proved to be rather small and we omit them. When the temperature decreases up to the value T ind c there appears an "induced" BEC. It occurs for µ = m * . Following [14], we say about induced BEC, since the BEC could also occur in a first-order phase transition at a higher temperature, when µ reaches the free pion mass, m π , if it were energetically favorable. Any first-order phase transition requires an extra time and may occur only in rare events. Therefore, we will ignore this possibility in our study here.
The dependence of the effective pion mass m * and the chemical potential µ on the temperature in the interacting pion gas for the isospin-symmetric matter is shown on the left panel in figure 1. On the middle panel we demonstrate the critical temperature of the induced BEC as a function of the pion gas density. We see that the effective pion mass is larger than the free pion mass and increases with a decrease of the temperature and an increase of λ. Also the mass increases with an increase of the density. The critical temperature T ind c for the interacting pion gas is smaller than that for the ideal gas.
Normalized variance of the particle number
In the case of the uniform system the normalized variance of the particle number N is given by [21] (∆N ) 2 N = T n ∂n ∂µ T . Using parial integrations for the isospin-symmetric pion gas we get where dimensionless integrals are In the critical point of the induced pion BEC, i.e. for T = T ind c , we have µ = m * , for λ = 0, and the distribution behaves as f k → 2m * T /k 2 for k → 0. Using these limiting expressions we obtain that I 1,2,3 diverge. Separating the divergent parts we have for k 0 → 0, where δI 1,2,3 are already convergent terms. For the non-ideal gas perturbation expansion in λ is not valid for any finite λ = 0, since the integrals in denominator in (9) diverge. However the self-consistent account of the interaction in the Hartree approximation utilized here leads to the finite result For the ideal pion gas, λ = 0, at T = T id c one has µ = m π and f k → 2m π T /k 2 for k → 0. Thus, On the right panel in figure 1 we show the normalized variance of the pion number (9) for the interacting pion gas (for λ = 1 and 2) and for the ideal gas (λ = 0) as functions of a temperature for three values of the density. For the ideal gas the normalized variance diverges at the critical temperature of the BEC in agreement with the statement of [18]. Therefore, the authors [18] suggested that the strong enhancement of the normalized variance can be considered as a clear signature of the BEC although effects of the finite volume smear the singularity. On the contrary, we see that for the interacting gas even in the infinite volume limit the singularity disappears provided the interaction is taken into account self-consistently (within the self-consistent Hartree approximation in our current study).
Finally, we would like to remind that the comparison of the idealized thermal fluctuations with those appeared in real experiments is in any case quite indirect because of many additional effects not taken into account, like a finite volume, realistic pion-pion interaction, the dependence on the collision energy, collision centrality, kinematic cuts, and other dynamical effects. Thus, we may only say that, if a significant growth of various pion number fluctuation characteristics were observed, it could be associated with a proximity of the system to the pion BEC either at the chemical freeze-out or at the thermal freeze-out, depending on the specifics of the measurements. Measurements of the normalized variance and higher moments like sckewness and kurtosis in pre-selecting high multiplicity events are desirable in order to observe a manifestation of the signatures of the Bose-Einstein pion condensation in heavy-ion collisions. | 3,539.2 | 2017-12-01T00:00:00.000 | [
"Physics"
] |
Estimation and Synthesis of Reachable Set for Singular Markovian Jump Systems
The problems of reachable set estimation and state-feedback controller design are investigated for singularMarkovian jump systems with bounded input disturbances. Based on the Lyapunov approach, several new sufficient conditions on state reachable set and output reachable set are derived to ensure the existence of ellipsoids that bound the system states and output, respectively.Moreover, a state-feedback controller is also designed based on the estimated reachable set. The derived sufficient conditions are expressed in terms of linear matrix inequalities. The effectiveness of the proposed results is illustrated by numerical examples.
Introduction
The research on singular systems has attracted significant attention in the past years due to the fact that singular systems can better describe a larger class of physical systems such as robotic systems, electric circuits, and mechanical systems.When singular systems experience abrupt changes in their structures, it is natural to model them as singular Markovian jump systems [1,2].The analysis and synthesis of such class of systems have gained considerable attention because of their importance in applications (see, e.g., the literature [3][4][5][6][7][8][9][10] and the references therein).
Reachable set is one of the important techniques for parameter estimation or state estimation problems [11].Reachable set for a dynamic system is the set containing all the system states starting from the origin under bounded input disturbances.However, the exact shape of reachable sets of a dynamic system is very complex and hard to obtain; for this reason a number of researchers began to turn their attention to the reachable set estimation problem.The common strategies for reachable set estimation are ellipsoidal method [12] and polyhedron method [13].The main idea of these methods is to detect simple convex shapes like ellipsoid or polyhedron, which contains all the system states.Compared with polyhedron method, the primary advantage of ellipsoidal method is that the ellipsoid structure is simple and directly related to quadratic Lyapunov functions.As a result, linear matrix inequalities (LMIs) techniques can be used to determine bounding ellipsoids.In the framework of bounding ellipsoid, the reachable set estimation problem for linear time delay systems has received significant research attention in recent years.In [14] sufficient conditions for the existence of bounding ellipsoids containing the reachable set of continuous-time linear systems with time-varying delays were derived by using the Lyapunov-Razumikhin function.
In [15], by using Lyapunov-Krasovskii functional method, the author derived some less conservative conditions than those in [14].In [16] the reachable set of delayed systems with polytopic uncertainties was investigated by using the maximal Lyapunov-Krasovskii functional approach, and some new conditions bounding the set of reachable states are derived.Interesting results on reachable set of delayed systems with polytopic uncertainties can also be found in [17][18][19][20].In addition, some other strategies without using Lyapunov-Krasovskii functional have been provided to estimate the reachable set of continuous-time linear time-varying systems [21] and nonlinear time delay systems [22,23].The authors in [24] extended the ideas of reachable set estimation of continuous-time systems to discrete-time systems, wherein a fundamental result (Lemma 2.1 [24]) for the reachable set estimation of discrete-time systems was proposed.The authors in [25] improved the fundamental result obtained 2 Complexity in [24] and provided a basic tool (Lemma 4 [25]) for the reachable set estimation of discrete-time systems.On the basis of the general ideas proposed in [25], the reachable set estimation problem was also extended to some classes of complicated systems, such as singular systems [26], Markovian jump systems [27], switched linear systems [28], and T-S fuzzy systems [29,30].For the reachable set estimation of discrete-time systems, the other important contributions can be found in [31,32].On the other hand, the problem of controller design for specifications involved with the reachable set of a control system is also a very important issue [33].The controller design problems concerning reachable set were studied in [34] and [35] by using ellipsoidal method and polyhedron method, respectively.Two issues were raised in [34]: the first one is to design a controller such that the reachable set of the closed-loop system is contained in an ellipsoid, and the admissible ellipsoid should be as small as possible; the second one is to design a controller such that the reachable set of the closed-loop system is contained in a given ellipsoid.By constructing suitable Lyapunov-Krasovskii functional, LMI-based sufficient conditions for the existence of controller guaranteeing the ellipsoid bounds as small as possible have been derived for continuous-time delay systems [34] and discrete-time periodic systems [36].It is obvious that the LMI-based controller design is quite simple and numerically tractable.However, it should be pointed out that the reachable set estimation and synthesis problems of singular Markovian jump systems are much more difficult and challenging than that for nonsingular Markovian jump systems since the ellipsoid containing the reachable set is not directly related to quadratic Lyapunov functions.To the best of the authors' knowledge, no related results have been established for reachable set estimation and synthesis of singular Markovian jump systems, which has motivated this paper.
In this paper, we consider the problems of reachable set estimation and synthesis of singular Markovian jump systems.By using the Lyapunov approach, the estimation conditions on state reachable set and output reachable set are derived, respectively.Moreover, the desired state-feedback controller is designed based on the estimated reachable set.
Notation.Throughout this paper, R denotes the dimensional Euclidean space; represents the transpose of ; Sym() stands for + ; > 0 (<0) means is a symmetric positive (negative) definite matrix; E{⋅} refers to the expectation; × denotes the matrix composed of elements of first rows and columns of matrix ; ‖ ⋅ ‖ refers to the Euclidean vector norm; the symbol " * " in LMIs denotes the symmetric term of the matrix; is the unit matrix with appropriate dimensions.
In this paper we are interested in determining ellipsoids that contain, respectively, the state reachable set and output reachable set.In the reachable set analysis, it is required that systems should be asymptotically stable.When this requirement is not met, we will further design a statefeedback controller such that the reachable set of the closedloop system is contained in the smallest ellipsoid.
The state reachable set of the free system in (1) is defined by An ellipsoid E() bounding the reachable set can be always represented as follows: Particularly, when = for ∀ > 0, the ellipsoid E() will become a ball which is denoted by B().
Since rank() = < , there exist two nonsingular matrices and such that Let . Then the free system can be rewritten as the following differential-algebraic form: The following definition and lemma are also useful in deriving the main results.
Definition 1 (see [2]).(I) The free system is said to be regular if det( − ) is not identically zero for each ∈ S.
(II) The free system is said to be impulse-free if deg(det( − )) = rank() for each ∈ S.
State Reachable Set Estimation.
In this subsection, we will focus our attention on determining a ball which contains the state reachable set of the free system.
Theorem 4. If there exist nonsingular matrices ∈ R × and a scalar > 0 such that the following LMIs hold for each ∈ S, then the state reachable set of free system starting from the origin is mean-square bounded within the following set: where Proof.We first prove the regularity and nonimpulsiveness of the free system.Let Then, by (10), we obtain that 12 = 0. From (11), it is easy to show that Pre-and postmultiplying ( 14) by and , respectively, we get where ⋆ will be irrelevant to the results of the following discussion; thus the real expressions of these two variables are omitted.It follows from (15) that which implies that 22 is nonsingular for each ∈ S. Therefore, by Definition 1, we have that the free system is regular and nonimpulsive.
It should be noted that inequality (10) represents a nonstrict LMI.This may lead to numerical problems since equality constraints are usually not satisfied perfectly.Below, we will develop a numerically tractable and nonconservative LMI condition.Theorem 5.If there exist symmetric positive definite matrices ∈ R × , nonsingular matrices , and a scalar > 0 such that the following LMI holds for each ∈ S, then the state reachable set of free system is mean-square bounded within the following set: where ∈ R (−)× is any matrix with full row rank and satisfies = 0; ∈ R ×(−) is any matrix with full column rank and satisfies = 0.
Remark 6.In order to make the ellipsoid E( P ) as small as possible, we require trace( P ) → max.For this purpose, we can add the additional requirement P > and then maximize a positive scalar , which is equivalent to the following minimization problem: where = 1/.
Output Reachable
then the output reachable set of free system is mean-square bounded within the following set: where Ψ , , , and P are defined in Theorem 5.
Remark 8.The output reachable set is also expected to be as small as possible.To achieve this goal, we first solve LMI (30) and get P satisfying trace( P ) → max, which can be implemented by using (29).Then we add the additional requirement > and maximize a positive scalar , which is equivalent to the following minimization problem: where = 1/.
State-Feedback Controller Design.
In this section, we turn our attention to the state-feedback control problem.Our goal here is to find a state-feedback controller, which not only stabilizes the closed-loop system, but also makes the ellipsoid bound on the reachable set of closed-loop system as small as possible.Now, consider the state-feedback controller () = (), where is a gain matrix to be determined later.By using this controller, the closed-loop system can be obtained as where à = + .
Theorem 9. Consider singular Markov jump system (1).If there exist nonsingular matrices ∈ R × , matrices , and scalars > 0, > 0 such that the following LMIs hold for each ∈ S, then the reachable set of system ( 1) is mean-square bounded within the set , and the desired controller gain matrix is given by = −1 , where Proof.Denote = −1 and = for each ∈ S.Then, pre-and postmultiplying (39) by diag( , , ) and its transpose, respectively, we obtain where Using Lemma 2, we have From ( 41) and (42), it is easy to obtain that where . By Schur complement, the previous matrix inequality becomes where where Φ = Sym( Ã ) + ∑ =1 + .Pre-and postmultiplying (37) by and , respectively, we obtain From the above discussion, we show that if (37)-(39) hold, then (45) and (46) hold.Thus, it follows from Theorem 4 that the closed-loop system can be stabilized by the designed statefeedback controller.
Numerical Examples
In this section, two numerical simulation examples are given to show the effectiveness of the main results derived above.
The switching between two modes is described by the following transition rate matrix: In this example, we choose = , = .By solving optimization problem (29) with the aid of fminsearch, the minimal and the corresponding are 0.1312 and 1.7419, respectively.Using the above parameter values, we can obtain P1 = (1/r 2 1 ) = 2.1414 and P2 = (1/r 2 2 ) = 0.9992 by solving (26).Owing to Theorem 5, the state reachable set R is meansquare bounded within the set B( P1 ) ∩ B( P2 ) = B( P1 ).
By applying Theorem 7, we have the following results: Therefore, the output reachable set of free system is meansquare bounded within the set B( 1 ) ∩ E( 2 ).
For simulation we assume that 0 = [0.20.1303] and the disturbance is chosen as () = sin().A case for stochastic variation with transition rate matrix Π is shown in Figure 1.The state reachable set R and the ball B( P2 ) are depicted in Figure 2. Figure 2 shows that the trajectory of the system is mean-square bounded within the region B( P2 ).
The output reachable set R , the ball B( 1 ), and the ellipsoid E( 2 ) are depicted in Figure 3. Figure 3 shows that the output reachable set R is mean-square bounded within the region B( (59) The switching between two modes is described by the following transition rate matrix: (62) The corresponding parameter values r2 1 and r2 2 are, respectively, 27.7747 and 314.7993, which imply 1 = (1/r 2 1 ) = 0.0360 and 2 = (1/r 2 2 ) = 0.0032.Applying this controller makes the state reachable set of closed-loop system (36) mean-square bounded within the region B( 1 ).
For the purpose of the simulation, we assume the initial condition 0 = [0.30.9763] and the disturbance is chosen as () = sin(0.2).Figure 4 shows one possible switching between two modes.Figure 5 depicts the state reachable set of closed-loop system (36).
Conclusions
This paper has dealt with the problems of reachable set estimation and state-feedback controller design for singular Markovian jump systems.New sufficient conditions for the state reachable set estimation and output reachable set estimation have been, respectively, derived in terms of linear matrix inequalities.Based on the estimated reachable set, the state-feedback controller has also been designed.Numerical examples and simulation results have been provided to demonstrate the effectiveness of the proposed methods.
Figure 2 :Figure 3 :
Figure 2: The state reachable set R and the bounding ball B( P1 ). | 3,112.2 | 2018-06-26T00:00:00.000 | [
"Engineering",
"Mathematics"
] |
Evaluating the academic trend of RFID technology based on SCI and SSCI publications from 2001 to 2014
Radio frequency identification (RFID) is one of the most influential technologies of the twenty-first century. Today, RFID technology is being applied in a wide array of disciplines in science research and industrial projects. The significant impact of RFID is clearly visible by the rate of academic publications in the last few years. This article surveys the literature to evaluate the trend of RFID technology development based on academic publications from 2001 to 2014. Both bibliometric and content analyses are applied to examine this topic in SCI-Index and SSCI-Index documents. Based on the bibliometric technique, all 5159 existing RFID documents are investigated and several important factors are reviewed, including contributions by country, organizations, funding agencies, journal title, authors, research area and Web of Science category. Moreover, content analysis is applied to the top 100 most cited documents and based on their contents, these top 100 documents are classified into four different categories with each category divided in several sub-categories. This research aims to identify the best source of the most cited RFID papers and to provide a comprehensive road map for the future research and development in the field of RFID technology in both academic and industrial settings. Six key findings from this review are (1) the experimental method is the most popular research methodology, (2) RFID research has been a hot area of investigation but will branch out into related subset areas, (3) South East Asia is positioned to dominate this research space, (4) the focus of research up to now has been on technical issues rather than business and management issues, (5) research on RFID application domains will spread beyond supply chain and health care to a number of different areas, and (6) more research will be related to policy issues such as security and privacy.
of Malaysia).The other indexing categories of the WoS such as Art & Humanities Citation Index (A&HCI) and conference proceedings were excluded from the investigation process.As mentioned, all analysis and perceptions are only based on academic publications (SCI and SSCI databases) and other information and publications such as published documents in other databases, industrial reports, commercial news, general information resources, and white papers and so on were not involved in this study.
To achieve the above objectives, two different strategies were applied in this research: bibliometric analysis and content analysis.In this order, the SCI and SSCI databases were systematically searched for RFIDrelated materials published from 01 January, 2001 to 31 December, 2014, with an update from the databases on 31 March, 2015.All documents which included "RFID" or "radio frequency identification" in the title, abstract or keywords were captured as presented at the attached file (AttachmentI_RFID_RawData.xlsx).The bibliometric analyses quantitatively investigate all document characteristics such as country of origin, organization affiliation, funding agency, journal, year published, research area, WoS category, author, and number of times cited.The first strategy of gathering documents related to "RFID" or "radio frequency identification" provides comprehensive historical information related to RFID technology.Based on the number of published documents, we quantitatively ranked the different characteristics of the documents, but to provide a clear view of RFID academic trends we also considered qualitative parameters of the documents including citation, Avg citation, self-citation and h-index as defined below:
Citation: A citation is a reference to a published document.We used only the SCI and SSCI databases such that to count as a citation, both cited and citing documents must be documented in these databases, and other citations are not considered.
Avg.citation: The average number of citations, which is the total citations of all documents in a category, divided by the number of documents in that same category.
Self-citation: a citation where the cited and citing documents share at least a same author, a same journal, or a same category (Couto, Grego, Pesquita, & Verissimo, 2009).
h-index: The h-index "gives an estimate of the importance, significance, and broad impact of a scientist's cumulative research contributions".The h-index is defined as the number of papers, h, with citations greater than or equal to h.For example, an h-index of 25 indicates that the author has at least 25 papers with at least 25 citations each (Hirsch, 2005).
To give the right picture of academic trends in this area, in addition to the two main terms (RFID and radio frequency identification) several other RFID-related keywords were also investigated and analyzed including Internet of Things (IoT), wireless sensor networks, ubiquitous computing, automatic identification, wireless communication, Near Field Communication (NFC), Global Positioning System (GPS), Electronic Product Code (EPC), and Ultra High Frequency (UHF).
The second strategy we used was to analyze the contexts of the 100 most-cited documents.To find the top 100 documents, we ranked all 5,159 captured documents from most to least cited (An excel file is attached as a supplementary data to show the all 5,159 documents' information, under the file name: AttachmentI_RFID_RawData.xlsx).Citation analysis was based primarily on the impact factor as defined by the Journal Citation Reports (JCR) and on Citation per Publication (CPP), which are used to assess the impact of Post-Print version of: Shakiba, M., Zavvari, A., Ale Ebrahim, N., & Singh, M. J. (2016).Evaluating the academic trend of RFID technology based on SCI and SSCI publications from 2001 to 2014.Scientometrics 1-24.http://dx.doi.org/10.1007/s11192-016-2095-y 4 journals.In this paper, we sort the documents based on average citations per year, which is defined as the ratio of the number of citations the publication has received to the length of time since publication (Chao, Yang, & Jen, 2007).The full text of the top 100 most-cited articles were carefully reviewed, not only using bibliometric analysis, but each article was classified using a single category for any diversity characteristic.To help frame a series of research agenda items related to RFID, we will briefly consider the top 100 cited papers in four different categories: RFID Technology, RFID Applications, Policy Issues, and Others.For each category, we further assign articles to a sub-category.These four categories and sub-categories are defined based on the contexts of the investigated documents.To minimize the errors, two of the four researchers in the team independently classified each of the articles.If there was a difference in the selection of the two researchers, then the article in question was discussed until an agreement was reached as to whether it should be included in the final set (E. Ngai, Moon, Riggins, & Yi, 2008).
Bibliometric Analysis of RFID Publications
As mentioned before, the keywords "RFID" and "radio frequency identification" were used to search in the SCI and SSCI databases in the range of years 2001 to 2014.A total of 5,159 documents were found which in total have 45,568 citations.In this section, we present the bibliometric analyses we applied to provide future researchers and investigators a general road map of RFID academic trends.In this case, we analyzed all 5,159 discovered documents and the top 100 most-cited documents using a trend analysis for the distribution status (AttachmentII_RFID_AnalysisData.xlsx).
Distribution by country
Based on analyzing the attached excel file (AttachmentI_RFID_RawData.xlsx), and as shown in the "Attachment II, organizations are from the South East Asia, and only 3 institute from USA, and just a single university from the Europe, which is need to be considered.
Distribution by funding agency
The next important subject matter that we can extract form the raw data (Attachment I) is the top organizations that financially supported RFID research projects based on the number of publications.Funding agencies are the government or non-government organizations that provide research and academic funding in the form of research grant or scholarships.These data were extracted based on the acknowledgement parts of the documents and only agencies acknowledged are noted.As illustrated in the "Attachment II, Table III_FundingAgencies" National Natural Science Foundation of China with 311 publications is ranked as a top funding agency which is followed by National Science Council, Taiwan with 168 publications and European Commission with 164 publications in the second and third rank.
Distribution by source title
All published RFID documents existing in the SCI and SSCI databases are published in 1,104 different journals.
As illustrated at the "Attachment II, Impact factor = ℎ Publications such as lecture notes or conference proceedings do not have impact factors and are therefore listed as being indexed in the mentioned databases.Based on impact factor, journals are categorized in four different quartiles, with Q1 being the top 25% and so on.Based on the impact factor, the top journal in contribution with RFID is Nature with 42.351 impact factor.It is interesting that the highest impact journal published the highest cited paper entitled "Ultralow-power organic complementary circuits" by Hagen Klauk, Ute Zschieschang, Jens Pflaum, and Marcus Halik from Germany in 2007.That paper has received 700 citations making it the most cited paper in the RFID literature.However, that is the only RFID-related paper Nature published during that time due to Nature's wide topical scope.
Distribution by author
Top 10 authors who publish papers related to RFID with their publication number, citations, average citations per document, self-citation and h-index is extracted from the raw data and is available at the "Attachment II, Post-Print version of: Shakiba, M., Zavvari, A., Ale Ebrahim, N., & Singh, M. J. ( 2016).Evaluating the academic trend of RFID technology based on SCI and SSCI publications from 2001 to 2014.Scientometrics 1-24.http://dx.doi.org/10.1007/s11192-016-2095-yTable V_Authors".As shown, Leena Ukkonen, professor at Tampere University Technology from Finland is the top author in the field of RFID based on the number of publications with 49 RFID papers.Of the 5,159 published documents in the SCI and SSCI databases, there are 142 documents by anonymous authors.In total, there are 17,907 authors who have published within these database listings.The maximum number of authors for one paper is 24 authors and on average there are 3.44 authors for a single paper.Attached "Attachment II, Table V_Authors" ranks the authors based on the number of publications (quantitatively investigate the authors).But based on the quality of documents, form the attached table, only Manos M. Tentzeris with 3 high cited paper and Gaetano Marrocco with 1 high cited paper have papers in top 100 cited documents -and other authors in attached "Table V_Authors" do not have any papers in the top 100 cited documents.On the other hand, there are authors who focus more on quality than quantity.For example, K. V. Seshagiri Rao from Intermec Technology Corp, USA has published only 10 documents but with an average citation 78.2 and with 3 papers in the top 100 cited documents or Ari Juels from Cornell University, USA has published 12 RFID documents and an average citation 73.58, with 3 papers in the top 100 cited documents.It is also notable, where the Juels and Rao have only 1% and 1.7% self-citation respectively, the authors listed at attached "Table V_Authors", such as Piramuthu, Smail, and Zheng have a high rate of self-citation on average per document with 21.32%, 18.64% and 18.08% respectively.
Distribution by research area
There are some research areas defined by Thomson Reuters based on the topics and the scopes of the journals.
Journals publish documents in different research areas, where some journals publish in more than one research area.Top 20 research areas in contribution to the RFID literature where there are 103 different research areas with contributions to RFID are extracted from the raw data and illustrated at the "Attachment II, In section 3, several characteristics of the RFID published documents were analyzed, and all the discussed characteristics information as a raw data (Attachment I) and as analysis data (Attachment II) are available.
Attachment II involve some additional information, which analyzed some other characteristics of the documents.
Classification based on research methodology
We categorize the top 100 most-cited papers into nine different categories based on the research methodologies applied in the articles.Here, we describe all nine methodologies used to prepare the articles.The methods are sorted based on the frequency used across the 100 papers.
i) Experimental:
Experiments are an essential part of the scientific method in the physical sciences such as engineering.In experimental method, the proposed idea or theory are physically implemented and tested under particular considerations and environmental effects.
iii) Simulation: Imitation of real world operation, normally based on some assumptions and done with software simulators.
iv) Case Study:
Investigate and analyze an event, project or system in a real life situation to explore the causation regarding discovery of fundamental principles.
v) Theoretical Analysis:
An idea is analyzed theoretically based on the mathematics, physics, or electronic principles and is not implemented or simulated.
vi) Propose a Model:
Proposal of a new idea and creation of a model based on the idea; there is no experimental result, but includes a discussion about the theoretical model results.
vii) Introduction:
General information regarding the topic, but does not review any methods and previous ideas.
Classification based on the subject of study
The 100 most-cited papers were investigated in detail based on the subject of the studies.In this regard, we classified the articles within the four different major subject groups.Table 7 gives a summary of all 100 top cited articles in the classification scheme.For each paper, we list the total number of citations followed by the average citations per year.This should be a helpful resource for those researching RFID papers.
RFID Technology
Readers, tags, antennas, software, middleware, and computing software are included in RFID based systems.
Based on the reviewed papers, RFID Technology has been divided into the following three sub-categories:
i. Tag design and fabrication
RFID systems are integrated with tags.RFID tags are divided into active, passive and semi-passive tags.Active tags having an on-board power battery and can transmit their IDs without require to the external power supplier.
On the other hand, passive tags have not an on-board energy source, therefore they response back the readers' signals, where the signals supply the energy of the ID transmission process and any further communications.
Moreover, semi-passive tags that can act as active or passive tags, where they have their own on-board power supplier, but in the interrogation zone of the reader, they can use the readers' signals to support the energy.
Diverse RFID tags are available in the market, however, memory based integrated circuits are typically incorporated in the RFID circuits.All testing sessions, design plans, production processes, power vendors or material for RFID tags are included in this sub-classification.
ii. Communication infrastructure
The wired and wireless networks collectively form the communication infrastructure over which a series of information transfer actions take place that deliver the data stored on a tag to the reader.Research articles that include protocols and communication criteria, network connectivity matters and anti-collision algorithms are all incorporated into this particular category.
iii. Antenna design
The antenna is a key element in any RFID system.The readers and tags require the antenna to facilitate communication.Between the tag and the reader, data sharing is carried out by the antennae communication channel.Moreover, an antenna configuration holds a significant role in evaluating the coverage area and precision of a tag link.On the other hand, in passive tags' communication structure, the antenna draws power from the reader's signal to both power the tag and send data.As a result, articles focusing on RFID antennae are included in this category.
Post-Print version of: Shakiba, M., Zavvari, A., Ale Ebrahim, N., & Singh, M. J. ( 2016).Evaluating the academic trend of RFID technology based on SCI and SSCI publications from 2001 to 2014.Scientometrics 1-24.http://dx.doi.org/10.1007/s11192-016-2095-y11 Table 3 shows the number of papers belonging to the RFID Technology group.Of the 36 paper in this group, about 58 percent (21 papers) are related to tag design and fabrication, where the other two sub-categories are represented by about 22 percent and 19 percent of the publications by the technology category.
RFID Applications
Production, supply chain management, and logistics systems frequently employ RFID.However, the vision and range of applications that are using RFID is much broader than these systems.RFID is being pilot tested and deployed in many more dynamic and robust types of applications.RFID technology is a fundamental key element for implementing electronic commerce, maintaining efficient supply chain management, employing logistics tracking systems, and having effective resource management.To have successful enterprises today, it is essential to monitor all resources, products, employees, and have efficient electronic relationships with suppliers and customers.Heinrich indicates that RFID is becoming the hi-tech and thrilling next generation business tool with respect to a diversity of applications (Heinrich, 2005).The RFID Applications category is therefore further divided into a number of sub-categories based on the articles investigated here to include:
Policy Issues
These issue can be divided into sub-categories of security and privacy as shown in Table 5.Of the eight articles six (75 percent) pertain to the security sector and 2 documents (25 percent) pertain to the privacy area.
i. Security
Security of confidential data and protection from illegal access and operation are the key matters pertaining to security (Bhuptani & Moradpour, 2005).Being a wireless technology, RFID systems are subject to many security threats with illegal tracking of the RFID tags being the most significant one.Other major security hazards such as integrity, privacy, confirmation, authority, non-censure, and ambiguity can only be defeated by implementing robust and enhanced security into the system (Knospe & Pohl, 2004).
ii. Privacy
RFID related privacy issues include likely misuse of data by allowed users resulting in incursion and infringement of individual or company secrecy (Bhuptani & Moradpour, 2005).Consumer advocates are calling for greater regulation and codes of practice, especially for tags that are readable worldwide because of the risk that they pose to personal location privacy.Since everyone is concerned about privacy, RFID advocators have acknowledged that procedures such as ''kill'' functionality and confining the chip range can be incorporated into a system to avoid private data from being used illegally.Hence, a number of issues are captured in this subcategory, including RFID/human interaction, legal security, RFID protection issues, and records safety laws.
Table 5. Policy issues sub-categories publication number
Others
There are many other issues covering important aspects of RFID technology like fabrication of electronic devices, Internet of Things, general introduction of RFID systems, bibliometric studies, review papers, and research on sensor networks.As shown in the Table 6, there are 23 documents in this category, with fabrication of electronic devices being the most frequent representing more than 43 percent of the papers (10 documents), to sensor networks with only one document located at the end of this group.
Table 6. Others subjects sub-categories publication number
As previously mentioned, Table 7 presents the summary of the top 100 papers by categories and subcategories, where we list the authors, publication year, total citations, and average citations per year respectively for each paper (see the Reference section for full reference citation).We believe this table will be a useful source for those researching RFID papers.
Table 7. Classification of the reviewed literature
Limitations
The applied methodology in this research has the following limitations:
The study is only based on academic publications, where we do not investigate industry and market information regarding RFID technology.
To collect the academic publications, only the SCI and SSCI indexed documents were selected.Hence we lose the opportunity to consider high quality papers in other databases and indexes.However, while this means that the review is not exhaustive, we believe that it is comprehensive.
The quality of the papers evaluated is based on the number of citations of the paper.The citation count is also limited to the SCI and SSCI indexed documents and citations from other documents are not included.Further, in our study citation presents the quality of the paper; other criteria of the publication such as scope of study, methodology, transfer to industry and so on are not investigated when evaluating the top papers.
Discussion and Conclusion
In this review paper, we applied the bibliometric method to investigate all RFID documents in the SCI Index and SSCI Index from 2001 to the end of 2014.All 5,159 existing documents are categorized based on several different elements to show the trends in RFID research during this period of time.The most influential countries, journals, organizations and researchers are presented.Furthermore, the 100 most-cited RFID documents are investigated in detail based on their content and classified into nine different groups based on the applied research methodologies, followed by a categorization based on topic into four categories and sub-categories.
We believe the key findings of this research can be listed as follows: i.
Our review classifies the top 100 most-cited RFID articles into nine different research methodologies as shown in Table 1 and Fig 5 .Our findings illustrate that by far the most popular approach is to apply the experimental method accounting for 42 percent of the top articles.Since papers that applied the experimental methodology are based on real observed data using a controlled experiment, their results are highly reliable and replicable to other settings which helps account for their citation noteworthiness and high quality.Hence it is apparent that experimental studies are very attractive for researchers and receive more citations.In addition, the technical nature of RFID lends itself to a research mindset of experimentation to solve technical problems as the technology matures.We expect the experimental methodology will continue to dominate this area for the next decade.However, as these technical problems are addressed we also expect to see other methodologies employed more frequently to address the non-technical such as incentives to encourage adoption, best practices for usage, and approaches to achieve value maximization.To this end we expect to see more case studies, theoretical analysis, and proposed models to address these issues.
ii.According to the rates of publication in the investigation period from 2001 thru 2014, RFID had only 13 published documents in 2001 but then exhibited a sharp increase in publication activity during the study period resulting in an increase of more than a factor of 53 to reach a total of 699 published documents in 2014.An even sharper increase occurred in the citation numbers for these papers.RFID documents were cited seven times in 2002, but demonstrated more than a thousand times increase citations in 2012 and after.Therefore, we believe we can state that RFID has become a hot research area during the past ten years.Based on the publication and citation trends, we anticipate that research on RFID topics will Post-Print version of: Shakiba, M., Zavvari, A., Ale Ebrahim, N., & Singh, M. J. ( 2016).Evaluating the academic trend of RFID technology based on SCI and SSCI publications from 2001 to 2014.Scientometrics 1-24.http://dx.doi.org/10.1007/s11192-016-2095-y14 continue to increase and evolve over the next decade, but shift toward RFID-related subset areas.We expect to see a relative decline in RFID-specific research but an increase in topics related to RFID as the field matures and broadens to other areas.RFID related topics in which we expect to see increased publication activity include the Internet of Things (IoT), wireless sensor networks, ubiquitous computing, automatic identification and more application-based studies.The papers noted here, especially the most highly cited papers, should provide a strong foundation on which to build a growing body of literature in these related areas. iii.
In our review, we find that in total 75 different countries have contributed to the growing publication stream of RFID research papers.Industrialized countries have had more contributions in this regards, with the USA accounting for about a quarter of the activity due to 1,215 published documents, and 48 percent of the 100 most-cited articles.While the USA has more publications up to this time, based on "Attachment II, Table II_Organizations as engineering, computer science and telecommunication.This is understandable since our framework category with the most highly-cited publications was for RFID Technology with a focus on tag design, antennae design and development of the communication infrastructure.While the SSCI database and publication outlets related to management, business and economics are investigated in this study, the ratio of publications on technical issues is much higher than non-technical issues such as business and management issues in the existing literature stream.This is evidenced by the fact that the SCI database produced ten times more RFID-related papers than the SSCI database.The scant number of business and management related publications is surprising considering that one of the papers on the most-cited list specifically calls for, and suggests research directions for, research in this area (Curtin et al., 2007).
For example, standards development naturally lends itself to technical papers, however the role of standards is also critical for adoption by numerous parties which then allows externality benefits to occur.It is clear that if organizations are to take full advantage of RFID and related technologies these "soft" issues related to adoption, incentives, and value allocation must be examined.Some progress has Post-Print version of: Shakiba, M., Zavvari, A., Ale Ebrahim, N., & Singh, M. J. (2016).Evaluating the academic trend of RFID technology based on SCI and SSCI publications from 2001 to 2014.Scientometrics 1-24.http://dx.doi.org/10.1007/s11192-016-2095-y15 been made in this area where outlets such as Expert Systems with Applications (62 documents) and International Journal of Production Economics (54 documents) illustrate this trend.We are eager to see which other journals, particularly those in operations management and information systems, take up the mantle for advancing research on RFID and related emerging technologies.We also hope to see a modification of the perspective taken by funding agencies on this issues.Much funding is currently being funneled to the STEM fields of science, technology, engineering and math.While funding agencies such as the US National Science Foundation have funded some technical studies related to RFID, in general business and management issues, even when related to specific technologies such as RFID, are not considered a STEM-related area.Until information technology management is considered a STEM-related area research funding of these issues is likely to be sparse.Ultimately, though we expect to see a shift to more investigation of the management and business issues in the coming decade.Based on this discussion, the trend of RFID research has tended to stay in the technical area and we echo that suggestions made by Curtin, et al. (2007) of lingering security problems is a social science issue that impacts adoption.On a related issue, RFID and related wireless sensor networks have the potential to threaten individual privacy which would hamper adoption in public settings.These privacy issues need to be examined from the perspective of personal information as a private good which can be exchanged given proper incentives.All of these issues must be given more attention by researchers and we hope will result in more publications in these areas in the coming decade.
It is hoped that this examination of the existing literature stream of research related to RFID technology based on current academic publications will provide a roadmap and support the future direction of research in this area.In addition, it provides a foundation on which to build new literature streams in related areas such as the internet of things, wireless sensor networks, ubiquitous computing, and automatic identification systems.
Fig 2
Fig 2 shows the distribution by year for all published documents and the top 100 most-cited papers.Based on Fig 2, there has been a sharp increase in RFID research, insofar as it shows just under 54 times growth in number of publication from 2001 with 13 documents to 2014 with 699 published documents.On the other hand, the top 100 cited documents follows a different trend, where the major numbers of top cited documents were published in 2006, 2007, and 2008 respectively.Since the top 100 cited papers were selected based on the average number of citations per year during the investigated time period we need to consider two important facts:
Fig 2 .
Fig 2. Distribution by publication year from 2001 to 2014
Fig 3 .
Fig 3. Distribution by publication year from 2001 to 2014 for top 10 countries
Fig 4 .
Fig 4. Distribution by citation per year from 2001 to 2014
4
Content Analysis and Classification of the Top 100 Most-Cited RFID Papers Our framework includes a content-oriented classification of the RFID literature.First, we classified the top 100 most cited papers within nine different research methodologies as follows: Experimental, Review, Simulation, Case Study, Theoretical Analysis, Propose a Model, Introduction, Framework Extension and Software Development.Next, we classified each of the 100 papers based on the content and the subject of the study into four different categories: RFID Technology, RFID Applications, Policy Issues, and Others.Then each category is divided into several subcategories.The details of our classification scheme based on research methodology and subject categories are given in the following section.
viii) Framework Extension: Authors provide a framework on a special topic to explain, understand and predict a topic and to challenge existing knowledge.ix) Software Development: Based on an existing idea a new interface, middleware, or software artifact is developed.As illustrated in Fig 5, papers based on Experimental results tend to be cited most often with 42 publications, followed by Review papers with 32 papers and Simulation papers with only 9 papers.An interesting finding shown in the
Fig 6 .
Fig 6.Classification of top 100 RFID papers based on the subject of study
Fig 1 .Fig 3 .
Fig 1. RFID Map: contribution of different countries with RFID technology, based on number of
Fig 6 .
Fig 6.Classification of top 100 RFID papers based on the subject of study South Korea with respectively 15.73, 15.21 and 11.75 percent demonstrate the higher rate of the self-citation; and the Germany with only 2.25 percent self-citation shows the lower rate.RFID Map: contribution of different countries with RFID technology, based on number of publications.
Table I_Countries", all RFID documents were published by 75 different countries, where the top 10 countries based on the number of publications are listed in the attached table (see Fig 1 for a shaded-coded map).The number of publications in different countries is extracted based on the authors' affiliations.For documents with authors from different countries we multiple count the document.For example if there is a document with two authors, from two different countries (we assume US and UK) as their affiliations; then we count the document twice and add both US and UK publication number by one.As presented in the attached table, the United States (USA) was the most prolific country publishing RFID-related documents with more than 23.5 percent of all the publications or 1,215 total published documents, as well as 48 percent of the most-cited documents.Further, with 18,440 total citations, the USA accounts for more than 41 percent of all citations of RFID documents in this data set.The USA h-index is 60 which double's China's h-index of 30.Based on the number of publications the USA, China and South Korea are the first three countries, but based on the average and 48 documents in the top 100 cited documents is the country with the most contribution toward RFID publications both in terms of quantitative impact (number of publications) and qualitative impact (high quality publications).The next notable finding is the rate of self-citation.From this perspective, China, Taiwan and
Table VI_ResearchAreas
".As presented in attached table, Engineering (2,881), Computer Science (1,538), Telecommunication (1,279), Physics (327), and Operations Research & Management Science (306) are the five top research areas with contributions toward the RFID literature.The five areas at the top of the 100 most cited papers are the same and in the same order, but the Materials Science also included -Engineering (51), Computer Science (26), Material Science (15), Telecommunication (14), Physics (14) and Operation Research & Management Science (13).Based on the attached "Table VI_ResearchAreas", the Science Technology Other Topics, Operation Research Management Science and Construction Building Technology by 14.85, 14.31, and 13.78 average citation per document are the most cited documents.
Distribution by Web of Science category
Based on subjects, journals are categorized in different Web of Science subject categories by Thomson Reuters.Based on these categories, all RFID documents are published in 178 different subjects, where some journals belong to more than one subject category.Based on WoS Subject category, as presented in "Attachment II, Post-Print version of:Shakiba, M., Zavvari, A., Ale Ebrahim, N., & Singh, M. J. (2016).Evaluating the academic trend of RFID technology based on SCI and SSCI publications from 2001 to 2014.Scientometrics 1-24.http://dx.doi.org/10.1007/s11192-016-2095-y8Based on the average citation per document the Operation research & Management Science with 14.31 cite per document, published highest cited documents.As we already mentioned there are 5,159 published documents in the SCI and SSCI databases with contributions to RFID.These documents are cited 45,568 times.On average a paper is cited 8.83 times, where the h-index of RFID documents is 80; that is, there are 80 documents that have 80 or more citations.It is interesting to note that more than 8.8 percent of all the citations belong to the top 10 most cited papers.The top 100 papers account for about 30 percent of all citations and around 63 percent of all citations belong to the first 10 percent most cited papers.Not all RFID published documents have citations.Our count shows that 1,654 of the documents or 32% have no citations.Fig 4 illustrates the distribution of RFID documents by citation per year, from 2001 to 2014.As shown in Fig 4, the trend of citation is increasing sharply during the mentioned period, which indicates an increase in interest in this area from academics.
Electrical & Electronic with 33 documents is the top category, where Telecommunications and Material Science with 14 and Operations Research & Management Science with 13 documents are in the second and third rank.
Table 1 .
Table 1 is that while there are 42 Experimental papers, the research methodology with the highest average number of citations per document are the two papers that propose a model with 188 citations per document.Followed by this are Review papers, Experimental papers, and Theoretical Analysis with about 141, 131 and 109 average citations per document respectively.Frequency of nine research methodologies across the top most cited 100 papers in RFID Times cited for nine different research methodologies Table 2 presents these four major groups and the number of publications in each group.Based on the results, RFID Technology with 36 publications is the most frequent subject with 5,590 total citations.However, the Policy Issues group with only eight articles has the Post-Print version of: Shakiba, M., Zavvari, A., Ale Ebrahim, N., & Singh, M. J. (2016).Evaluating the academic trend of RFID technology based on SCI and SSCI publications from 2001 to 2014.Scientometrics 1-24.http://dx.doi.org/10.1007/s11192-016-2095-y 10 highest average citations with 184.63 citations per paper.The four major subject categories and their subcategories are presented in the Fig 6.
Table 2 .
Times cited for four different subject categories
Table 4
illustrates these sub-categories of RFID applications and their number of publications.Based on the table, of the 33 papers in the RFID Applications group, Food Industry & Agriculture, Supply Chain
Table 4 .
RFID Applications sub-categories publication number ", there are 16 institutes and universities in South East Asia among the top 20 world organizations advancing research in this area.At the same time, as shown in "Attachment II, Table III_FundingAgencies", it is observable that South East Asia is taking RFID very seriously where six of the top 10 funding agencies are located.This focus is easily demonstrated by a quick check of recent patterns of RFID search terms by regional interest on Google Trends.Consequently, we expect the ratio of RFID-related publications originating from South East Asia countries will increase sharply over the next decade.The recent trends shown in Fig 3 help justify thisexpectation.From this study, we conclude that due to its commitment to advancing RFID knowledge South East Asia is positioned to overtake North America and Europe in this area and dominate this field in the coming years.Combining this point with the previous point, we believe South East Asia may be poised to dominate the emerging RFID-related subset fields as well.iv.As shown in "Attachment II, Table IV_SourceTitle", this study highlights that the majority of research has been published in technical journals such as IEEE publications and other engineering outlets, and according to "Attachment II, Table VI_ResearchAreas" tends to focus on technical research areas such about the need for more well-rounded RFID research agenda.v.Our literature survey shows that about a third of the 100 most-cited articles concentrate on topics in the RFID Applications category of our framework.There are 9 different application areas shown in Fig 6 which are investigated in these documents, where Food Industry & Agriculture, Supply Chain Management, and Retailing & E-commerce are the most popular sub-categories with 7, 6, and 5 papersrespectively.In the future, RFID is expected to be used in more applications in various locations and environments.In addition to supply chain management and health care, over the next 10 years we expect to see more investigation and publications on RFID applications where the technology is applied in many other business contexts including asset tracking, traffic and logistics systems, agricultural settings, electronic commerce, and smart products, just to mention a few.Researchers in the social sciences of management, information systems, and operations management should focus on these areas in the coming decade.In addition to the experimental methodology, this is where we expect to see more case studies, simulations, theoretical analysis, and proposed models to address adoption, usage and value in these environments.In particular, within different domains and contexts the correct business model must be examined to account for value maximization as well as control of costs of implementing Print version of: Shakiba, M., Zavvari, A., Ale Ebrahim, N., & Singh, M. J. (2016).Evaluating the academic trend of RFID technology based on SCI and SSCI publications from 2001 to 2014.Scientometrics 1-24.http://dx.doi.org/10.1007/s11192-016-2095-y16 relatively low security levels in many RFID applications.Consequently, there have been fewer application studies and a large number of technical publications in this area.More recently, however, security problems have been lessened due to advances in RFID technology.The impact of a perception Post-
Table 1 .
Times cited for nine different research methodologies
Table 2 .
Times cited for four different subject categories
Table 4 .
RFID Applications sub-categories publication number | 8,769.6 | 2016-08-08T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Study on Growth and Optical , Scintillation Properties of Thallium Doped Cesium Iodide-Scintillator Crystal
Single crystal of Thallium doped cesium Iodide –Scintillator crystal was grown using vertical Bridgeman technique. The grown crystal was included for cutting and polishing for the characterization purpose and this crystal was studied by optical transmission properties, photo luminescence and thermally luminescence characteristics. Gamma-ray detectors were fabricated using the grown crystal that showed good linearity and nearly 7.6% resolution at 662 keV.
INTRODUCTION
Room temperature semiconductor radiation detectors such as CdZnTe, CdTe, and HgI2 are widely investigated in nuclear measurements (Carini et.al., 2007).These semiconductors offer very good energy resolution compared with scintillator detectors, but detection efficiency and physical rigidity is very low.Currently, compound semiconductor materials are more expensive than scintillator crystals.So, scintillator detectors are still promising candidates for an NDT and a radiation monitoring system 1 .Thallium doped Cesium Iodide, a scintillator crystal which is a high-potential detector material, due to its wide applications in photocathode, gamma ray detectors and particle detectors in nuclear experiments [2][3][4][5] .Tl doped CsI single crystal is old and scintillation efficiency and is less hygroscopic and less brittle than NaI.This material could not achieve its full potential in the past due to the unavailability of a matching PMT.However, use of PIN photodiodes, which show good efficiency at longer wavelengths and compact in size, has renewed the interest in the CsI(Tl) crystals.Also, since CsI:Tl shows different decay times for different charge particles, pulseshape discrimination techniques can be effectively used for the particle identification .Single crystals of CsI:Tl can be grown by Bridgeman as well as Czochralski methods.Though, the crystal growth using the Bridgman method is not a problem, sticking of the crystal with crucible wall and hence recovery of the crystal after the growth and thermal and mechanical stresses generated there in are important issues of considerable interest 6 .In this paper we are reporting the growth of Thallium doped CsI single crystals in Glassy carbon crucibles by vertical Bridgeman technique.Gamma-ray detectors were fabricated and characterized to study the effect of growth conditions on the quality of crystals.
CsI:Tl crystal growth process
Glassy carbon crucible having 35 mm in diameter with conical bottom was taken for the growth experiment.High purity (99.995%)CsI and Tl were taken in the Glassy carbon crucible and sealed it after the vacuum de-hydration process.This vacuum de-hydration process has done under 10 -4 mbar argon ambient at 150°C using vacuum pump for avoiding the formation of impurities during raw material preparation connecting with the salt hydrolysis process under heating and of oxygen containing impurities.
The crucible was then put inside the furnace on a growth station and temperature was raised so that the bottom of the crucible was at 630°C.Melt was kept at this temperature for 12 hours.After The temperature stabilizes, the ampoule is lowered slowly into the lower temperature gradient zone at a rate of 1mm/h to initiate crystal growth from the ampoule's conical bottom.The upper zone has temperatures above the melting point of Cesium iodide.The lower zone has a temperature below melting point and an adiabatic zone as a baffle between the two.One K type thermocouple with the motor has attached for measuring the axial temperature gradient along the furnace.
To prevent cracking resulting from thermal stress in the crystals, as grown boule was annealed in the lower zone during growth.After the growth had finished, the furnace was cooled to room temperature at a rate of 20 ~ 50°C/h.The temperature profile in the melt is also shown in Fig 1 .A fine single CsI :Tl crystal has been grown as shown in fig.2[a] and fig.2[b].
Result and analysis Transmission Characteristics
Transmission spectrum was recorded in the wavelength range 200-1100 nm employing a double beam photo-spectrometer Chemeto 2500.The polished samples (20 mm thickness) were used for these transmission characteristics (Figs. 3) From the curve shown in Fig 3 ., the cut off wavelength of the transmission is around 300 nm.So we can observe that the grown crystal have the ability to absorb a wide range of energy spectrum.It enables the property of scintillation test [7].
Luminescence Characteristics
For the photoluminescence measurements, samples (2mm thick) were cut from the crystal ingot and polished to optical finish.This luminescence spectrum was recorded in the range 200-900 nm in a reflecting geometry, employing a florescence spectrometer Edinburgh model FLP-920.
The emission spectra of Tl-doped CsI crystals were measured at room temperature.The Fig. 4 shows the emission spectrum of Tl-doped CsI crystals.The addition of Tl by 0.02% exhibits peaks at 550nm.The light emission spectrum of CsI(Tl) excited by ionizing radiation has different components.The main contribution is due to the charge recombination near a Tl+ centre, giving a broad band at 550 nm [8,9].From earlier reports it was found that for 0.02% doping of Tl the peak at 550 nm is mainly due to the Tl dimer centres .The peak at 550 nm can be attributed to the exitons of the CsI (Tl) and this forms the major luminescence part due to doping of Tl.The emission spectrum is having the maximum at 550 nm, which allows photodiodes to be used to detect the emission.The output of the photoelectrons from CsI(Tl) (photodiode) crystal is higher than that for NaI (Tl) photomultiplier detector.The increase in doping concentrations may increases the intensity of exciton emission when compared to dimer.The small peak at 550nm in the emission band is due to excitonic emission.As reported, the 345nm peak is suppressed in crystals activated by Tl.This clearly indicates the vacancy type defects suppression due to doping It is evident that (halogen ion) lattice defects mainly contribute to this lattice emission than structural defects due to plastic deformation.
Thermally Stimulated Luminescence (TSL)
Thermally stimulated luminescence (TSL) glow curves of the crystals were recorded in the 25-200 1C range employing a heating rate of 0.5 0 C/min.Luminescence spectra as well as afterglow spectra were recorded in the range 200-900 nm In a reflecting geometry, employing a florescence spectrometer Edin-burgh model FLP-920.The TSL emission spectrum was recorded by holding the sample at 45°C.
The phenomenon of TSL is explained using a hand picture of solids with respect to its electronic energy levels.It is known that a crystal contains certain number of lattice defects such as vacancies.Interstitials and dislocations present in it as well as some additional defects created by doping them with chemical impurities.These defects introduce localized energy levels (i.e donor and acceptor levels) in the forbidden energy gap.These levels either belong to the impurities / lattice defects present in the crystal or the host lattice under the influence of them.
Free electrons and holes are generated in a crystal, upon exposure to ionizing radiation and most of them recombine immediately resulting in the emission of light or in lattice vibrations.However, some of the electrons and holes are trapped at the donor and acceptor levels respectively resulting in the formation of color centers.Most of them are trapped charges at vacancies (like F-centres, F-aggregates, etc.) or at interstitials (like V centers).According to Pooley, this process of colour centre production by radiolysis is known to proceed through an excitonic mechanism.On warming the crystal after irradiation.Electrons (or holes) are released from their traps, become mobilized and wander through the crystal.When they meet their counterparts, recombination between electrons and holes occurs with a part of the recombination energy released in the form of light and this is termed as TSL 10,11 .
Usually in TSL studies, the irradiated sample is heated at uniform heating rates.The plot of total light intensity emitted with temperature of the sample is called as glow curve and the spectral distribution of the luminescence at a particular temperature is called as TSL emission spectrum.The energy required to release the electrons (or holes) from the trap is called thermal activation energy 'E' and the area under the glow peak is related to the concentration of filled traps.Normally, a number of trapping sites may exist with different activation energies.On heating the material with uniform heating rate, charge carriers are released from these traps at different temperatures.This results in the appearance of a number of glow peaks in a glow curve with their maxima at different temperatures, each glow peak representing the thermal annihilation of particular type of defect 1 color centre in the crystal.
The glow curve for grown crystal (for radiation dose of 200 Gy) is shown in Fig. 5.The peak temperature shifts towards higher temperatures for crystal.As UV light does not have sufficient energy to create new defect centres in the CsI(Tl), the root cause of coloration of the grown crystal may lie somewhere else.One possibility for such coloration could be proposed as the capture of electrons by defect centers already formed during the crystal growth process.It is known that thermally stimulated luminescence peak at nearly 60 o C.
Scintillation test
It is generally accepted that the scintillator contribution to an overall energy resolution, called the intrinsic resolution, is a fundamental limitation of the obtainable energy resolution of scintillation detectors 12 .Scintillation studies were carried out on a 15 mm diameter and 20mm long crystal.The crystal was wrapped with 10 layers of Teflon tape leaving one face open to connect with a photo multiplier tube (PMT).Optical grease was applied to couple the crystal with the PMT avoiding any trapped air bubbles.To verify the linearity of the detection systems, five different isotopes were used: 137Cs, 134Cs, 60Co, 22Na and 57Co, having a dominant gamma emission at roughly 662 keV, 605 keV, 1173 keV, 511 keV, and 1332 keV, respectively 13 .The Gamma ray spectra of 137Cs, 134Cs, 60Co, 22Na and 57Co sources were recorded using the detector assembly consisting of CsI:Tl scientilators , a PMT, a preamplifier, a spectroscopic amplifier and an 8k multichannel analyzer.A shaping time constant of 3 microsecond was used.
Linearity of the pulse height response of the gamma detector was checked upto 1332 keV 7 From it is noted that an energy resolution of about 7.6% at 662 keV.Energy resolution studies determine the ability of a detector to distinguish Gamma sources with slightly different energies, which is of importance for gamma spectroscopy.
In CsI(Tl), the gamma ray (137Cs) interactions are almost entirely due to the photoelectric effect, and the scintillation response can be attributed to primary photo-electrons and the subsequent electrons that appear as the result of filling the vacancy in the ionized atom 14 .The samples used were of good optical quality, with dimension of 1 cm diameter and 1 cm height cylindrical shape crystals.All crystal surfaces were polished and the crystals coupled to PMT using the BICRON optical grease.The crystals were packed before measurements into aluminium foils with wall thickness of 4 mm.To ensure light collection only through the contact area scintillator-PMT and to avoid the photocathode exposure to scattered radiation, the container was covered from all surfaces, including the photocathode side, by black absorbing tapes.For measurements, we used gamma quanta source 137Cs.The reported energy resolution with respect to the 137Cs are 11.270.2%for CsI(Tl).But the present study shows the better energy resolution 7.6% for CsI for the optimized Tl concentration of 0.02 mole% at room temperature.Energy resolution studies determine the ability of a detector to distinguish gamma sources with slightly different energies, which is of great importance for gamma-spectroscopy 15,16 .
Fig. 6-7.Scintillator performance grown CsI(Tl) crystal with calibration curve drawn from detector to different radio nuclides
CONCLUSION
High quality single scintillator crystal of 0.02 mole % Tl doped Cesium Iodide was grown by vertical Bridgeman technique.Glassy Carbon crucible was used to avoid sticking of crystal with the crucible wall.The grown crystal was extracted from the crucible without involving an inversion process thereby avoiding thermal and mechanical stresses.The transmission and luminescence measurements shows that the purity of the crystal quality and wide transparency.Using these crystals ray detectors were fabricated which show good linearity and 7.6% resolution at 662 keV. | 2,745.4 | 2014-06-26T00:00:00.000 | [
"Materials Science",
"Physics"
] |
New material platform for superconducting transmon qubits with coherence times exceeding 0.3 milliseconds
The superconducting transmon qubit is a leading platform for quantum computing and quantum science. Building large, useful quantum systems based on transmon qubits will require significant improvements in qubit relaxation and coherence times, which are orders of magnitude shorter than limits imposed by bulk properties of the constituent materials. This indicates that relaxation likely originates from uncontrolled surfaces, interfaces, and contaminants. Previous efforts to improve qubit lifetimes have focused primarily on designs that minimize contributions from surfaces. However, significant improvements in the lifetime of two-dimensional transmon qubits have remained elusive for several years. Here, we fabricate two-dimensional transmon qubits that have both lifetimes and coherence times with dynamical decoupling exceeding 0.3 milliseconds by replacing niobium with tantalum in the device. We have observed increased lifetimes for seventeen devices, indicating that these material improvements are robust, paving the way for higher gate fidelities in multi-qubit processors.
Participation Ratio
To isolate geometric contributions to relaxation we simulated the participation ratios of the 70 µm gap double-pad geometry using a method similar to Wang et al. 3 , assuming the same simplified junction geometry. A device with a dielectric layer of thickness 3 nm and dielectric constant = 10, similar to the aluminum oxide layer simulated in 3 , gave a substrate-metal interface participation ratio of 1.6 * 10 −4 , excluding the areas within 1 µm of the junction.
Supplementary Note 2: Transmon on a Silicon Substrate
We fabricated a 2D, double-pad, tantalum transmon on silicon (Device Si1) with a similar design to that used for the devices on sapphire. The primary elements that changed during the fabrication process were: (i) a different plasma etch time to avoid overetching into the silicon, (ii) no aluminum layer was deposited on top of the e-beam resist prior to e-beam lithography, and (iii) the e-beam intensity was adjusted during the lithography step. We found that reactiveion etching severely roughened the silicon surface (17 nm RMS surface roughness, measured with a Keyence Optical Profilometer). We plan to optimize this fabrication process in the future.
Supplementary Note 3: Additional Materials Characterization X-ray Diffraction
We use XRD to study the crystal structure of our films over a much larger area than is feasible with STEM images (Supplementary Fig. 7). An acquired spectrum of a film exhibits a strong peak corresponding to α-tantalum [110] 4 , corroborating STEM images that suggest that our films grow uniformly along that direction (Fig. 3a). Additionally, we observe peaks corresponding to sapphire [006] 5 and α-tantalum [220] 4 . We do not detect a β-tantalum [002] peak at 33.7 • (2θ) ( Supplementary Fig. 7, inset left) 4 . This provides further evidence along with our T c and STEM measurements that the tantalum films are uniformly in the α phase. We note that there are a few unassigned small peaks which could result from contamination, instrumental artifacts, or impurities or defects in the tantalum films ( Supplementary Fig. 7, inset right).
Grain Boundaries
We further interrogate the grain boundaries visible in a plane-view image ( Supplementary Fig. 8a) by using energy dispersive x-ray spectroscopy (EDS) to perform spatially-resolved elemental analysis. We find a uniform distribution of tantalum ( Supplementary Fig. 8b) and oxygen ( Supplementary Fig. 8c) over the region, and no oxygen enrichment at the grain boundaries. This suggests that our films do not grow oxide between the grains, and that the image contrast observed in Supplementary Fig. 8a arises instead from diffraction contrast caused by interfacial defects.
A high-resolution STEM image of a grain boundary elucidates the crystal structure at the boundaries ( Supplementary Fig. 8d). Taking a diffraction pattern of a grain boundary region indicated by a green square in Supplementary Fig. 8d gives a pattern consistent with twinning ( Supplementary Fig. 8e). A diffraction pattern of the whole region in Supplementary Fig. 8d illustrates the rotational symmetries of the grains ( Supplementary Fig. 8f).
Tantalum Oxide
An atomic-resolution STEM image of a 50 nm region of the tantalum surface reveals an amorphous oxide that is 2-3 nm thick ( Supplementary Fig. 9a). We further study this oxide using XPS to estimate oxide thickness and composition over a larger area (250 µm spot size) (Supplementary Fig. 9b, d-f). XPS scans of the tantalum film show two sharp lower binding energy peaks assigned to tantalum metal 4f 7/2 and 4f 5/2 orbitals (lower binding energy to higher binding energy, respectively), two peaks at higher binding energy corresponding to the same orbitals of Ta 2 O 5 6,7 , and two small 5p 3/2 peaks corresponding to the metal and oxide, respectively 8 . Assuming the mean free path of electrons in tantalum is 2 nm at 1480 eV 9 , and only taking into account inelastic scattering, a thickness can be estimated by comparing the ratio of oxide to metal peak areas. We corroborate this estimation using angle-resolved XPS (ARXPS), where we vary the angle between sample and detector, changing the relative distances that the emitted photoelectrons travel through the metal and oxide layers to reach the detector ( Supplementary Fig. 9b). We account for this geometry in our modeling, and extract the oxide thickness at different angles ( Supplementary Fig. 9c). The thickness estimation remains fairly consistent until higher angles, when other effects related to surface morphology or elastic scattering become more significant (Supplementary Fig. 9c) 10 .
To investigate the variability of oxide thickness between devices, we show normal incidence XPS data from three devices from different tantalum depositions with different surface cleaning fabrication procedures (Supplementary Fig. 9d-f). In addition to variations in other fabrication steps, we note that the device shown in Supplementary Fig. 9d was only solvent cleaned, and the devices in Supplementary Fig. 9e and Supplementary Fig. 9f were piranha cleaned. The peak shapes and ratio of oxide to metal peak area are similar between all these devices, suggesting the oxide thickness and composition is robust to processing steps.
Sapphire-Tantalum Interface
We study the heteroepitaxial growth interface in our devices by directly imaging small regions of the sapphire-tantalum interface using iDPC STEM. In addition to the iDPC STEM image shown in Fig. 3e, we include an image showing the interface between sapphire and tantalum viewed from 1100 sapphire and 100 tantalum zone axes ( Supplementary Fig. 10a). We also propose atomistic models for an ideal sapphire-tantalum interface shown in Supplementary Figures 10b and c to help visualize the lattice matching between sapphire and tantalum, and as a starting point for future studies on the impact of sapphire surface morphology on heteroepitaxial growth. For example, the interfacial dislocations visible in Fig. 3e likely result from the 12.6% lattice mismatch between the [112] axis of tantalum and the [1120] axis of sapphire ( Supplementary Fig. 10c), as well as atomic layer steps in the sapphire that are evident in the STEM image.
XPS, AFM, XRD characterization
All XPS, AFM, and XRD data were acquired using tools in the Imaging and Analysis Center at Princeton University.
XPS was performed using a Thermo Fisher K-Alpha and X-Ray Spectrometer tool with a 250 µm spot size. The data shown in Fig. 3d, Supplementary Fig. 3c and d, and Supplementary Fig. 9d-f were obtained by collecting photoelectrons at normal incidence between sample and detector. The angle-resolved XPS (ARXPS) spectra shown in Supplementary Fig. 9b were collected by changing the angle between sample and detector. All AFM images were taken with a Bruker Dimension Icon3 tool operating in tapping mode (AFM tip from Oxford Instruments Asylum Research, part number AC160TS-R3, resonance frequency 300 kHz). The XRD spectrum shown in Supplementary Fig. 6 was collected with a Bruker D8 Discover X-Ray Diffractometer configured with Bragg-Brentano optics. Two 0.6 mm slits were inserted before the sample, and a 0.1 mm slit was placed before the detector.
Electron Microscopy Characterization
SEM and STEM images were also collected at the Imaging and Analysis Center at Princeton University. STEM thin lamellae (thickness: 70-1300 nm) were prepared by focused ion beam cutting via a FEI Helios NanoLab 600 dual beam system (FIB/SEM). All the thin samples for experiments were polished by a 2 keV Ga ion beam to minimize the surface damage caused by the high-energy ion beam. Conventional STEM imaging, iDPC, atomic-resolution HAADF-STEM imaging and atomic-level EDS mapping were performed on a double Cs-corrected Titan Cubed Themis 300 STEM equipped with an X-FEG source operated at 300 kV and a super-X energy dispersive spectrometry (super-X EDS) system.
Lithography and etching process development SEM images were collected with a FEI Verios 460XHR SEM and a FEI Quanta 200 Environmental SEM. Various tilt angles, working distances, and chamber pressures were used to eliminate charging effects.
Supplementary Note 4: CPMG
To reduce our devices' low-frequency noise sensitivity we applied a sequence of π-pulses 11 . Each pulse had a Gaussian envelope with σ around 20-50 ns and was truncated at ±2σ. Due to the large number of sequential pulses, we found that reducing gate error through frequent calibration was important.
To derive the qubit's noise spectral density ( Supplementary Fig. 11) from such a pulse sequence, we follow the procedure in 11 . The signal-to-noise ratio decreases as the overall delay time between initial excitation and measurement increases. For clarity, we include only delays spanning up to approximately T 1 . For simplicity we assume the gates are instantaneous. We find a noise power spectral density that is well fit by A/f α + B with α = 0.7.
Supplementary Note 5: Fitting Procedure
We fit our transmon T 1 data to f (∆t) = e −∆t/T 1 , where T 1 is a fit parameter and the function represents the population in the excited state. We fit any T 2 data taken with fringes to the fit f (∆t) = 0.5e −∆t/T 2R cos(2π∆tδ + φ 0 ) + 0.5 where T 2R , δ, and φ 0 are fit parameters. For echo and CPMG experiments, we fit our T 2 data with a stretched exponential, f (∆t) = 0.5e −(∆t/T 2 ) n + 0.5, where T 2 and n are fit parameters. If n < 1, the data is refit to a pure exponential. Supplementary Fig. 12 shows a representative decay for a low, average, and high value of T 2,CP M G for the data shown in Fig. 2a. In time sequences, data traces with obvious abnormalities or poor fits as measured by root-mean-square error are discarded. Here we include measurements of devices with different designs, fabrication procedures, and packaging. Devices labeled "Nb" were made with niobium instead of tantalum (Nb1 was heated to 350 • C then cooled for 20 minutes before deposition, Nb2 was deposited at approximately 500 • C) and all other devices were made from tantalum. Device Si1 was composed of about 200 nm of tantalum deposited on high-resistivity silicon. Each individual device is labeled with its own number. Devices marked with an additional letter indicate different thermal cycles of the same device. Entries marked with a " †" had three or fewer repeated measurements, and the reported errors were calculated by propagating the fit uncertainties. Otherwise the errors were calculated by finding the standard deviation of multiple measurements. Devices labeled with a " * " were fit without constraining the line of best fit to be normalized and have the proper offset. The average T 2,CP M G column denotes the time averaged dynamical decoupling decoherence time at an optimal gate number. The quality factor is calculated using Q = ω q T 1 where ω q is the qubit frequency. After piranha cleaning and etching, carbon is reduced by around a factor of five, and zinc is no longer detected. "Before" corresponds to the surface after dicing and solvent cleaning but before acid procedures, and "after" is following acid cleaning steps.
Supplementary Supplementary Figure 11. Spectral decomposition for Device 11c. a, T 2,CP M G as an increasing number of pulses reduce the qubit's sensitivity to low-frequency noise. At each point we apply the pulse sequence shown in the inset with a fixed number of π-pulses and vary the delay, ∆t, with values ranging from 16 µs to 2 ms. Error bars give the standard deviation in the T 2,CP M G fit parameter. Inset: Measurement pulse sequence. X and Y identifies the axis of rotation. Subscripts 90 and 180 refer to a π/2 and π pulse, respectively. b, Noise power spectral density S(ω) of the same data as (a), following 11 . The blue dashed line indicates a fit by eye to A/f α + B where α = 0.7, A = 2e6s −1 , and B = 3e2s −1 .
Supplementary Figure 12. CPMG traces. Low (a), middle (b), and high (c) T 2,CP M G traces from the data in Fig. 2a, showing the excited state population P e as a function of delay time.
All three traces were fit to a stretched exponential with the exponent constrained to be larger than one.
Supplemental Information References: | 3,040.8 | 2020-02-28T00:00:00.000 | [
"Materials Science",
"Physics"
] |
An Intelligent Quadrotor Fault Diagnosis Method Based on Novel Deep Residual Shrinkage Network
In this paper, a fault diagnosis algorithm named improved one-dimensional deep residual shrinkage network with a wide convolutional layer (1D-WIDRSN) is proposed for quadrotor propellers with minor damage, which can effectively identify the fault classes of quadrotor under interference information, and without additional denoising procedures. In a word, that fault diagnosis algorithm can locate and diagnose the early minor faults of the quadrotor based on the flight data, so that the quadrotor can be repaired before serious faults occur, so as to prolong the service life of quadrotor. First, the sliding window method is used to expand the number of samples. Then, a novel progressive semi-soft threshold is proposed to replace the soft threshold in the deep residual shrinkage network (DRSN), so the noise of signal features can be eliminated more effectively. Finally, based on the deep residual shrinkage network, the wide convolution layer and DroupBlock method are introduced to further enhance the anti-noise and over-fitting ability of the model, thus the model can effectively extract fault features and classify faults. Experimental results show that 1D-WIDRSN applied to the minimal fault diagnosis model of quadrotor propellers can accurately identify the fault category in the interference information, and the diagnosis accuracy is over 98%.
Introduction
UAV is a reusable unmanned aerial vehicle controlled by remote control or its own procedures through radio technology. After entering the 21st century, UAV has shown its skill in various fields. In the Marine industry, UAV can be used to complete coastal zone monitoring and Marine mineral exploration [1]. In agriculture, UAV can be used to fertilize fields and monitor the growth status of crops [2]. In geology, UAV can be used to conduct geological disaster investigation and geological mapping [3]. In terms of traffic management, UAV can be used to monitor road conditions and assist in dealing with traffic accidents [4].
As a representative of small multi-rotor aircraft, quadrotor UAV has the advantages of small size, light weight, and easy control, so it is widely favored by people. The quadrotor has four rotors that generate upward thrust and torque, which control the speed of the rotator to perform various actions [5]. With the popularization of quadrotor UAV, the research on its fault identification and diagnosis has become more and more important. Under the long-term operation of the quadrotor, its actuator system, namely the electronic governor, motor and propellers, bears a heavier load. If the actuator system has a serious failure, such as serious damage to the propeller or serious failure of the motor, the UAV will not be able to obtain the required lift, resulting in flight instability or even crashing, which will eventually cause serious property losses and even casualties.
If there is a minor fault in the actuator of the UAV, the output power of the UAV will be reduced correspondingly. In this case, the UAV can still fly, but its stability and attitude will be slightly different from the healthy UAV. Although the quadrotor has good fault-tolerant control capabilities, it is also necessary to diagnose the fault when it is in the early state, because the accumulation of such minor faults may eventually cause serious problems such as the crash of UAVs. So it is of great significance to find a method to accurately detect and locate the early minor faults of UAV. However, due to the interference of the external environment and the fault-tolerant control of the UAV itself, these potential faults such as minor faults of actuator are difficult to be detected. Therefore, how to accurately diagnose the early minor faults of the UAV actuator from the numerous interference information is an urgent problem to be solved.
Especially for UAVs used for disaster surveys, such UAVs need to fly into various unknown dangerous locations, such as areas suffering from earthquakes, floods, and tornadoes. This kind of UAV is more likely to suffer body damage caused by external collisions, so the propellers and motors of such UAVs are more prone to failure. Therefore, designing a fault diagnosis method, which can diagnose minor faults in time when the propeller of the UAV is damaged in the early stage, is of great significance to the practical application of UAV.
In recent years, many scholars have used model-based methods to identify the actuator failures of quad-rotator UAV, such as Zhang et al. [6] used interactive multi-models to detect and locate single-actuator failures of quad-rotors. Avram et al. [7] used a non-linear adaptive estimation method to detect and isolate quad-rotor actuator failures. Bauer et al. [8] used a multi-model adaptive estimation method for fault detection and reconstruction of small UAVs. Miao et al. [9] designed and constructed a UAV fault diagnosis algorithm based on adjustable nonlinear PI observer to realize the effective fault identification of continuous-time system. Kim et al. [10] used the extended Kalman filter to estimate the efficiency parameters of each UAV actuator, and the estimated coefficients were used to perform fault diagnosis. Antonio et al. [11] developed a robust actuator fault diagnosis algorithm based on Adaptive eXogenous Kalman Filter by combining nonlinear observer and linearized adaptive Kalman filter, and the algorithm tested for actuator fault diagnosis of a hexacopter UAV. The model-based diagnosis method has good robustness and can diagnose unknown faults, but the premise of this method is to establish an accurate model for the target system, but the structural parameters of the quadrotor UAV are relatively complicated. Therefore, this method relies too much on the mathematical model of the object, which is not easy to implement in practical applications and has limited scalability.
Others uesd signal processing-based methods to detect faults. This kind of method does not need to establish a quantitative or qualitative mathematical model of the system, but only collects flight data and extracts fault features from the original data through signal processing to diagnose system faults. For example, Rangel-Magdaleno et al. [12] use discrete wavelet transform and Fourier transform to process audio data to identify propeller unbalance faults of unmanned aerial vehicles.Yousefi et al. [13] used logistic regression and linear discriminant analysis algorithm to detect unmanned aerial vehicle faults after de-noising the voltage and current data of the four actuators of the UAV. Park et al. [14] used multivariate statistical analysis techniques such as partial least squares regression to process data and diagnose propeller failure. Bondyra et al. [15] used wavelet packet decomposition, fast Fourier transform, and signal power spectrum to extract features of vibration signals under single-sided damage of the UAV propellers, and then advanced fault diagnosis through support vector machine classifier. Altinors et al. [16] applied Decision Tree, Support Vector Machines, and K Nearest Neighbor algorithms to machine learning, and applies these algorithms to the sound data received by the motor for UAV fault diagnosis.
With the development of computational intelligence, deep learning has become a hot spot in the field of fault detection. Fault diagnosis method based on neural network preprocesses the original data and inputs it into the neural network model to obtain the fault diagnosis result directly. Chen et al. [17] first used wavelet packets to extract energy entropy, and then used a genetic algorithm optimized BP neural network to construct a fault diagnosis model to realize the detection of UAV sensor faults. Guo et al. [18] used short-time Fourier changes to transform the signal into a time-frequency graph, and then used a convolutional neural network to extract fault features to realize the sensor fault diagnosis of the UAV. Iannace et al. [19] collected the audio on the UAV propellers in flight, and used the artificial neural network model to analyze it to detect faults. Gao et al. [20] used the raw data of MEMS inertial sensors as input to establish a multi-scale convolutional neural network model to diagnose the fault of the UAV temperature sensor. Xiao [21] used wavelet packet to extract the fault characteristics of UAV sensors, and proposed a observer method based on BP neural network to detect and process single or multiple sensor faults online. Liu et al. [22] transformed the audio data of UAV flight into time-frequency spectrogram for diagnosis, and developed a diagnostic model based on convolutional neural network and transfer learning techniques, realizing the damage detection of propeller. Jia et al. [23] proposed a novel deep network constructed by multi-layer extreme learning machine based auto-encoder to solve the problem of UAV actuator fault diagnosis under imbalanced data, the deep network has the advantages of strong feature mining ability, high accuracy and fast speed.
It can be seen that the research on the fault diagnosis of quad-rotor UAVs based on data-driven has attracted more and more attention. This method is widely used and computationally efficient, taking into account the uncertainty of the model and the sensor noise. However, in previous studies, most of the fault diagnosis methods of quadrotor UAV need to extract fault features through some signal processing methods before fault diagnosis, and cannot directly realize the fault diagnosis of UAV through the original signal. Therefore, end-to-end fault diagnosis based on convolutional neural networks still has a lot of research space. Among them, one-dimensional convolution neural network is often used in sequence model, which can analyze the time series of sensor data (such as gyroscope and acceleration meter data), and realize self-extraction of fault features in time domain [24,25]. It integrates feature extraction, feature dimension reduction and pattern recognition to achieve end-to-end fault diagnosis.
In previous studies, research on quadrotor fault diagnosis is mostly aimed at obvious faults. In this case, quadrotor has shown an obvious fault phenomenon, and the relevant research has been relatively mature. However, there are few studies on the early minor faults of the quadrotor actuator system, and most of the methods currently have low diagnostic accuracy. In view of the above problem, this paper mainly focuses on the early minor faults of the actuator when the quadrotor can still fly, and tries to diagnose and locate these faults, so as to avoid serious accidents caused by the accumulation of minor faults.
The propeller minor damage fault is one of the common fault types of quadrotor. In addition, different fault locations have different effects on the quadrotor. To solve the problem of low accuracy of propeller minor damage fault diagnosis for quadrotor caused by the multiple interference information and the difficulty in feature extraction of quadrotor, this paper takes a certain type of quadrotor used for disaster investigation as the object, proposed a diagnosis algorithm based on 1D deep residual shrinkage network with a wide convolution layer (1D-WIDRSN). The algorithm integrates feature extraction, feature dimension reduction and pattern recognition to achieve end-to-end fault diagnosis. The innovative contributions of this paper are summarized as follows: (1) An end-to-end fault diagnosis method is presented. The original flight data containing disturbance information can be directly input into the fault diagnosis model to obtain the diagnosis result. The fault diagnosis model based on residual shrinkage network integrates soft threshold and attention mechanism, which can realize the fault diagnosis of quadrotor under the influence of external environment interference and UAV fault tolerant control. (2) In order to retain more fault characteristic information, we improved the traditional residual shrinkage network: a new shrinkage function named progressive semi-soft threshold function is proposed to replace the original soft threshold function.The traditional soft threshold will eliminate the effective data features beyond the interference information, and the new progressive semi-soft threshold can retain the continuity of the soft threshold and eliminate the signal distortion caused by the soft threshold, which further improves the diagnostic accuracy of the model. (3) In order to enhance the feature learning ability of the model, a suitable wide convolutional layer is introduced as the first layer of the fault diagnosis model. The wide convolutional layer can effectively extract the short-term features of the original flight data, and further improve the feature extraction capability of the fault diagnosis model. (4) In order to prevent the model from over-fitting, the DropBlock layer is introduced when training the model. By randomly discarding part of the feature blocks, the model can be extracted several times to get the optimal features, which improves the training speed and anti-over-fitting ability of the fault diagnosis model.
The rest of this paper is organized as follows: Section 2 describes the construction process, modeling algorithm and optimization strategy of 1D-WIDRSN model. Section 3 explains the data acquisition, related the model structure determination experiment. Section 4 introduces the evaluation and discussion of model performance. Section 5 presents the conclusion and future work directions.
Overview
To confront the challenge of fault diagnosis caused by minor fault, a fault diagnosis model based on 1D-WIDRSN is proposed. This section introduces the design principles and architecture of the 1D-DRSN in detail. The framework of 1D-WIDRSN model is shown in Figure 1, which consists of a wide convolution layer and multiple residual shrinkage modules.
Deep Residual Shrinkage Network (DRSN) Model
In order to solve the degradation problem of multilayer convolutional neural network model, He et al. [26] proposed a residual network, which is composed of a series of residual blocks. It is an improved convolutional neural network. By introducing cross layer links, the residual network can still learn new features based on the input features, thereby solving the degradation problem caused by the increase of network depth. The residual block is shown as follows: where a l represents the input of the l-th residual module, F represents the residual function. Zhao et al. [27] proposed the concept of deep residual contraction network (DRSN) in 2020. DRSN is an improved ResNet by integrating soft threshold function and attention mechanism into residual network. Its working principle is to automatically find the interference features of the input samples with the help of the attention mechanism, and set them to zero through the soft threshold function, so as to realize the ability of extracting data features from the interference information.
The feature selection process of deep residual shrinkage network can be understood as follows: Firstly, through convolution layer and pooling layer, important information such as fault features are converted to larger absolute values, and irrelevant features corresponding to interference information such as noise are converted to smaller absolute values. Then, the boundary between the two is obtained through the attention mechanism, and the interference information is zeroed by soft thresholding. Finally, only the important information is output. Deep residual shrinkage module (DRSM) as shown in Figure 2. a l is the input of the module. After passes through the first convolution layer, a l+1 is obtained through ReLU function, and then a l+1 is input into the second convolution layer. The second convolution layer constructs a sub grid to obtain the threshold. Then, the output a of the second convolution layer is quantized by taking the absolute value and global average pooling (GAP) to obtain a one-dimensional vector β, β represents the mean parameter, then the characteristics of the channel are mined through two full connection layers, and finally the attention weight z is obtained through the activation function Sigmoid, Each attention weight parameter α ∈ (0, 1) acts on the feature vector of the corresponding feature channel. α and β are multiplied to obtain the threshold τ, so each characteristic channel has an independent threshold. Finally, the obtained threshold τ is used for soft threshold processing to obtain a s , a s is added to the residual term f (a l ) to obtain module output a l+2 . The calculation formula of the specific process is as follows: where, w l+1 and w l+2 is the weight of l + 1 layer and l + 2 layer, b l+1 and b l+2 is the bias of l + 1 layer and l + 2 layer.
Progressive Semi-Soft Threshold
The soft threshold in the depth residual shrinkage network realizes the denoising of the input data by setting the interference information to zero. Soft threshold denoising is a simple and traditional denoising technology. The data processed by the soft threshold function has good continuity, but when |a | > τ, there is always a constant deviation between a s and a , so the soft threshold may eliminate some effective features, cause a certain degree of distortion to the data and reduce the accuracy of diagnosis [28].
In response to the above defects, this paper introduces a progressive semi-soft threshold function. Compared with the soft threshold function, the progressive semi-soft threshold function not only retains the continuity of the soft threshold function, but also eliminates the distortion caused by the soft threshold [29]. The calculation formula for realizing progressive semi-soft threshold is as follows: The progressive semi-soft threshold process is shown in Figure 3. As shown in Figure 3, the progressive semi-soft threshold function is continuous, and there is little constant deviation between the processed signal and the real signal, which retains the effective features in the original data as much as possible.
Wide Convolution Layer-Enhancement of Feature Learning Ability
In this paper, the wide convolution layer is taken as the first layer of the 1D-WIDRSN, that is, the first convolution layer of the model is set as the wide convolution kernels, and the other layers are set as the small convolution kernels. The wide convolution kernels in the first layer can better suppress the interference information, expand the receptive field area, extract short-term features, adaptively learn the important features of fault diagnosis objects and speed up model training. The small convolution kernels of other layers can reduce the network parameters, deepen the number of network layers and suppress overfitting [30,31]. Therefore, the configuration of the first layer wide convolution kernels and the other layers small convolution kernels can better improve the network performance and training speed.
DropBlock Module-Enhancement of Anti-Overfitting Ability
Ghiasi et al. [32] proposed the DropBlock regularization module for convolutional neural network. The traditional Dropout regularization method randomly discards some independent neurons to reduce the number of intermediate features, but the convolution layer will still learn interference features through the interaction between features. Therefore, Ghiasi believes that the effect of applying Dropout method in convolution layer is plain. However, by setting the block size, the DropBlock module will randomly set a block area of the feature to zero, so that the model can learn the features of other areas, so as to realize the regularization of the convolution layer and improve the robustness and anti-overfitting ability of the model. When the area size is 1, the DropBlock method is the traditional Dropout method. In this paper, a DropBlock module is added after the wide convolution layer, which cannot only improve the anti-over-fitting ability, but also improve the generalization ability of the model.
Fault Diagnosis Process
The 1D-WIDRSN model in this paper is introduced for adaptive feature learning, which can extract key features from interference information and accurately identify fault location of quadrotor. The fault diagnosis process of 1D-WIDRSN model is shown in Figure 4, and the specific steps of the fault diagnosis process are as follows: (1) Collect the data of UAV in normal state and fault state. After preprocessing and data enhancement, the original data is randomly divided into training sample set and test sample set, accounting for 70% and 30%, respectively. The training part of the 1D-WIDRSN model will be performed offline due to the need to label data, but the diagnosis part can be performed online. During the actual mission of the quadrotor, the accuracy of 1D-WIDRSN diagnosis model will be affected if environmental factors (such as excessive wind) are enough to change the attitude of the quadrotor. Therefore, in order to ensure the accuracy of the model and avoid increasing online computational cost, this fault diagnosis model is suitable for the flight-test inspection of quadrotor before or after use.
Data Acquisition
In order to verify the fault detection method proposed above, it is necessary to collect actual UAV flight data for verification experiments. This part is aimed at a certain type of quadrotor, by destroying the propellers to reduce the output power of the UAV, so as to obtain the experimental data under different fault conditions. The experimental platform is mainly composed self-assembled quadrotor, computer and several propellers damaged in different degrees.
The UAV used in the experiments is an intelligent inspection quadrotor based on Pixhawk system assembled by our national science key laboratory. As shown in Figure 5, the power system of the quadrotor consists of T-Motor 2216 motor, 20 A four-in-one electronic speed controller, propellers and battery. There are four propulsion units on the quadrotor frame, and two coaxial rotor-motor pairs on each arm. Pixhawk open-source flight control system is adopted in the flight control system. The hardware structure of Pixhawk mainly includes gyroscope, accelerometer, geomagnetic module, barometer and GPS module. The software structure of Pixhawk is mainly divided into 4 layers, the top layer is the application API, and this layer is used by developers. The second layer is the application framework, under which basic flight control nodes can be operated. The third layer is the relevant library layer, which provides functions that interact with system libraries. The last layer is the operating system (Linux and RTOS), which provides hardware drivers.
In this paper, by artificially destroying the propeller in a small area, the output power of the corresponding motor will change, and then the flight attitude of the whole UAV will also change accordingly, thereby simulating the minor failure of the UAV's actuator. The normal propeller and damaged propeller are shown in Figure 6, The length of normal propeller is about 26 cm, and the length of damaged propeller is about 23 cm and its end has been cut off by 3 cm irregularly. Experiments have verified that the failure simulation program has a certain and small impact on the flight stability of the UAV, which belongs to the category of minor failure of UAV. Five different propeller configurations are set up as shown in Table 1, including one group of normal condition and four groups of single propeller faults with the i-th (i = 1,2,3,4) propeller damaged. In the experiment, after a short preparation time on the ground, the aircraft is launched and immediately started its flight with a horizontal rectangular flight pattern. Figure 7 recorded the flight trajectory of the UAV in the healthy state and fault state in the experiments. It can be seen from Figure 7 that when there is a minor fault, it will be difficult to identify only through the flight trajectory curve. However, potential faults like this should not be neglected. Therefore, it is necessary to propose an effective fault detection method. In each group of experiments, the pitch rate, roll rate, yaw rate, pitch angle and roll angle of UAV in level flight are collected and saved. The yaw angle is greatly affected by manual operation, so it is not used as fault information in this paper. In actual flight, affected by external interference factors such as airspace environment and noise, the above parameters alone cannot reflect the failure of the UAV, so the output speeds Out1~Out4 of four motors are introduced. Finally, the values of the above nine parameters are taken as the characteristic elements to characterize the fault information of real UAV.
Tests Propeller Sets Labels
Configuration1 four propellers in good condition 0 Configuration2 The propeller of No.1 motor is damaged 1 Configuration3 The propeller of No.2 motor is damaged 2 Configuration4 The propeller of No.3 motor is damaged 3 Configuration5 The propeller of No.4 motor is damaged 4 In order to ensure the diversity of collected data, this paper conducted 10 flight experiments for each group based on the above five propellers configuration modes, finally collected about 500,000 pieces of real flight data and labeled the experimental data.
Data Preparation and Data Augmentation
After obtaining the original data, remove the first 5 s and the last 5 s of the collected data according to the nature of the sensor, so as to eliminate the influence of abnormal data during takeoff and landing. The final intercepted data is used as the input of sample data.
The data of the UAV at a certain time can only represent the instantaneous state of the UAV, not its health state. Therefore, in order to further obtain the sample data, the data in this paper is divided into short data segments. According to the previous experience [33], the length of the divided data segments is set to 80, that is, the step size of each sample segment is 80, The collected data is expressed as Data = [(x 1 , y 1 ), · · · , (x i , y i )] T , Data is the segmented data set, x i is the single sample data, each sample contains 80 pieces of sampling data, and y i is the fault category label of the sample data.
The best way to enhance the generalization ability of deep neural networks is to use more training samples [34]. When dividing sample data, in order to extract more effective features from original sequence data and obtain as many sample data as possible, this paper uses the equally spaced sliding window to overlap sampling data. The calculation formula for realizing overlapping sampling is as follows: where, N is the number of samples after overlapping sampling; L 1 is the length of original data; L 2 is the length of a single sample, namely the width of the window; S is the moving step of the sliding window, namely the sampling interval; in this paper, L 2 = 80, S = 30. Finally, in order to unify the data dimension, the data is normalized. Among them, the positive and negative of the attitude-related data represents the direction of the UAV, so the attitude-related data is normalized to the interval [−1, 1], and the motor speed data is normalized to the interval [0, 1].
Model Determination Experiment of Quadrotor Fault Diagnosis Based on 1D-WIDRSN
In this paper, 1D-WIDRSN quadrotor fault diagnosis algorithm based on progressive semi-soft threshold is constructed. The input of the algorithm is the nine characteristic parameters of the UAV, and the fault type of the UAV is output after calculation.
We uses the Pytorch framework to build the designed 1D-WIDRSN fault diagnosis model in the Python3.8 environment. To verify the effectiveness of the model, a series of experiments are carried out on the computer.
In the training process, the batch-size is 128, the epochs is 80, the learning rate of Adam algorithm is 0.001, and the cross entropy loss function was used as the objective function to conduct model training. The input layer of the model is the data samples mentioned above, and the input size is (None,80,9). None is the number of input samples, 80 is the time step of the samples, and 9 is the nine characteristic variables of UAV.
How to find suitable parameters for the model is very important. In this paper, a series of experiments are carried out to determine the values of relevant parameters of the model. Each group of experiments is carried out for 5 times, and the average accuracy of 5 experiments is taken as the evaluation standard of the model [35]. The parameters considered in the experiment are: the size of the first wide convolution kernels, the number of the wide convolution kernels, the number of residual modules, and the block size discarded by the DropBlock layer.
The wide convolution kernels can better extract short-time fault features for time series data, thereby enhancing the model's anti-noise performance. For the quadrotor sample data in this paper, the convolution kernels used should be n × 9, which n is the size of the convolution kernels in the time domain, and 9 is the number of feature types. On the issue of convolution kernels selection, a smaller convolution kernels may not be able to fully express the characteristics of the input data, while a larger convolution kernels may increase the complexity of the model, and the number of convolution kernels will also affect the experiment results.
To determine the size of the wide convolution kernels, set the experiment as follows: the number of convolution kernels at the first layer is set to 24, the size is set to 8, 12, 24, 32, 48 the number of residual shrinkage modules is set to 1, and the size of the convolution kernels of the residual shrinkage modules is set to 3. The final parameter value is determined by analyzing the influence of convolution kernels size on model performance in the wide convolution layer. As show in Table 2, the accuracy increases with the increase of the size of convolution kernels. When the size of wide convolution kernels reaches 32, the accuracy tends to stabilize, so the size of wide convolution kernels is set to 32. After determining the size of the convolution kernels of the wide convolution layer, further experiments are carried out to select the number of wide convolution kernels. The size of the wide convolution kernels is set to 32, and the number is set to 16, 24, 32, 48, and 64. As show in Table 3, the accuracy increases with the number of convolution kernels. When the number of wide convolution kernels reaches 32, the accuracy tends to stabilize, so the number of wide convolution kernels is set to 32. Further experiments were carried out on the number of residual shrinkage modules (RSMs), and the number of RSMs are set to 1, 2, 3, 4, 5. As show in Table 4, when the number of RBUs is greater than 4, the accuracy becomes stable. However, the increase in the number of RSMs will reduce the network training speed, so the number of RSMs in this model is set to 3. Adding DropBlock layer after wide convolution layer, we need to set the block size in DropBlock. As show in Table 5, the model has the highest accuracy when the block size is 5, so the size of block size in DropBlock is set to 5. In the network model designed in this paper, the input sample will first pass through a wide convolution layer. The convolution kernels size of the wide convolution layer is 32 × 9, and the number of convolution kernels is 32. After the wide convolution layer, the DroopBlock technology is adopted to avoid overfitting problem. After the ReLu processing, it enters three residual shrinkage modules. Each residual shrinkage module has two convolution layers. In order to increase the receptive field and reduce parameters at the same time, the convolution kernels size of each convolution layer is 3, and the number of convolution kernels in each module is twice that of the previous module. After the average pooling layer processing, the output features are expanded into one-dimensional vectors and are input into the full connection layer. Finally, the fault diagnosis results are obtained through the Softmax layer. The five fault types correspond to the five outputs of the Softmax layer, respectively.
The specific structural parameters of the quadrotor fault diagnosis model are shown in Table 6.
Experiment Results
After determining the model parameters, input the data samples into the fault diagnosis model. The training process is visualized as shown in Figure 8, which records the curve of loss and accuracy of training set and test set. After about 20 iterations, the accuracy rate of the model gradually stabilizes and maintains at about 98%.
Evaluation of Model Feature Extraction Capability
In order to verify the feature extraction ability of the algorithm, t-distributed stochastic neighbor embedding (t-SNE) [36] can be used to visually analyze the original flight data and the data after model calculation. T-SNE is a popular dimensionality reduction algorithm. This paper uses the t-SNE algorithm to map the data to a two-dimensional scatter graph to evaluate the effectiveness of features. The visualization results are shown in Figure 9. It can be seen that most of the failure categories of the unprocessed raw flight data are overlapping and disorderly. After model feature extraction, the four categories can be almost completely separated. Therefore, the model has strong ability of fault feature extraction.
Diagnostic Performance of Different Models
In order to verify the effectiveness of the algorithm, this paper compares 1D-WIDRSN method with other fault diagnosis methods. In the comparative experiment, DNN, CNN, Vgg [37], Resnet, 1D-WDRSN are selected, respectively. As show in Table 7, compared with other fault diagnosis models, 1D-WIDRSN has higher accuracy.
Then, this section uses the Confusion Matrix to measure the accuracy of the fault diagnosis model. The Confusion Matrix can evaluate the performance of the classification model by counting the number of correct and wrong classifications. Figure 10 is the confusion matrix of different fault diagnosis models, in which the abscissa is the diagnosis result of the diagnosis model and the ordinate is the actual fault category label. As show in Figure 10, compared with other fault diagnosis models, 1D-WIDRSN can identify various faults more effectively. From the above results, it can be concluded that compared with other comparison methods, the 1D-WIDRSN proposed in this paper has a better effect. The reason can be explained as follows: Firstly, the residual shrinkage module of 1D-WIDRSN uses the induction mechanism to find the interference information of the input sample, and uses the progressive semi-soft threshold function to set the interference information to zero, so as to reduce the influence of the interference information on the fault diagnosis accuracy. Secondly, the wide convolutional layer of the 1D-WIDRSN model can effectively extract the short-term features of the original data, which helps the model find the key information of different fault categories. Finally, the DroupBlock algorithm is introduced to avoid the dependence of the model on specific neurons and prevent the model from overfitting by randomly discarding some characteristic elements.
Conclusions
In engineering applications, inevitable environmental interference information and UAV own fault-tolerant control will seriously affect the accuracy of fault diagnosis when the early fault of quadrotor actuator occurs. We proposed 1D-WIDRSN fault diagnosis algorithm, which can adaptively extract essential features from the raw flight data, developed to realize end-to-end fault diagnosis. This diagnostic algorithm is used in the flight-test inspection of quadrotor before or after use.
1D-WIDRSN replaces soft threshold in DRSN with progressive semi-soft threshold function, which preserves as many effective features as possible in the original data. As the shrinking layer of residual shrinkage module, the progressive semi-soft threshold can reduce the influence of interference information in the original data on fault diagnosis. In addition, the wide convolution layer can better extract the short-time features of faults and further reduce the influence of interference information. The DroupBlock layer uses the method of randomly discarding feature blocks to suppress over-fitting of the model. Finally, a certain type of quadrotor is selected for the experiment, and the speed, angular velocity, and angular acceleration are used as the experimental data to complete the training and testing of the model, so as to realize the early failure detection and location of the actuator of the quadrotor. Experimental results show that the fault diagnosis method based on 1D-WIDRSN can effectively identify the quadrotor propeller faults under interference, and has good sensitivity in the early stage of minor faults. Compared with other fault diagnosis methods, this method has better performance. Therefore, the quadrotor fault diagnosis based on 1D-WIDRSN is a good solution for the field of UAV health detection.
In the future research, we will focus on the compound fault diagnosis of quadrotor aircraft to improve the practicality of the 1D-WDRSN fault diagnosis algorithm. Some issues concerning the implementation of the 1D-WDRSN algorithm in the onboard controller of UAV should also be addressed to improve the effectiveness of the algorithm in real-time fault detection during flight. | 8,335.6 | 2021-11-08T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Tristetraprolin Posttranscriptionally Downregulates TRAIL Death Receptors
Tumor necrosis factor (TNF)-related apoptosis-inducing ligand (TRAIL) has attracted attention as a potential candidate for cancer therapy. However, many primary cancers are resistant to TRAIL, even when combined with standard chemotherapy. The mechanism of TRAIL resistance in cancer cells has not been fully elucidated. The TRAIL death receptor (DR) 3′-untranslated region (3′-UTR) is reported to contain AU-rich elements (AREs) that are important for regulating DR mRNA stability. However, the mechanisms by which DR mRNA stability is determined by its 3′-UTR are unknown. We demonstrate that tristetraprolin (TTP), an ARE-binding protein, has a critical function of regulating DR mRNA stability. DR4 mRNA contains three AREs and DR5 mRNA contains four AREs in 3′-UTR. TTP bound to all three AREs in DR4 and ARE3 in DR5 and enhanced decay of DR4/5 mRNA. TTP overexpression in colon cancer cells changed the TRAIL-sensitive cancer cells to TRAIL-resistant cells, and down-regulation of TTP increased TRAIL sensitivity via DR4/5 expression. Therefore, this study provides a molecular mechanism for enhanced levels of TRAIL DRs in cancer cells and a biological basis for posttranscriptional modification of TRAIL DRs. In addition, TTP status might be a biomarker for predicting TRAIL response when a TRAIL-based treatment is used for cancer.
Introduction
Tumor necrosis factor (TNF)-related apoptosis-inducing ligand (TRAIL), which was independently identified in both 1995 and 1996 and is also known as Apo-2 ligand (Apo2L), is a member of the TNF cytokine superfamily [1,2]. By binding to death receptor (DR) 4 or DR5, TRAIL induces tumor cell apoptosis without causing toxicity in normal cells. The cancer-specific action of TRAIL has attracted attention as a potential candidate for cancer therapy. Extensive preclinical studies conducted on recombinant human TRAIL (rhTRAIL) and TRAIL receptor agonists (TRAs) against TRAIL-receptors
Quantitative Real-Time PCR (qRT-PCR) Analysis for RNA Kinetics
For RNA kinetic analysis, we used actinomycin D and assessed DR4 and DR5 mRNA expression by quantitative PCR. Total RNA was isolated using a PureLink™ RNA Mini Kit (Thermo Fisher Scientific, Waltham, MA, USA), and cDNA was subsequently synthesized using a first-strand cDNA synthesis kit by reverse transcription-polymerase chain reaction (iNtRON Biotechnology, Seoul, Korea). SYBR ® Green master mix (Applied Biosystems, Foster City, CA, USA) was used for qRT-PCR with a PRISM ® 7500 sequence detection system (Applied Biosystems). All reactions were performed in triplicate in 96-well plates, and the mean values were used to calculate mRNA expression. The primer sequences were as follows: DR4 forward, 5 -AGC CTG TCA TCT GTG GGA TT-3 and reverse, 5 -CTC AAG TAC ACA CTC CAA AG-3 ; DR5 forward, 5 -CTT TGT GGC CTT CTT TGAAG-3 and reverse, 5 -CCA CAC AGT TGC TCC ACAT-3 ; GAPDH forward, 5 -ACA TCA AGA AGG TGG TGAAG-3 and reverse, 5 -CTG TTG CTG TAG CCA AATTC-3 .
Plasmid, siRNAs, Transfection, and Dual-Luciferase Assay
SW480 cells that overexpressed human TTP were generated using the pcDNA6/V5 vector (Invitrogen, Carlsbad, CA, USA). Full-length human cDNA of TTP was cloned by RT-PCR from the RNA of KM12C cells using forward primer 5 -CCG TGA ATT CAT GGA TCT GAC TGC CAT-3 and the reverse primer 5 -CAC TCT CGA GCT CAG AAA CAG AGA TGC-3 . The product was subcloned into the pcDNA6/V5 vector. Roughly 1.5 × 10 7 cells were electrophorated with 20 µg of pcDNA6/V5-TTP at 500 V, 975 lF with a Gene Pulser electroporator II (Bio-Rad Laboratories, Hercules, CA, USA). After transfection, SW480/ pcDNA6/V5-TTP cells stably transfected with human TTP were selected by adding Blasticidin (10 ug of Blasticidin/mL; Invitrogen) 3 days after transfection. Stable polyclonal transfectants were maintained in bulk culture without further clonal purification. Stable polyclonal transfectants were tested for overexpression of human TTP by RT-PCR and Western blots using anti-human TTP polyclonal antibody (ab33058; Abcam, Cambridge, UK). A control cell line, SW480/pcDNA6/V5, was generated by transfection with the pcDNA6/V5 vector. KM12C was transfected with TTP-siRNA (sc-36760; Santa Cruz, Dallas, TX, USA) or control siRNA-A (sc-37007; Santa Cruz) using LipofectamineTM RNAiMAX (Invitrogen). Cells were seeded in six-well plates at a concentration of 3 × 105 cells/mL. The concentration of siRNA was 45 nM. Cells were harvested 24 h after transfection. Typically, cells were analyzed for loss of TTP mRNA and protein expression 24 h after transfection using either RT-PCR or immunoblotting, respectively. A variety of deletion mutants of the DR4 and DR5 3 -UTR were PCR-amplified from cDNA of SW480 cells using Taq polymerase and primer sets as follows: DR4 Frag-ARE-1, CCG CTC GAG TCC AAT AAG TCC CAT TTC ATA and ATA AGA ATG CGG CCG CAC TCA AGG TAA TAA ATT. DR5 Frag-ARE-1-4, CCG CTC GAG CCT AAT GTA AAT GCT and ATA AGA ATG CGG CCG CAA TTT GGTC; Frag-ARE-1, CCG CTC GAG CCT AAT GTA AAT GCT and ATA AGA ATG CGG CCG CGT TCA TATC; Frag-ARE-1/2, CCG CTC GAG CCT AAT GTA AAT GCT and ATA AGA ATG CGG CCG CCT CAT ATGT; Frag-ARE-1-3, CCG CTC GAG CCT AAT GTA AAT GCT and ATA AGA ATG CGG CCG CCC AAA AACT. PCR products were inserted into the XhoI/NotI sites of a psiCHECK2 Renilla/firefly Dual-Luciferase expression vector (Promega, Madison, WI, USA). Mutant oligonucleotides in which AUUUA pentamers were substituted with AGCA were also synthesized. The oligonucleotides were ligated into the XhoI/NotI sites of the psiCHECK2 vector. For luciferase assays, SW480 cells were co-transfected with various psiCHECK-DR4 and DR5 3 -UTR constructs and pcDNA6/V5-TTP using TurboFectTM in vitro transfection reagent. Transfected cells were lysed with lysis buffer (Promega) and mixed with luciferase assay reagent (Promega), and the chemiluminescent signal was measured in a Wallac Victor 1420 multilabel counter. Firefly luciferase was normalized to Renilla luciferase in each sample. All luciferase assays reported here represent at least three independent experiments, each consisting of three wells per transfection.
Apoptosis by Annexin V/PI Analysis
Human colon adenocarcinoma cells were seeded on a 60 mm dish, incubated with TRAIL (30 ng/mL) for 24 h, washed twice with ice-cold PBS (pH 7.0), and incubated with fresh medium for 5 days. Senescence-induced cells were washed twice with ice-cold PBS (pH 7.0) and then resuspended in binding Buffer (500 µL). FITC-Annexin V 5 µL was added to 5 µL PI followed by incubation for 15 min at RT in the dark. Samples were analyzed using a fluorescence-activated flow cytometer (FACScan; Becton Dickinson, Franklin Lakes, NJ, USA).
In Vivo Antitumor Activity
Either SW480/pcDNA6/V5 or SW480/pcDNA6/V5-TTP (2 × 10 7 cells) were injected into the flanks of 6-week-old nude (nu/nu) mice (Orient Bio Inc., Seongnam, Korea). Prior to treatment with TRAIL, tumor size was measured two to three times per week until the volume reached approximately 200 mm 3 . Tumor volume was calculated as W 2 × L × 0.52, where L is the largest diameter and W is the diameter perpendicular to L. After establishing tumor xenografts, mice were randomized into four groups of five mice per group. Mice were fed ad libitum and maintained in environments with a controlled temperature of 22-24 • C and 12 h light and dark cycles. Mice in each treatment group were treated with TRAIL at a dose of 200 ng/kg by intra-tumoral injection twice per week for two weeks. All animal experimental procedures were approved by the Institutional Animal Care and Use Committee of the University of Ulsan Laboratory Animal Research Center. All animal experiments were performed in accordance with Institutional Animal Care and Use Committee (IACUC) guidelines. Ethical code number: 0118-06 (C1-0), date of approval: 6 December 2017.
Statistical Analysis
All statistical analyses and calculations were performed using Microsoft Excel spreadsheets and GraphPad Prism v.5 (GraphPad Software, San Diego, CA, USA). Group differences were determined with Student's t-test or Mann-Whitney U test. Data are expressed as mean and standard deviation. All statistical tests were two-sided, and p values less than 0.05 were considered statistically significant.
DR4/5 Expression is Inversely Correlated with TTP Expression in Human Colon Cancer Cell Lines
To determine whether DR4/5 expression is inversely correlated with endogenous TTP expression, the expression of DR4/5 and TTP was analyzed in four human colon cancer cell lines: HT29, KM12C, HCT116, and SW480. Cell lines with high TTP expression levels (HT29 and KM12C) exhibited low expression levels of DR4/5, and those with low TTP expression levels (HCT116 and SW480) showed relatively high DR4/5 expression levels ( Figure 1A). These results suggest an inverse correlation between TTP expression and DR4/5 expression in human colon cancer cell lines. To determine differences in TRAIL sensitivity among cell lines, all cell lines were treated with TRAIL. An MTS assay showed that KM12C and HT29 cells were significantly more viable than HCT116 and SW480 cells after exposure to TRAIL ( Figure 1B). Cells 2020, 9, x; doi: FOR PEER REVIEW www.mdpi.com/journal/cells treatment with control siRNA (scRNA) did not decrease the expression level of endogenous TTP and did not change DR4/5 expression on qRT-PCR and Western blot assay. An MTS assay in KM12C cells treated with siTTP showed increased sensitivity to TRAIL-mediated apoptosis ( Figure 1D). Annexin V-FITC/PI staining, performed to confirm the cell viability measured by MTS assay, demonstrated that late apoptosis and cell death were significantly higher (12.55%, p < 0.01) in KM12C with siTTP cells than in KM12C with scRNA cells ( Figure 1E). We next examined whether overexpression of TTP reduces DR4/5 expression. For this purpose, SW480 cells with low TTP expression and high DR4/5 expression were selected. A TTP expression vector (pcDNA6/V5-TTP) was transfected into SW480 cells. As a negative control, SW480 cells were transiently transfected with the pcDNA6/V5 empty vector. Overexpression of TTP in pcDNA6/V5-TTP transfected SW480 cells was confirmed by qRT-PCR and Western blot analysis. Expression of DR4/5 was significantly reduced in pcDNA6/V5-TTP transfected SW480 cells compared to empty vector-transfected cells (p < 0.01) (Figure 2A). An MTS assay and annexin V-FITC/PI staining assay in TTP-overexpressed SW480 cells showed reduced sensitivity to TRAIL-mediated apoptosis (16.37% vs. 6.38%, p < 0.01) ( Figure 2B,C). In a 20-day in vivo study, the average tumor volume in mice with TTP-overexpressing SW480 cells increased significantly under TRAIL treatment. TTP overexpression To test whether down-regulation of TTP affects DR4/5 expression, siRNA against TTP was used to reduce the expression level of TTP in KM12C cells. Down-regulation of TTP by treatment with siRNA significantly increased the expression level of DR4/5 (p < 0.01) ( Figure 1C). However, treatment with control siRNA (scRNA) did not decrease the expression level of endogenous TTP and did not change DR4/5 expression on qRT-PCR and Western blot assay. An MTS assay in KM12C cells treated with siTTP showed increased sensitivity to TRAIL-mediated apoptosis ( Figure 1D). Annexin V-FITC/PI staining, performed to confirm the cell viability measured by MTS assay, demonstrated that late apoptosis and cell death were significantly higher (12.55%, p < 0.01) in KM12C with siTTP cells than in KM12C with scRNA cells ( Figure 1E).
We next examined whether overexpression of TTP reduces DR4/5 expression. For this purpose, SW480 cells with low TTP expression and high DR4/5 expression were selected. A TTP expression vector (pcDNA6/V5-TTP) was transfected into SW480 cells. As a negative control, SW480 cells were transiently transfected with the pcDNA6/V5 empty vector. Overexpression of TTP in pcDNA6/V5-TTP transfected SW480 cells was confirmed by qRT-PCR and Western blot analysis. Expression of DR4/5 was significantly reduced in pcDNA6/V5-TTP transfected SW480 cells compared to empty vector-transfected cells (p < 0.01) (Figure 2A). An MTS assay and annexin V-FITC/PI staining assay in TTP-overexpressed Cells 2020, 9, 1851 7 of 14 SW480 cells showed reduced sensitivity to TRAIL-mediated apoptosis (16.37% vs. 6.38%, p < 0.01) ( Figure 2B,C). In a 20-day in vivo study, the average tumor volume in mice with TTP-overexpressing SW480 cells increased significantly under TRAIL treatment. TTP overexpression changed TRAIL sensitive SW480 cells to TRAIL-resistant SW480 cells ( Figure 2D). Taken together, these results indicate that the TTP expression mediates TRAIL sensitivity via modulation of DR4/5 expression.
Cells 2020, 9, x FOR PEER REVIEW 7 of 14 changed TRAIL sensitive SW480 cells to TRAIL-resistant SW480 cells ( Figure 2D). Taken together, these results indicate that the TTP expression mediates TRAIL sensitivity via modulation of DR4/5 expression.
TTP Destabilized DR4/5 mRNA
To determine whether decreased expression of DR4/5 resulted from changes in the stability of DR4/5 mRNA, the half-life of this mRNA was measured by quantitative real-time PCR in pcDNA6/V5-TTP transfected SW480 cells and empty vector-transfected cells. The half-life of DR4/5 mRNA in empty vector-transfected cells was more than 2 h. However, in TTP-overexpressed SW480 cells, the half-life of DR4/5 mRNA was less than 1.5 h after actinomycin D treatment. These results indicate that increased expression of TTP contributes to decreased DR 4/5 levels through the destabilization of DR4/5 mRNA ( Figure 3A).
TTP protein regulates mRNA stability by binding to AREs within the mRNA 3′-UTR [17]. Analysis of the full base pair of DR4/5 3′-UTR revealed the presence of three AUUUA ARE motifs in DR4 and four ARE motifs in DR5 ( Figure 3B). A luciferase reporter gene linked to the full DR4/5 3′-UTR containing all three AREs was used in the psiCHECK plasmid to determine whether
TTP Destabilized DR4/5 mRNA
To determine whether decreased expression of DR4/5 resulted from changes in the stability of DR4/5 mRNA, the half-life of this mRNA was measured by quantitative real-time PCR in pcDNA6/V5-TTP transfected SW480 cells and empty vector-transfected cells. The half-life of DR4/5 mRNA in empty vector-transfected cells was more than 2 h. However, in TTP-overexpressed SW480 cells, the half-life of DR4/5 mRNA was less than 1.5 h after actinomycin D treatment. These results indicate that increased expression of TTP contributes to decreased DR 4/5 levels through the destabilization of DR4/5 mRNA ( Figure 3A). TTP protein regulates mRNA stability by binding to AREs within the mRNA 3 -UTR [17]. Analysis of the full base pair of DR4/5 3 -UTR revealed the presence of three AUUUA ARE motifs in DR4 and four ARE motifs in DR5 ( Figure 3B). A luciferase reporter gene linked to the full DR4/5 3 -UTR containing all three AREs was used in the psiCHECK plasmid to determine whether downregulation of DR4/5 expression by TTP was mediated through the DR4/5 mRNA 3 -UTR. SW480 cells were co-transfected with 500 ng of psiCHECK luciferase reporter construct containing full AREs in DR4/5 and pcDNA6/V5-TTP or empty vector pcDNA6/V5. When SW-480 cells were transfected with a plasmid overexpressing TTP, luciferase activity from the full DR4/5 3 -UTR was significantly inhibited ( Figure 3C).
TTP Binds to All Three ARE in DR4 and Only the 3rd ARE in DR5 mRNA 3 -UTR
The next goal was to determine regions within the DR4/5 3 -UTR that are important for the TTP inhibitory effect. Luciferase genes linked to various deletion mutants of the DR5 3 -UTR were used ( Figure 4A, left panel). Whereas TTP decreased the luciferase activity of the luciferase reporter gene cloned DR5-ARE-1-4 (containing all four AREs) by 65.3%, DR5 3 -UTR fragments DR5-ARE-1/2 (containing ARE1 and ARE2) and DR5-ARE-1 (containing ARE1) abrogated the inhibitory effect of TTP on luciferase activity (2.5% and 7.0% inhibition, respectively) ( Figure 4A, right panel). However, the DR5 3 -UTR fragments DR5-ARE-1-3 (containing ARE1, ARE2, and ARE3) responded similarly to TTP compared with the whole DR5-ARE-1-4 construct, suggesting that ARE3 within the DR5 3 -UTR is responsible for the inhibitory effect of TTP.
Because of the close location of all three AREs in DR4, single mutants of each ARE motif were prepared to determine which ARE is responsible for the response to TTP ( Figure 4B, left panel). Each single mutant of DR4-ARE showed a similar TTP inhibitory effect on reporter gene activity compared with DR4-ARE-full ( Figure 4B, right panel), suggesting that all three AREs are involved in TTP binding and its inhibitory activity. A single ARE3 mutant in DR5 was confirmed to prevent the TTP inhibitory effect ( Figure 4C). Although these results were obtained using ectopically overexpressed TTP protein, the significance of DR4 AREs and DR5 ARE3 for TTP binding was demonstrated.
To demonstrate the association between endogenous TTP and ARE in DR4/5 3 -UTR, RNA EMSA was conducted using a biotinylated RNA probe containing wild-type or mutant ARE in DR4/5. Cytoplasmic extracts were prepared from SW480 cells transfected with pcDNA6/V5-TTP to overexpress TTP and were incubated with the biotinylated RNA probe containing wild-type or mutant ARE3 in DR4/5. When the wild-type DR4 ARE3 probe was mixed with cytoplasmic extracts from TTP-overexpressed SW480, a dominant probe-protein complex was observed ( Figure 4D, left panel). However, the mutant DR4 ARE3 probe failed to form this complex. The formation of the DR4 ARE3 probe-protein complex was reduced by preincubation of the reaction mixture with anti-TTP antibody but not with the control antibody. This result was also demonstrated in DR5, where the probe-protein complex was reduced compared to DR4. Collectively, these data strongly suggest that the expression of DR4/5 occurs through direct interaction of TTP with ARE3 of DR4/5 mRNA. Mapping of sequence in DR5 and DR4 mRNA 3′-UTR required for TTP inhibition of luciferase activity. SW480 cells were co-transfected with 500 ng of psiCHECK luciferase reporter construct containing fragmented or full AREs in DR4/5 and pcDNA6/V5-TTP or empty vector pcDNA6/V5. TTP-induced inhibition of luciferase activity observed with each construct was compared to that obtained with empty vector pcDNA6/V5. Cells were harvested, and luciferase activity was normalized to firefly activity. Luciferase values obtained from cells transfected only with luciferase construct full AREs were set to 1. Results represent the mean ± SD of three independent experiments (* p < 0.05; *** p < 0.001). ORF, open reading frame; ns, not significant. (D) RNA EMSA; RNA EMSA was performed by mixing cytoplasmic extracts containing 4 μg of total protein from pcDNA6/V5-TTP-transfected SW480 cells with 20 fmol of biotinylated wild-type (wt) or mutant (mut) probe. Control antibody or anti-TTP was added to reaction mixtures. Binding reactions were then separated by electrophoresis on 5% polyacrylamide gel under nondenaturing conditions. TTP indicate the position of the TTP-containing band. Mapping of sequence in DR5 and DR4 mRNA 3 -UTR required for TTP inhibition of luciferase activity. SW480 cells were co-transfected with 500 ng of psiCHECK luciferase reporter construct containing fragmented or full AREs in DR4/5 and pcDNA6/V5-TTP or empty vector pcDNA6/V5. TTP-induced inhibition of luciferase activity observed with each construct was compared to that obtained with empty vector pcDNA6/V5. Cells were harvested, and luciferase activity was normalized to firefly activity. Luciferase values obtained from cells transfected only with luciferase construct full AREs were set to 1. Results represent the mean ± SD of three independent experiments (* p < 0.05; *** p < 0.001). ORF, open reading frame; ns, not significant. (D) RNA EMSA; RNA EMSA was performed by mixing cytoplasmic extracts containing 4 µg of total protein from pcDNA6/V5-TTP-transfected SW480 cells with 20 fmol of biotinylated wild-type (wt) or mutant (mut) probe. Control antibody or anti-TTP was added to reaction mixtures. Binding reactions were then separated by electrophoresis on 5% polyacrylamide gel under nondenaturing conditions. TTP indicate the position of the TTP-containing band.
Discussion
The role of TTP as a key factor in posttranscriptional gene regulation has been established, in malignant tumors, TTP participates extensively in gene regulatory networks for tumor suppression, including oncogenes and cancer-related cytokines [19]. In this study, we focused TTP on DR expression and demonstrated TTP bound to the ARE motif of DR4/5 mRNA and enhanced the decay of DR4/5 transcripts. An inverse correlation was demonstrated between TTP and DR expression. TTP overexpression in cancer cells changed TRAIL-sensitive cancer cells to TRAIL-resistant cells, and down-regulation of TTP increased TRAIL sensitivity via restoration of DR4/5 expression. Our results suggest that TTP is a key negative regulator of DR4/5 expression and inhibition of TTP led to increased DR4/5 levels in cancer cells. These results, combined with the finding that a wide variety of cancer cells generally express high levels of cytoplasmic DRs, and TTP expression is significantly suppressed in many cancer cells, might be strong evidence for the dynamic expression of DRs [20][21][22].
Dynamic expression of TRAIL DRs is one of the most widely investigated mechanisms for TRAIL-based therapy resistance because lack of surface DRs is sufficient to render cancer cells resistant to TRAIL-induced apoptosis regardless of the status of other apoptosis signaling mechanisms. However, in functional aspects of DRs until now, two issues need more investigation.
First, although many transcriptional factors and posttranslational modulations have been revealed to involve in DR4/5 expression, mRNA expression of DRs does not necessarily reflect their functional protein expression, and there is no correlation between total receptor protein expression levels and the sensitivity of tumors to TRAIL-based treatment [14]. Previous studies have demonstrated a variety of transcriptional factors such as p53, CHOP, NF-kB, FOXO3a, and AP1, as well as posttranslational regulations, which include protein glycosylation, trafficking, endocytosis processes, and autophagy, have been revealed to be associated with dynamic expression of DRs, [14,23,24]. Our results, the first report as far as we know, suggest a posttranscriptional modification of TTP may be another mechanism for modulation DR expression.
Second, besides their canonical locations in the plasma membrane and in intracellular membranes of the secretory pathway as well as endosomes and lysosome, the biologic relevance of noncanonical intracellular compartmentalization of DRs needs to be further defined. Previous numerous studies to validate the highly expressed cytoplasmic DR as an independent prognostic marker showed conflicting results in various cancers [25][26][27]. After a connection of the nuclear DRs with an apoptosis resistance became to know [28], DR expression in the nucleus or the cytoplasm was demonstrated to be associated with an unfavorable prognosis, therefore, it was suggested that DR-based risk stratification should be interpreted on their intracellular compartmentalization with consideration of co-expression of other TRAIL receptors in addition to DR4/5 [27,29]. Additional studies showed that DRs in noncanonical intracellular locations are likely to neither contribute to canonical apoptosis signaling nor to non-apoptotic signal transduction regardless of whether they are soluble within cytosol, trapped within the trans-Golgi network, in autophagosomes or in the nucleus. More recently, nuclear DR5 was discovered to result in increased levels of malignancy-promoting factors HMGA2 and Lin28B and enhanced tumor cell proliferation in vitro and in vivo [30]. Similarly, cytoplasmic DR4/5 was revealed to induce cell death in response to unresolved ER stress [31,32]. With regard to TTP in this specific function of noncanonical location of DRs, at what stage occur remains unknown, TTP may also act on DRs mRNA that are involved in these pathways because TTP regulate mRNA targets at various stages during carcinogenesis and modulate tumor cell apoptosis by directly regulating the apoptotic mediators within both intrinsic and extrinsic pathways [33].
In this study, the analysis of the full base pair of DR4/5 mRNA 3 -UTR revealed the presence of three ARE motifs in DR4 and four ARE motifs in DR5. Analyses using mutant ARE substrates indicate that that TTP binds to all three ARE in DR4, and only the 3rd ARE in DR5 mRNA 3 -UTR. This affinity difference may be originated from the fact that AREs are grouped into three classes based on the number and distribution of AUUUA pentamers, and that TTP is reported to have a different binding affinity for the three classes. Thermodynamic basis permitting high-affinity RNA recognition was established in RNA substrate specificity by the RNA-binding domain of TTP [34]. Various other explanations, including prior occupancy of the AREs by protective proteins, were also suggested for the mechanisms of gene and cell-type specificity of mRNA decay [35].
As induced DRs can easily bind to TRAIL to induce apoptosis, there is considerable interest in increasing DR expression with clinically used or developing anticancer compounds. TTP may be another novel molecular target for therapeutic intervention that improves the sensitivity of TRAIL-induced apoptosis by increasing expression of TRAIL or its death receptors. However, the safety of altered TTP concentration must be carefully considered for clinical applications [36]. Because tumor necrosis factor (TNF) alpha mRNA stability was first established TTP target, TRAIL is the member of TNF superfamily, and TRAIL also had AREs in its mRNA 3 -UTR, additional work will be required for the role of TTP in the expression of TRAIL as well as other death receptors besides DR4/5.
In conclusion, we demonstrated that TTP is important for the posttranscriptional regulation of DR4/5 gene expression. As a result, TTP-induced downregulation of death receptors led to decreased TRAIL-induced apoptosis. This study provides a molecular mechanism for the enhanced levels of TRAIL DRs in cancer cells. TTP-mediated enhancement of DR mRNA degradation expands our understanding of the regulation of TRAIL DR expression in cancer cells. Additional work is required to obtain an in-depth understanding of TTP in cancer biology, especially in the regulation of TRAIL and prior to the clinical use of TRAIL as a new therapeutic agent for cancer treatment. However, this study provides a biological basis for post-transcriptional modification of TRAIL DRs and may provide a novel strategy for predicting and restoring cancer cell sensitivity to TRAIL-induced apoptosis. In addition, TTP status may be a biomarker for predicting TRAIL response when a TRAIL-based cancer treatment is used.
Conflicts of Interest:
The authors declare no conflicts of interest. | 5,838 | 2020-08-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Predictors of compulsive cyberporn use: A machine learning analysis
Highlights: • Limited research exists on factors predicting or related to CCU.• According to the subjects' CCU scores, 21.96% showed signs of CCU.• ML analysis identified the most important determinants of CCU scores.• The most important predictor is the users’ strength of craving for pornography experiences.
People who engage in cyberporn may have a range of profiles.Adults, teens, men, and women may adopt this activity (De Alarcón et al., 2019;Efrati & Amichai-Hamburger, 2019;Emmers-Sommer, 2018;Levin et al., 2012).The mean age of Pornhub users in 2022 was 37 years, 18-to 24-year-olds being the most prevalent.The United States and the United Kingdom have the most Pornhub site users (Pornhub, 2019).
The literature shows inconsistent conclusions about CCU predictors and related factors.Numerous studies have been conducted among people who are not specifically known to be cyberporn users.Other studies collected data on a variety of cybersex activities, but not specifically on cyberporn or compulsive use (e.g.Floyd & Grubbs, 2022).In addition, different variables are almost never compared within a single sample.More studies on a wide range of variables and a large sample of participants are needed to understand CCU.Even though, some studies advanced associations between compulsive cybersex and avoidant attachment (Efrati et al., 2021;Varfi et al., 2019), and between some media formats (especially violent pornography and general pornography) and rape myth acceptance (Hedrick, 2021).The importance of assessing this later association is suggested by the frequency of coercion related scenes among porn movies (Bridges et al., 2010;Carrotte et al., 2020) as well as by the reported association between sexual behaviors and trauma related sexual experiences.As far as we know, there is lack of information is available on the connection between CCU and attachment, arousal for certain porn styles, or coercive sexual relationship factors such as acceptance of the rape myth.
The present study objective.
In this study, we aimed to (a) assess CCU in a broad sample of people who had used cyberporn and (b) determine, among a diverse range of predictor variables, which are most important in CCU scores.
Two research questions (RQs) were addressed.RQ1: What is the distribution of CCU scores among people with cyberporn use?RQ2: What are the most important sociodemographic, sexual, psychological, and psychosocial variables that determine CCU scores?
As we used an exploratory study design, no hypotheses were made that were associated with the RQs.
Participants and procedure
The study included 1584 online questionnaire respondents; of these, 26 % were American and 45.6 % British.Appendix 1 shows the details of the participants' nationalities.An anonymous SphinxOnline survey was used.We searched for adults over 18 years who had watched pornography at least once in the past 6 months.They were recruited through Prolific (https://www.prolific.ac),an academic crowdsourcing service that provides high-quality data (Palan & Schitter, 2018;Peer et al., 2022).Meeting our selection criteria was possible since Prolific offered to pick pornography consumption over the past six months from its categories of hobbies.The study was advertised for using this ad "This study concerns the porn use by adult people.Its aim is to better understand the links with sexual attitudes and representations, sexual motivations, sexual desire, previous experiences (problematic or not), relations with his/her partner(s), etc.There are no right or wrong answers; Only your own answer counts.The procedures were approved by a university ethics committee".The [anonymized] Research Ethics Committee (2020-04-05) approved the study, and all participants gave informed consent online.Recruitment occurred in May 2021.
Materials
Study variables included five predictor categories and one outcome variable.We evaluated 56 predictors related to each measure's dimensions and subscales.
Outcome variable
CCU was measured with the eight-item short form of the Compulsive Internet Use Scale (CIUS) (Gmel et al., 2019;Meerkerk et al., 2009).CCU scores were measured on a 5-point scale, with a higher score indicating greater CCU.Gmel et al. (2019) noted that this unidimensional short form had good internal consistency.In 2019, Varfi et al. adapted the CIUS for cybersex; in our study, the CIUS was adapted to cyberporn (Ben Brahim et al., 2023), with "Internet" indicating pornographic sites.
Cyberporn use patterns.
The cyberporn use patterns included were as follows: weekly cyberporn use duration (CUD) (range: 0 to 40 h), frequency of cyberporn use (FCU) over the past year (11-point scale from "Never" to "More than 7 times a week"), whether participants were paying specific items (6-point scale from "Never" to "Every day") or a subscription (yes or no) for their cyberporn use, negative moral perception of pornography (7-point scale from "Strongly disagree" to "Strongly agree") with the following specific item from Grubbs et al. (2019) "I believe that pornography use is morally wrong.",whether participants' romantic and sexual lives improved since starting cyberporn use (5-point scale from "Not at all" to "Definitely yes" for each separate item), the degree of arousal for 10 pornographic styles (domination, humiliation, submission, romantic love, soft porn, groups with many males, groups with many females, young people, older people, stories, and dialogues) (4-point scale from "Very arousing" to "Not arousing at all"), and cyberporn use variations since the COVID-19 period started (7-point scale from "significantly increased" to "significantly decreased").
We measured pornographic craving experience with the Strength of Pornography Craving Experience (PCE-S) scale (Ben Brahim et al., 2023).This measure adapts the "strength" of the Craving Experience Questionnaire to porn consumption (May et al., 2014).It uses the intrusion theory and comprises three dimensions (imagery, intensity, and intrusion), 10 items, and an 11-point scale from "Not at all" to "Extremely."Higher scores suggest stronger porn cravings.
The Pornography Use Motivations Scale (PUMS) was used to evaluate motives for porn use (Bőthe et al., 2021).This 24-item (7-point scale) measure contains eight dimensions (sexual pleasure, sexual curiosity, fantasy, boredom avoidance, lack of sexual satisfaction, emotional distraction or suppression, stress reduction, self-exploration).Each participant receives eight scores, one for each dimension of the scale.Higher scores indicate greater endorsement of the relevant motive.
Sexual dimensions.
We also investigated sexual motives by using the Sexual Function Scale (SFS) (Nelson, 1978).This sevendimensional instrument asks respondents why they perform sexually and how essential each reason is.In line with authors of previous studies (Abbey et al., 2006;Browning et al., 2000;Fortier, 2018), we deployed the items assessing dominance (eight items) and submission (eight items) on a 4-point scale.Each participant received two dimension-F.Ben Brahim et al. based scores.Higher scores suggest endorsement of the sexual motive.
The Sexual Desire Inventory (SDI) (Mark et al., 2018;Spector et al., 1996) measures solitary and dyadic sexual desire with 14 items (a 7point scale and an 8-point scale).Each subject received dyadic and solitary sexual desire scores.Both dimensions of sexual desire increase with higher scores.
The number of sexual partners and frequency of intercourse for participants in the last 30 days was also examined.Their past-year sexual satisfaction was rated on a 9-point scale.Higher scores indicate sexual satisfaction.We also assessed participants' sexual self-esteem on a 4-point scale.
2.2.2.4.Psychosocial and psychological dimensions.The Experiences in Close Relationships -Short Form (ECR-S), a brief variant of the Experiences in Close Relationships -Revised questionnaire (Fraley et al., 2000), examined attachment type with 12 items and a 7-point scale for anxious and avoidant attachment.Each subject received two scores: anxious attachment style and avoidant attachment style.Higher scores indicated a more anxious or avoidant attachment style.
The Short UPPS-P Impulsive Behavior Scale was used to assess impulsivity (Billieux et al., 2012;Lynam, 2013).Only eight of this measure's 20 items were used to assess positive and negative urgency (4point response scale), the two characteristics most often connected with addictive disorders.Each participant received two scores.It must be noted that the scale items were reverse-coded prior to the calculation of the two scores.High scores indicated more impulsivity.
In addition, intimate relationship satisfaction level was measured over the past year (9-point scale), higher scores reflecting greater satisfaction.The Short Happiness and Depression Scale (SDHS) (Joseph et al., 2004) measured participants' mood by using six 4-point items.One item with a 5-point response scale indicated loneliness (Rönkä et al., 2014), with greater loneliness indicated by higher scores.Selfesteem was measured on a 5-point Single-Item Self-Esteem Scale (SISE) (Robins et al., 2001).High scores reflect higher self-esteem.
Participants were also asked about childhood emotional or physical abuse."When I was growing up, I believe that I was emotionally abused" was one of two questions for each form of abuse.Each abuse score was calculated from a 5-point scale ranging from "Never true" to "Very often true".
Violent and coercive sexuality (attitudes and experiences).
The short form of the Acceptance of Modern Myths about Sexual Aggression (AMMSA) (Helmke et al., 2014) measured rape myths and sexual aggression acceptance and is drawn from the 30-item tool of Gerger et al. (2007).This 11-item (7-point scale) measures participants' tolerance for rape myths and female sexual violence (e.g., "When a woman starts a relationship with a man, she must be aware that the man will assert his right to have sex"; "Many women tend to exaggerate the problem of male violence").Overall scores were calculated for each participant.Higher ratings indicate greater myth acceptance.
The Sexual Experience Survey (SES) reported sexual perpetration and victimization in children older than 14 years (Koss et al., 2007;Testa et al., 2004).The 11-item victimization form determines whether a person was victimized (e.g. by touching, kissing, or rape).The 11-item perpetration form determines whether a person committed the same unwanted sexual actions.For this study, we assessed each participant's total perpetration and victimization scores.Coercive relationships and rape were also explored for perpetrators and victims.
Data analysis
First, for RQ1, we performed descriptive statistics (range, M, SD, frequency) on all research variables.Second, to answer RQ2, we performed a bivariate correlation analysis between the predictor and outcome variables.We also performed an analysis of variance (ANOVA) to test the effect of the nominal (no-ordered) predictor variables on the outcome variable.For both correlation and the ANOVA analyses, the significance level was set at < 0.05.In addition, since the data included in the current study had relatively high number of variables ( 56) and that we wanted to rank-order their predictive importance, we chose to conduct machine learning (ML) multivariate regression analysis (using the Extreme Gradient Boosting algorithm [XGBoost, R package]), instead of traditional linear regression, to solve RQ2.ML models are essentially predictive (for detail on how ML works see (D'Agostino, 2022;Sarker, 2021).They are constructed in two phases: the learning stage where the model analyzes and "learn" from the variables associations/relations; and the second stage where the model uses the "learned knowledge" to predict (D'Agostino, 2022;Sarker, 2021).The rationale for using ML algorithms rather than standard statistical methods relies on the fact that ML algorithms have hyperparameters allowing us to build and test different models in terms of prediction capabilities and to choose the best prediction models according to specific metrics.Furthermore, in contrast with standard linear regression models, most ML algorithm (including the one we used) are nonparametric-they do not impose a particular structure on the data.As such, they can capture nonlinear relationships, including interactions among the predictors themselves.Finally, compared with traditional regression, the machine learning algorithm we used is considered robust for high-dimensional data scenarios (the current study includes a relatively important number of predictors [56]), due to its ensemble nature (separately bootstrapping thousands of decision trees, then averaging their results) (D'Agostino, 2022;Sarker, 2021).The use of ML to predict health behavior or health outcomes have been growing in recent years (Weissler et al., 2021).For instance, in past three years, researchers used ML to predict fibromyalgia diagnose (Vera Cruz et al., 2021), to predict the use of smartphone health applications (Aboujaoude et al., 2022;Vera Cruz et al., 2023), to predict smoking cessation/reduction (Etter et al., 2023;Vera Cruz et al., 2023), to predict subjective well-being (Vera Cruz et al., 2023), and to predict online dating apps problematic use (Vera Cruz et al., 2023).Regarding the specific ML algorithm used to conduct the current study data analysis (XGBoost, Chen & Guestrin, 2016), it is based on decision trees.Decision trees (Loh, 2014) are statistical algorithms that create predictions based on particular conditions (see Loh, 2015, for an extensive and easy-to-understand PowerPoint explanation).Thus, the XGBoost algorithm processes data by aggregating predictions from numerous decision trees by using majority voting.After building the initial model from a set of decision trees and calculating the residuals (errors) for each observation in the dataset, XGBoost generates a new model to anticipate those errors, learns from them, and builds a better model.XGBoost iteratively adds weight to instances with incorrect predictions, learns from prior mistakes, develops new models, and combines them into an ensemble model with improved prediction skills.XGBoost is an ensemble learning regression and classification tool.Many configurable hyperparameters in XGBoost can improve model fitting.It is robust and can handle multiple data types and complex distributions.Most data scientists use XGBoost, which has won multiple data analysis competitions (Chen & Guestrin, 2016;Morde & Setty, 2019).XGBoost models output the relevance of each predictor variable by using Gain.Gain is the relative contribution of each feature (in this case, each predictor variable) to the model, calculated by taking its contribution for each tree.Gain values run from 0 to 1, which can be thought of as a percentage.A greater value for this metric when compared with another feature indicates that it is more significant for creating a prediction.The present analysis included only 49 predictor variables.Indeed, after multicollinearity testing (using the Random Forest algorithm), 7 of 56 predictors were excluded.The list of excluded variables is presented in the Appendix, Table B. ML requires splitting the dataset into at least two sets: one to train the model (typically 70-80 % of the sample) and the other to assess the model's prediction performance (20-30 %).In the current study, we divided the dataset as follows: train-set = 70 %; test-set = 30 %.The XGBoost F. Ben Brahim et al. parameters that we grid-tuned are: nrounds = c(500,1000,1500); max_depth = c(2,4,6); eta = c(0.025,0.05,0.1,0.3);gamma = c(0, 0.05, 0.1, 0.5, 0.7, 0.9, 1.0); colsample_bytree = c(0.4,0.6, 0.8, 1.0); min_-child_weight = c(1,2,3); subsample = 1.The result of this analysis is shown in Table 3.
Descriptive statistics
Table 1 shows descriptive statistics for all study variables.The participants' age distribution (SD = 10.84) is off center from the mean (M = 33.18),showing a diverse age range of 18-75 years.Male participants (63.1 %) outnumbered female participants (35.2 %), and nonbinary participants represented less than 2 % of the sample.Most participants were heterosexual (77.6 %) and in a relationship, whether married or not (67.4%).
For the 25 predictor variables with "a" exponent (see Table 1), the mean scores were above the middle of the scale on their respective measures.
The overall descriptive and inference results by sex are presented in Table 2.
Correlation and ANOVA statistics
Table 3 shows the bivariate correlation statistics between all predictor factors and outcome.To interpret the correlations coefficients (r) values, a threshold widely used in behavioral sciences is the one proposed by Cohen (1988): r < 0.1, very small; 0.1 <= r < 0.3, small; 0.3 <= r < 0.5, moderate; r >= 0.5, large.Thus, based on these indices, Table 2 shows that most predictor variables are not strongly associated with the outcome variable.Eleven predictor variables had a moderate association with the outcome.These variables were the strength of pornography craving experiences (r = 0.50), suppression of negative emotions porn use motive (r = 0.49), stress reduction porn use motive (r = 0.42), frequency of cyberporn use (FCU) over the past year (r = 0.42), boredom avoidance porn use motive (r = 0.41), fantasy porn use motive (r = 0.39), lack of sexual satisfaction porn use motive (r = 0.37), selfexploration porn use motive (r = 0.34), dominance sexual motive (r = 0.33), sexual pleasure porn use motive (r = 0.31), and acceptance of rape myths and sexual aggression (r = 0.30).Nine predictor variables had small (r = 0.20 -0.30) association with the outcome (see Table 2).All of these indicators were positively associated with the outcome; thus, as their values rose, so did the CCU scores.
Machine learning multivariate regression results
Table 4 shows the ranking of the most relevant predictor variables of participants' CCU scores with a machine learning multivariate regression model.The train-set model performed as follows: percentage of the outcome explained by the predictors (R2) = 81.5 %; mean squared error (MSE) = 0.33.The test-set model performance was as follows: R2 = 74.6 %; MSE = 0.69.
The most important predictor was the strength of pornography craving experiences.The less important predictor was the arousal degree of "romantic love" pornographic scenes.Among the 49 predictors included, the 20 most important in decreasing order were as follows: strength of pornography craving experiences, suppression of negative emotions porn use motive, FCU over the past year, acceptance of rape myths and sexual aggression, anxious attachment style, boredom avoidance porn use motive, age, sexual pleasure porn use motive, submission sexual motive, evolution of cyberporn use since the COVID-19 pandemic started, dyadic sexual desire, self-exploration porn use motive, avoidant attachment style, depressive mood, solitary sexual desire, curiosity porn use motive, fantasy porn use motive, sexual victimization experiences, dominance sexual motive, and positive urgence impulsivity.Age was negatively correlated with the outcome, whereas all other factors were positively correlated.
Discussion
In this study, we assessed CCU in a wide cohort of people who use cyberporn and determined the most essential CCU score predictors from a wide variety of characteristics.
Descriptive results
For remainder, all participants declared to be cyberporn users.In this regard, it is noteworthy that the percentage of women on the study sample is 35.2 %, which is close to the 36 % of women who visited Pornhub last year (Pornhub, 2022).The number of male participants (63.1 %) was twice that of female participants, confirming that male cyberporn users are overrepresented (Camilleri et al., 2021;Kumar et al., 2021;LeBlanc & Trottier, 2022;Studer et al., 2019).
On a 5-point scale, 21.96 % of subjects reported CCU scores ≥ 3.13 (fourth quartile), suggesting a tendency toward compulsive use.This figure exceeds most recent research findings (Ballester-Arnal et al., 2017;Camilleri et al., 2021;Kumar et al., 2021;LeBlanc & Trottier, 2022;Mennig et al., 2020).The variances may be partially due to the range of approaches.In some studies (e.g.Ballester-Arnal et al., 2017), the authors examined all Internet-based sexual activities, materials, and behaviors without focusing on cyberporn.Camilleri et al. (2021) used the same metric as we did to examine cyberporn use among students at an American university.In our study, we recruited a sample of people who recently used porn and provided statistics for a more general and diversified group from various cultures and countries.
The most important predictors of CCU scores
The most important predictors of the participants' CCU scores can be synthetically regrouped into the following six categories.
Craving and frequency of cyberporn use.This category is predicted by the strength of pornographic cravings and the past-year Frequency of Cyberporn Use (FCU).CCU scores are higher in participants with stronger pornographic cravings and more frequent use.This is not surprising, as these characteristics are linked to compulsive porn viewing (Bőthe et al., 2019;Weinstein et al., 2015).To our knowledge, our study is the first to use the elaborated intrusion theory to measure pornography craving, revealing a more specific relationship between CCU and Notes.N = number of participants, r = correlation coefficient; CI = confidence interval.Significance level = <0.05.
To interpret r values, it must consider that: r < 0.1, very small; 0.1 <= r < 0.3, small; 0.3 <= r < 0.5, moderate; r >= 0.5, large.*Non-ordered categorical independent variable.For these variables we conducted ANOVA (analysis of variance).Results of the ANOVA a presented in Result section.craving.This is in coherence with a recent revision of the Interaction of Person-Affect-Cognition-Execution (I-PACE) model, incorporating desire thinking theory and craving experience as cognitive processes contributing to CCU (Brandtner et al., 2021).Indeed, this updated model aims to explain internet-use disorders such as porn use disorder.Pornography cravings and the FCU may indicate a loss of control and increasing priority as basic components of compulsive behaviors.Prospective research may be necessary to examine craving and CCU scores.
Negative emotions, feelings, and experiences.This category has five predictors: suppression of negative emotions and boredom avoidance porn use motives, anxious attachment style, avoidant attachment style, depressive mood, and sexual victimization experiences.CCU scores were linked to these negative emotions, feelings, and experiences in this study, suggesting that the use of cyberporn during vulnerable times is linked to compulsive use.In addition to the FCU, the purpose for this use appears to be important to compulsive consumption, especially when this motive reflects negative feelings.Participants who consume pornography as a coping strategy seem to be more likely to use it compulsively.Previous research has presented cybersex as a coping mechanism (Ben Brahim et al., 2019).These findings are consistent with studies that have linked coping motives with addictive behaviors (Melodia et al., 2022;Rochat et al., 2024;Zanetta Dauriat et al., 2011).After some personal experiences, subjective reward expectations may vary across individuals and contexts, from gratification for porn use to rather negative reinforcement processes as suggested by the coping and escape motives life (Brand et al., 2019;Laier et al., 2018) in coherence with the I-PACE model; Brand et al. (2019).In addition, sexual "addiction" is associated with greater rates of mental health issues (Cleveland Clinic, 2022).Camilleri et al. (2021) and Levin et al. (2012) also linked problematic pornography consumption with mental health issues such as depression, anxiety, and stress.According to Varfi et al. (2019), addictive cybersex is a "function" of depression and avoidant attachment style.
In the present study, victims of violent sexual experiences seem to have a greater tendency toward CCU.Barrault et al. (2016) linked problematic cybersexual attitudes to traumatic events such as physical and sexual abuse before the age of 17. Negative physical and sexual experiences may raise the risk of negative feelings, which may increase the use of cyberporn as a coping method.
Age.This category is represented by one predictor: age.Results suggest that younger cyberporn users present higher CCU scores than older users do.This may be because younger people have more sexual desire and craving, perhaps partly because younger adults secrete more testosterone, the hormone that drives sexual desire (Van Anders, 2012).
Violent, submissive, and dominant sexual attitudes.Three predictors represent this category: acceptance of rape myths and sexual aggression, submission, and dominance sexual motives.Higher CCU scores were observed among respondents who sexual aggression against females and rape myths.Perhaps pornography viewers are drawn to content that depicts violence against female partners and reinforces male-female dominance-submission stereotypes.This may reinforce sexual aggression myths, likewise creating a compulsive need for pornographic scenes.The sexual scripts (Vera Cruz, 2020; Vera Cruz & Sheridan, 2022) that shape users' sexual behavior may be crucial.Submission and dominance sexual motives also seem to predict CCU scores in the present study.Some people may use cyberporn to fulfill their sexual dominance fantasies, and people who desire to be sexually submissive could use cyberporn to meet their needs.Sexually submissive or dominant participants are more likely to have high CCU scores.Future research may reveal that this link involves craving or tolerating "hard" porn scenes that depict acts ranging from submission and dominance to violence.Future research should look into potential moderators' impacts related to sexual behaviors, trauma history, and comparable factors.
Sexual pleasure and exploration.This category has four predictors: sexual pleasure porn use motives, self-exploration, curiosity, and fantasy.Participants who score higher on these motives may consume more cyberporn.The reward system (Lembke, 2021) and the gratification role in addictive behaviors (Brand et al., 2019) may explain this relationship, in which meeting the initial "need" (pleasure, curiosity, fantasy, self-exploration) leads to "more need" and so on.
COVID-19 effect.Only one predictor represents this category: evolution of cyberporn consumption since the COVID-19 pandemic started.We found that this tenth-ranked predictor predicted CCU scores for study participants, with a rise in cyberporn consumption since the COVID crisis started meaning higher current CCU scores for participants.The COVID-19 period, especially during confinement, has been linked to boredom, stress, and anxiety (Xie et al., 2022).This may increase the need for pleasure in order to cope with psychological issues, and thus more craving for pornography consumption.
The effect of sociodemographic variables
In this study, single participants had considerably higher CCU scores than did those in relationships.Single participants may feel solitary because they lack sexual partners, which may increase their cyberporn use.An earlier study (Kumar et al., 2021) reported that undergraduate medical students, in any form of relationship, presented higher problematic porn use.These results may be specific to the population and these variations may be due to sample characteristics.
The least important predictors of CCU scores
Some variables, such as impulsivity, predicted CCU scores less reliably.The literature disagrees on the role of this variable.According to Billieux et al. (2012), impulsivity helps induce and maintain addiction.Higher attentional impulsivity may lead to uncontrolled use of cyberporn (Antons et al., 2019).Impulsivity may not be as relevant, however, in problematic pornography consumption (Bőthe et al., 2019).Our study revealed how positive and negative urgency specifically affect cyberporn consumption.General versus domain-specific impulsivity may impact differently hypersexuality (Reid et al., 2015).Recent results failed however to indicate the impact of domain-specific impulsivity (e.g.sexual impulsivity) in hypersexuality (Carvalho et al., 2021).Further studies are still needed to asses such potential differences especially for compulsive cyberporn use, a domain influenced by specific stimuli-related reactivity (Love et al., 2015).
Sexual pleasure and sexual encounter numbers predicted CCU scores to a lesser extent.This would reduce the link between sexual satisfaction, encounter numbers, and CCU risk.Sexual activity and satisfaction seem to have little impact on the development of CCU.
A moderate association was found between pornography negative moral perception (moral incongruence) and CCU scores in this study.It is one of the least important CCU score predictors, ranking 24th out of 49.Lewczuk et al. (2020) noticed that moral incongruence is associated to compulsive porn consumption across religiosity.The present research did not assess religion and comprised a diverse sample from different countries and cultures.These considerations may help explain our nuanced results compared to earlier investigations (e.g.Grubbs et al., 2019;Lewczuk et al., 2020).
Limitations
One limitation of the study is that the recruitment approach does not reveal how much of the sample represents people with cyberporn use activities.Thus, generalizing the findings requires caution.Even though Prolific seems to provide Internet-representative samples (Antons et al., 2019), recruiting the sample on the basis of prior pornographic use may F. Ben Brahim et al. potentially limit interpretation and generalization.Although this recruitment criterion may aid in understanding recent pornographic use, it may not apply to everyone.
In addition, this cross-sectional exploratory investigation identified CCU score predictors from numerous factors.Longitudinal and hypothesis-testing research is needed to understand how psychological and psychosocial variables interfere with CCU.
Finally, the machine learning (ML) analysis could not be separately conducted for each sex.This can be considered as limitation.However, it is important to note that such analyses were note carried out because of the limitation regarding the number of participants.Indeed, to perform well, the machine learning algorithm we used need a consequent amount of data (D'Agostino, 2022;Sarker, 2021).If we had run two models separately (one for males = 1000 participants; one for females = 557 participants) each model would comprise a relatively "small" sample (as for machine learning standards); the male ML model would comprise almost twice the number of participants of the female model.Thus, comparing results from such imbalanced models would be problematic, given the machine learning "technicalities" (D'Agostino, 2022; Sarker, 2021).
Conclusions
Most previous research (Camilleri et al., 2021;Kumar et al., 2021;LeBlanc & Trottier, 2022;Studer et al., 2019) reported a lower percentage of participants with CCU scores than was found in this study.The five strongest predictors of CCU scores are strength of pornography experiences, suppression of negative emotions porn use motive, cyberporn use frequency over the last year, acceptance of rape myths, and anxious attachment style.This study adds to the CCU literature and may help clinicians treat and prevent CCU.Comparable survey design that targets different types of compulsive sexual behavior may help to enhance knowledge for other addictive behaviors. Declarations.
Conflicts of Interest: The authors do not have any conflicts of interest to report.
Not applicable.Ethics approval and consent to participate: Participants gave digital informed consent for their survey contribution.Participation was voluntary and restricted to those aged ≥ 18 years.All data were anonymously collected.The survey followed the [anonymized] Research Ethics Committee (2020-04-05).
Funding.Availability of data and materials: The material used in this study and data supporting these findings can be obtained from the corresponding author upon request.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
(continued on next page) F.Ben Brahim et al.
Table 2
The study continuo and ordinal variables: Descriptive and inference statistics by sex.
Table 3
Bivariate correlations between the 49 independent variables and the participants' CCU scores.
Table 4
Predictors of CCU, ranked in decreasing order of importance (XGBoost machine learning regression model).Pornography Use Motivations Scale; SISE = Single-Item Self-Esteem Scale; AMMSA = Acceptance of Modern Myths about Sexual Aggression; SES-P = Sexual Experience Survey -Perpetration; SES-V = Sexual Experience Survey -Victimization; UPPS-P = Urgency, Premeditation, Perseverance, Sensation Seeking, Positive Urgency Impulsive Behavior Scale; ECR-S = Experience in Close Relationships -Short form; SDHS = Short Depression-Happiness Scale.
F.Ben Brahim et al. | 6,738.2 | 2024-03-20T00:00:00.000 | [
"Psychology",
"Computer Science"
] |
Structural basis for antibody recognition of vulnerable epitopes on Nipah virus F protein
Nipah virus (NiV) is a pathogenic paramyxovirus that causes fatal encephalitis in humans. Two envelope glycoproteins, the attachment protein (G/RBP) and fusion protein (F), facilitate entry into host cells. Due to its vital role, NiV F presents an attractive target for developing vaccines and therapeutics. Several neutralization-sensitive epitopes on the NiV F apex have been described, however the antigenicity of most of the F protein’s surface remains uncharacterized. Here, we immunize mice with prefusion-stabilized NiV F and isolate ten monoclonal antibodies that neutralize pseudotyped virus. Cryo-electron microscopy reveals eight neutralization-sensitive epitopes on NiV F, four of which have not previously been described. Novel sites span the lateral and basal faces of NiV F, expanding the known library of vulnerable epitopes. Seven of ten antibodies bind the Hendra virus (HeV) F protein. Multiple sequence alignment suggests that some of these newly identified neutralizing antibodies may also bind F proteins across the Henipavirus genus. This work identifies new epitopes as targets for therapeutics, provides a molecular basis for NiV neutralization, and lays a foundation for development of new cross-reactive antibodies targeting Henipavirus F proteins.
nature portfolio | reporting summary
March 2021
Data analysis
For manuscripts utilizing custom algorithms or software that are central to the research but not yet described in published literature, software must be made available to editors and reviewers. We strongly encourage code deposition in a community repository (e.g. GitHub). See the Nature Portfolio guidelines for submitting code & software for further information.
Data
Policy information about availability of data All manuscripts must include a data availability statement. This statement should provide the following information, where applicable: -Accession codes, unique identifiers, or web links for publicly available datasets -A description of any restrictions on data availability -For clinical datasets or third party data, please ensure that the statement adheres to our policy
Human research participants
Policy information about studies involving human research participants and Sex and Gender in Research.
Reporting on sex and gender Population characteristics
Recruitment
Ethics oversight Note that full information on the approval of the study protocol must also be provided in the manuscript.
Field-specific reporting
Please select the one below that is the best fit for your research. If you are not sure, read the appropriate sections before making your selection.
Life sciences Behavioural & social sciences Ecological, evolutionary & environmental sciences
For a reference copy of the document with all sections, see nature.com/documents/nr-reporting-summary-flat.pdf
Life sciences study design
All studies must disclose on these points even when the disclosure is negative.
Sample size
Data exclusions
Replication Randomization Blinding Negative-stain EM: Manual correction using EMANS, reference-free 2D classifications were performed with Relion 1.4, FACS: FlowJo software version 9.9.4 (Tree Star, Inc), Neutralization assay: IC80 is calculated by curve fitting and nonlinear regression (Log(agonist) vs normal response (variable slope) EC) using GraphPad Prism v8,Biolayer interferometry: Octet Data Analysis v12.0.2.3, Percent competition (PC) of analyte mAbs binding to competitor-bound NiV prefusion F was determined using the equation : PC = 100 -[(analyte mAb binding in the presence of competitor mAb) / (analyte mAb binding in the absence of competitor mAb)] x 100, CryoEM: Movies collected using SerialEM, Motion correction and CTF-estimation performed in WARP or cryoSPARC Live, Micrographs imported into cryoSPARC for particle picking, 2D classification, ab initio 3D reconstruction and 3D refinement, Homology models for Fabs were generated using ABodyBuilder, Initial models were docked into the cryo-EM maps using Chimera, Complementarity-determining loops were built manually in Coot, Models were iteratively refined using Coot, Phenix and ISOLDE, SPR: Data were double reference-subtracted and fit to a 1:1 binding model using Biacore Evaluation Software Structural models are deposited in the protein data bank (PDB, https://www.rcsb.org/) and are scheduled to be released upon publication of this paper. NA NA NA NA 10 female CB6F1/J mice were immunized, spleens from 4 randomly selected mice were pooled for B-cell sorting/antibody isolation.
No data was excluded.
Initial screening/characterization of identified antibodies included binding analysis, neutralization assays and competition binding studies. All studies or relevant portions of studies (i.e. competition binding studies with identified neutralizing antibodies competing with other neutralizing antibodies) were repeated with comparable results at least twice.
Animals were randomly allocated to immunization groups at start of study.
No blinding to immunization was used. This is a non-clinical study with data collection and analyses relying on objective measures.
Reporting for specific materials, systems and methods
We require information from authors about some types of materials, experimental systems and methods used in many studies. Here, indicate whether each material, system or method listed is relevant to your study. If you are not sure if a list item applies to your research, read the appropriate section before selecting a response. VSV G antibody was validated in infection assays of VSV"G-G-luc stock and NiV F/G VSV"G-luc stock preparations, 5B3 antibody was validated by binding assays (to pre-F and post-F designs), pseudovirus neutralization assays and negative stain EM bound to Nipah F protein, antibody panel used for B-cell sorting were titrated using FACS as described in the methods and shown in Supplementary Figure 1 with final dilution for sort selected highlighted in Supplementary Figure 1B.
Materials
Vero E6 cells were purchased from ATCC (VERO C1008, clone E6, catalog number -CRL-1586) Cells lines were not authenticated. They were purchased directly from vendor and maintained and frozen according to manufacturer's instructions.
Cell lines were not tested for mycoplasma contamination.
Cell lines were not tested for mycoplasma contamination.
CB6F1/J mice from Jackson Laboratory, female, all mice were 6-8 weeks old at start of vaccination. Mice were maintained at 72°F +/-5°F, relative humidity of 30-70% (typically 33-40%) on a 12h light/dark cycle with food and water ad libitum.
No wild animals were used in this study.
Only female mice were used in this study. Male mice are more aggressive than female mice. Subsequent studies in ferrets showed no variability in immunogenicity between the sexes. | 1,381.8 | 2022-06-14T00:00:00.000 | [
"Medicine",
"Biology"
] |
A critical review of whole theory: Stationenlernen learning technique and German language learning outcomes
This critical review aims to describe the integrity of paraphrasing the theory or references and scientific journals that become references. The contribution of this research is as a reference and input to develop the creativity of critical thinking of the quoters. The method used is listening and analyzing the journal's contents as a source of data; critically identifying the parts of the journal containing quotations, direct quotations, and paraphrasing of the theories being referenced. The results of the critical review show several weaknesses, including several quotes that are not accompanied by relevant data and references, errors in citing reference sources. In addition, quotations or paraphrases have paid attention to the integrity of meaning with several techniques, namely complete and partial paraphrasing techniques. Partial is divided into two; according to the research formulation, the theory is paraphrased and several points of the combined theory are paraphrased because they are closely related and complementary. This critical review implies that it is a follow-up study to analyze the advantages and disadvantages of a scientific article based on relevant theories, studies, and previous research results.
INTRODUCTION
Higher-order thinking skills have special characters or characteristics, namely the ability to think critically and think creatively. Both of these abilities are needed by someone in writing an article. Brookhart discovered that higher order thinking skills (HOTS) have three meanings: transfer, critical thinking skills, and problem-solving techniques [1]. Critical thinking has a close relationship with the brain in solving problems [2]. In this case related to article writing because critical thinking and cognitive growth appear to be essential themes in higher education [3]. Critical thinking is characterized by a person's ability to analyze and consider things carefully, based on logic, before making decisions or conclusions. Regarding critical thinking, the importance of using and utilizing resources effectively is also very influential in scientific work. This can be done by critically distinguishing between our ideas and ideas from other sources. The issue that frequently emerges is that the papers that have been gathered have not followed the signs that should have been followed and tend to be clones of theories, sometimes with ambiguous references. There was one of the issues discovered that technology that makes it simple is frequently abused, leading writers to seek quick solutions, such as copying and moving compositions into articles while breaking the standards of proper and correct citation.
In writing scientific papers, every writer has a moral obligation to comply with all the scientific procedures undertaken, including avoiding plagiarism. Despite the anti-plagiarism software now in use, plagiarism can occur in any scientific journal [4]. In other words, every researcher or writer is prohibited from plagiarizing or taking the ideas, findings, or conclusions of other authors without citing the source because such behavior is categorized as a severe violation of the scientific world [5]. This idea is in line with Sukaesih's opinion that plagiarism is not only an act of harm to writers whose works are plagiarized, but also describes a lack of creativity that worsens the mentality of the perpetrators [6]. Plagiarism is a scientific disaster [7]. Plagiarism should be punished severely, and safeguards for whistleblowers should be put in place and enforced. As a result, bad and lazy authors who circumvent the system would be penalized, while good authors would be serviced [8]. When using words, ideas, or any information from other sources than knowledge and experience, must appreciate the owner of the quote by informing the source of the quote. If you don't give academic credit to the referenced section, it is called an act of plagiarism [9]. To avoid plagiarism, individual conduct relating to the author's duty and ethics in citing someone else's work is essential [10]. In addition, the author also avoids direct quotations without being processed as much as possible because these actions can also describe the author's low creativity or understanding of the theory referred to.
Therefore, every theory that becomes a reference needs to be paraphrased without changing the meaning of the idea or ideas quoted by the author's source. Paraphrasing should be done in its entirety, not only from the source side but also from the substance side. This opinion means that all elements in the theory of reference must be considered. Thus, two things need to be considered in paraphrasing: the integrity of the source and the integrity of the content. The integrity of the reference source describes the author's appreciation of the original author's ideas or thoughts. At the same time, the integrity of the content refers to the quoter's skill to appreciate the original author's ideas or ideas comprehensively while at the same time describing the critical thinking creativity of the quoter.
This critical review focuses on the two things namely, the integrity of the source and the integrity of the content or theories quoted by the author. With reference to the question of whether the theory cited refers to a clear source, as well as whether paraphrasing the theory describes or represents the theory being paraphrased. Or in other words, whether the paraphrase retains the material or aspects of the original theory as a whole.
RESEARCH METHOD
A qualitative research approach with a literature review was performed. A literature review refers to a written summary of journal articles, books, and other publications that provide historical and contemporary data relevant to the subject of a research project [11]. The literature review is used to criticize a scientific article, which analyzes the advantages and disadvantages of a scientific article based on relevant theories, studies, and previous research results. This research's critical review focuses on the integrity of the source and the integrity of the content or theories cited by the author.
The technique used in the critical review is to listen and analyze the contents of scientific articles as a data source. Critically identify parts of the article containing quotations, direct quotes, and paraphrases from the referenced theories. Then the sections are grouped, tabulated, and described about the original idea. The data source for this research is the scientific journal article by Litualy and Serpara [12]. In general, the purpose of this article is to determine if the use of the Stationenlernen learning strategy improves German learning results. Meanwhile, the study's precise objective was described, which was to assist students at State High School 1 (SMA Negeri 1) Saparua in Central Maluku Province, Indonesia in improving their low German learning outcomes.
RESULTS AND DISCUSSION
This critical review aims to describe the integrity of paraphrasing the theory or references and scientific journals that become references. Based on the methods and techniques that have been applied to critical review the scientific article "Stationenlernen Learning Technique and German Language Learning Outcomes" it can be found that the article has used easy-to-understand language and coherent writing systematics. The data is presented in a table. That makes it easier for the reader to understand the content of the study. For each statistical test, the formula used is explained. Tables in the form of a summary of statistical test results are interpreted in detail. The interpretation of statistical test results is discussed and linked to supporting theories and relevant research results. It was also found that the quotations or paraphrases in this article have taken into account the integrity of the meaning or theory with two techniques, namely complete and partial paraphrasing techniques. Behind the advantages, of course, there are weaknesses
Discussion of scientific articles
Based on a review of the subject matter of the scientific article, explained that the introductory section of the article presented the reasons for conducting the research. Because based on the existing theory, although understanding German is crucial, the outcomes of learning German in high school/vocational high school (SMA/SMK), particularly in Saparua (Central Maluku, Indonesia), have not been noteworthy. This condition is caused by a lack of acceptable learning material, which is complemented by the employment of appropriate German language learning procedures based on preliminary research team observations of student learning results.
The theoretical study also includes the most recent and relevant data on increasing student learning outcomes using Bloom's cognitive domain. The major source is emphasized, namely the notion of learning outcomes, which is a transformation in the individual. The planned transformation encompasses not just information but also abilities, attitudes, and comprehension. However, the article is also based on various theories. Furthermore, the report offered statistics on student learning outcomes based on the study team's early findings. Research reviews through preliminary observations help to determine the urgency of the article, namely the importance of learning German, therefore students in high school need to also improve learning outcomes in class through appropriate learning techniques that teachers can apply.
This article employs quasi-experimental research, in which the experiment is conducted on a single set of students with no comparison group. The article's summary indicates that the researcher used or experimented with the Stationenlernen learning strategy in the German language learning process in the hopes of helping students enhance the quality of their learning results when studying German. In addition, the research identifies the weaknesses, benefits, and areas for improvement in learning German.
Discussion of whole theory
Some of the theoretical citations that have been questioned in relation to the integrity of the theory stated and paraphrased by the author can be described; the first theory cited by the author is in Indonesia, German is taught as a Foreign language in high school as well as vocational school and university, indicating the critical importance of mastering the language [13]. The original theory is during this period, there were three main kinds of languages spoken in Indonesia: i) Regional or vernacular languages; ii) National languages (Indonesia languages); and iii) Foreign languages, such as English, German, and Arabic. [14], [15]. Mastering another language can be what Turner and Allen describe 'self-identitya sense of knowing or belonging'. For Indonesian being able to speak a foreign language, they can be identified as a knowledgeable individual [16]. Criticts for source integrity is the paraphrases in this discussion have quoted the source or reference in full. The quote paraphrased by the author does not fully represent the original two theories. This paraphrase emphasizes more on foreign languages taught in Indonesia but ignores the reasons for the importance of learning foreign languages, as stated by Turner and Allen. In other words, paraphrasing the theory should also mention the benefits of learning a foreign language as proposed by Turner and Alen.
The second theory paraphrased is about five important reasons to learn German [17]. These referenced sources are inaccurate. This fact means that no original theory is found. The author's second paragraph cites five reasons that German is an important language to learn in the world. However, no original theory was found against the referenced reference. Based on the data obtained, the reference is an e-book learning German, which discusses learning the German language and culture. The author tries to make conclusions from the materials and practice questions contained in the e-book to be paraphrased into five reasons why German is mandatory and important to learn. As input for the author, additional references and theories can be used from the ideas of Krumm which explains that in terms of the number of speakers, German is the language with the most significant number of speakers in the European Union region, which has the most substantial economic influence in the European Union. This power impacts eastern European countries, so that German is also studied because it provides economic benefits for every speaker [18].
Mastering German is important, but results of learning German in Saparua sub-district, have not improved significantly. As a result of the lack of adequate resources, German educational techniques are not being used. There is a need to apply appropriate media and learning techniques to improve and increase student learning achievement [19]- [21]. Criticts for Source integrity is no original theory was found; the author referred to that because it took ideas or ideas from the results of previous research. The paraphrased quote is irrelevant to the research problem because the research results were used as a reference theory from Peters
395
It is expected that the Stationenlernen learning technique will develop students' learning outcomes in German by being open, independent and interactive [22]. There are three original theories that are paraphrased by the author, namely: i) In terms of open instruction, stationenlernen can be applied to almost any field of study, including foreign languages [23]; ii) Stationenlernen's student-centered learning activities are creatively designed so that they can work more independently, intensely, efficiently, and at their own pace [23]; iii) The communicative environment created through Stationenlernen helps students to become creators and receivers of the meanings of the text [22]. Criticts for Source integrity is based on the data obtained, there are two different sources of reference with the same author. The two references are quoted from three other original theories. Two theories are derived from a study entitled "Stationenlernen", and one idea from a study entitled "An Interactive Reading". But in the reference section, the author only cites one reference, so that the source is quoted incompletely. First theory is entirely paraphrased by the author because it explains that Stationenlernen is an open learning technique. This fact means that it can be applied to all fields, including in language learning. While second theory is not quoted in its entirety because it only takes one aspect relevant to the author's aims and objectives, namely about student-centered independent learning. The interactive element cited by the author in third theory, it is also adapted to the author's needs related to the Stationenlernen learning technique so that it is not quoted intact. Overall, the combination of the three theories has been quoted and paraphrased well by the author to become a concise and clear idea.
In order for Stationenlernen to be successful, teachers need to be supportive of it, especially students [24]. Reference sources did not find any theory relevant to the ideas that have been paraphrased. The paraphrased theory may be the result of the author's observations. Teachers are expected to be highly creative and aggressive when setting up or managing learning stations with all the necessary materials and when directing the flow of teaching and learning activities so that they go without stumbling blocks [25]. While the original theory is the ability to be creative as a teacher is crucial in the realm of education. By working with someone who is qualified and knowledgeable in their area, performing classroom action research, and reflecting on the strategies used in their classrooms, teachers are expected to assume the role of students and foster their own creativity [26]. The paraphrases in this discussion have quoted the source or reference in full. In this section, the theory is quoted and paraphrased by the author based on the teacher's creativity aspect. In summary, the author relates the research results relevant to the Stationenlernen learning problem and the creative abilities that an educator must possess. The theory is incompletely paraphrased. That is, the author only cites one aspect that is relevant to the research and then paraphrased it.
Students must be able to support teaching and learning activities by giving their time, effort, and thoughtful consideration. They must also be prepared to collaborate with other students in groups [27], [28]. The original theories are: i) To accomplish learning goals, the Jigsaw approach is one of the active learning methods. This method depends on good communication between students in a group, including wellorganized study materials, clear learning objectives, and enough time; ii) Throughout the research project, students gave their complete cooperation and engagement in the group that was formed and committed to collaborative learning. The sources referred to have been quoted in their entirety, and the references are explicit. Based on these two theories, the writer quotes and paraphrases briefly and clearly to support the relationship between the theories in the previous sentence. The idea is quoted incompletely because it focuses on a learning technique that is applied to students. Students can interact in groups and are responsible for the learning that follows. Although the technique used to the original theory does not explain the Stationenlernen learning technique, it can represent that applying an appropriate learning technique can stimulate students to work in teams. In addition, the author also combines the two original theories, processing them so that they become a new idea.
Individuals change as a result of their learning experiences [29]. It found an original theory that institutions can make big changes by working together, but it's possible that the people who are most affected by these changesstudentsmay not be involved in the process. The sources referred to have been quoted in their entirety, and the references are explicit. This paragraph begins with a quote related to the theory of learning outcomes. The author implicitly paraphrases the meaning of the reference theory so that in detail, it can be concluded that personal changes might be determined as learning outcomes. The citation style used by the author is a partial citation, where the author determines one of the dominant and relevant words so that it is paraphrased according to the needs of the author. Related to the above theory, the author continues his quote that changes in abilities, attitudes, understandings, and self-esteem are all part of the planned transformation [30]- [33]. Based on the paraphrased theory, several original theories were found. Students' expertise in independent learning includes identifying and selecting sources, carrying out the learning process, and assessing learning outcomes. According to several evaluations, there are few evaluative methodologies available to determine how diverse school learning environments affect secondary student attitudes toward learning and their academic accomplishment. Conceptual knowledge and technical skills are separated as cognitive outputs. Conceptual knowledge is examined in all 13 publications, while technical skills are assessed in two of them [34], [35]. Conceptual knowledge referred to understanding and knowledge of the issues covered in the laboratories. Only one article detailed the questions presented to the learners [36]. Through intrinsic goal motivation, self-esteem was revealed to predict learning techniques indirectly. The paraphrases in this discussion have quoted the source or reference in full. The sentence in this section is a continuation theory from the previous sentence and the idea is quoted partially. This fact means that the author in detail paraphrases any new findings of earlier studies related to learning outcomes. The four theories are combined into one complete sentence where there are keywords in it so that a new grouping of paraphrased ideas is found. In addition, each concept of research results is paraphrased so that it becomes a relevant reference.
It was also emphasized that learning outcomes are a description of someone's talents or skills developed in thinking, acting, and doing [37]. The original theory is created a more comprehensive evaluation system for assessing cognitive, emotional, and psychomotor activities independently. The paraphrases in this discussion have quoted the source or reference in full. In addition, the paragraph referred to is still an explanatory connection from the previous sentence. Overall, the theories quoted and paraphrased are concise and clear. In this section, there are three aspects raised by the author and paraphrased in different language styles. This situation means that the three aspects are quoted in their entirety using synonyms.
Learning outcomes are changes in abilities, attitudes and habits, comprehension, knowledge, and study that are synonymous with the categories cognitive, affective, and psychomotor as a result of the act of learning [38]. It found an original theory that psychomotor results, such as efficiency, accuracy, and response magnitude; cognitive outcomes, such as comprehension, knowledge, application, and analysis; and emotional consequences, such as satisfaction, attitude, and appreciation for the learning experience, are examples of learning outcomes [39]. This source referred to have been quoted in their entirety, and the references are clear. Furthermore, the theory in this section is also related to the previous sentence. Based on the three important aspects described previously, the writer looks for relevant data and facts to confirm the ideas of the prior theory. Facts and data obtained through relevant research results. Then the theory is quoted and paraphrased. There are many points in the original theory, but the author describes it in detail in his own words to get a concise and clear conclusion. The theory is not quoted in its entirety but focuses on important aspects of cognitive, affective, psychomotor, and German learning outcomes. Learning outcomes are said to be flawless if three factors must be met: cognitive, affective, and psychomotor [40]. For the cited theory, the original theory is found Bloom's Taxonomy is a commonly used paradigm that distinguishes three types of learning: cognitive, psychomotor, and affective [41]. The sources referred to have been quoted in their entirety, and the references are clear. To be more complete, the authors add references that are relevant to three aspects, namely cognitive, affective, and psychomotor. Overall, the author cites the theory in its entirety because these three aspects form the basis for the ideas that have been presented.
Learning outcomes are a transitional process that applies to individuals learning and is related with changes in knowledge, understanding, and competencies [42]. While the original theory is Relationships with people and the growth of selfknowledge shape an individual's self-concept. An individual's vision of the world and behavioral habits are heavily influenced by self-concept [43]. Criticts for Source integrity is the sources referred to have been quoted in their entirety, and the references are clear. Implicitly the meaning of the original theory is paraphrased by the author by linking the formation of self-concept through collaboration with others, development of knowledge, and the influence of perceptions on the environment and behavior of role models. Thus, paraphrasing does not represent the whole theory, so some parts are omitted. When a person has completed the learning process, their behavior will transform. Modifications can be seen in the following ways: Knowledge, Cognitive skills, Motor skills, Affective learning outcomes, and Communicative learning outcomes [44]. Related to the above theory, the author continues his quote that when a person has completed the learning process, their behavior will transform. Modifications can be seen in the following ways: knowledge, cognitive skills, motor skills, affective learning outcomes, and communicative learning outcomes [44]. The original theory is cognitive learning outcomes are further classified as knowledge and cognitive skills, physical abilities, emotional learning outcomes, and communicative learning outcomes [45]. The sources referred to have been quoted in their entirety, and the references are clear. In this paragraph, the author cites many relevant theories relating to three aspects, namely cognitive, affective, and psychomotor. Each element is described in detail, supported by data and facts from research results and opinions from experts so that the ideas conveyed by the author are scientifically acceptable. In addition, the author uses the style of citing theory in its entirety and paraphrased briefly and clearly according to the author's needs.
The results of teaching and learning German are the modifications in knowledge, understanding, and attitudes that students experience [46]. The theory cited by the author refers to a conclusion. If it is related to the original theory in the article quoted by the author, namely code-switches (CS) gives a unique perspective into the structural effects of language encounter," [47]. The use of event related potentials (ERP) to examine CS processing can give information regarding the effects of various sorts of language contact scenarios, as well as the unique cognitive processes at work in bilingualism. The sources referred to have been quoted incompletely, but the references are clear. The author quotes and paraphrases this theory to conclude the overall idea that has been conveyed previously related to learning outcomes. In this section, the author only cites related situations in language learning to stimulate one's cognitive processes to improve learning outcomes. The paraphrased theory is incomplete. This fact means that it only focuses on points that are relevant to the research problem.
CONCLUSION
In general, reference sources are given academic awards by being quoted in their entirety. However, some reference sources are still less relevant to the research problem, and there are slight errors in citing reference sources originating from the same author. As a whole, the quotation or paraphrase has taken into account the integrity of the meaning with several techniques, namely the complete and partial techniques. Whole in the sense that the theory is paraphrased without losing the meaning. At the same time, the partial is divided into two, namely: i) Paraphrasing tends to quote important aspects according to the needs and objectives of the writing; ii) Combining several theories and then processing them briefly and clearly to become a new idea. This review is a reference and input for writers to remain professional in citing sources and paraphrasing theories, ideas, or ideas in writing scientific articles to avoid plagiarism. | 5,807.2 | 2022-08-22T00:00:00.000 | [
"Education",
"Linguistics"
] |
Analysis of Linguistic Complexity in Professional and Citizen Media
Structural linguistic characteristics are an important aspect of written communication. Previous research shows that linguistic complexity plays an important role in how people process information. With increasing popularity and readership of citizen journalism, questions of how structurally different this medium is from its professional counterparts and how this difference potentially affects readers become salient. Using automated content analysis methods, the present study investigates the differences in linguistic complexity across various citizen and professional journalism outlets. The analysis shows that the patterns of presenting political information across various media are different. These findings have direct implications for various branches of communication and journalism studies such as the knowledge gap hypothesis, language expectancy theory, and credibility research.
Introduction
Today "citizen journalism," "grassroots journalism or "participatory journalism" is an extremely widespread phenomenon. The proliferation of digital technologies, rapid growth of internet penetration, and the ease of access to a huge corpus of information, enables people of various professions to analyze, produce, and share political news content without the necessary condition of working at a traditional news media outlet. Citizen journalism is broadly defined as the participation of citizens in the political process by means of generation and dissemination of political information (e.g., Bowman and Willis 2003). However, the practice of citizens engaging in journalism may take various forms and mean different things depending on the context (Lasica 2003). A broader definition involves any action that disseminates political information, sometimes going as far as to term re-posts and social media "shares" as citizen journalism, while the more strict notions of citizen journalism refer to the creation of political content, such as commenting, writing a blog, etc. (e.g., Goode 2009). However, almost all of the definitions, regardless of how "strict" they are, share a few common threads. Citizen journalism is viewed as being an alternative to the mainstream media (Goode 2009), having a usercentric (as opposed to corporate) nature (Lewis, Kaufhold, and Lasorsa 2010), and being, at least to a larger extent, created by non-professionals.
The formal linguistic aspect of such communication, however, does not receive a lot of attention from journalism research in general, and definitely not in comparative accounts of traditional and newer forms of journalism. Language, nevertheless, plays a central role in politics, since it constituted simultaneously the ultimate medium through which political processes are communicated and the message of this communication. The structure of language in general, and language complexity in particular, affects the perceived credibility of a message (Jucks and Paus 2012). Political actors' credibility, for example, is very much dependent on how they structure their utterances (Wodak 1989, 115-118). Previous research has also shown that different linguistic characteristics of a message affect the way it is perceived by the audience (e.g., Kleinnijenhuis 1991). Structural characteristics of the message are thus a potentially important-but often overlooked-aspect of the product of journalists. The decrease of trust in the media for the past few decades (Moy and Scheufele 2000;Peters and Broersma 2013), together with the constantly multiplying modes of information dissemination and citizen engagement in the production process, make the question of analysis of various media, their structure, and effects on political processes as relevant as ever.
Political blogs are chosen as a prime example of citizen journalism to compare with traditional newspaper coverage. Political blogs are mostly user-centric and viewed in opposition to the professional, corporate media (e.g., Goode 2009). Although there are many manifestations of citizen journalism, political blogs resemble the conventional newspaper columns much more than any other form of citizen journalism, which makes them an appropriate candidate for a comparative structural analysis. It is important to understand structural differences in language complexity to then further understand the credibility competition between traditional and new news media.
Linguistic complexity is not extensively analyzed in the social sciences. The studies that have been interested in this topic, however, show evidence of its effects on political processes. One of the pioneering studies that investigated both textual and news complexity was Kleinnijenhuis' (1991) research that focused on the knowledge gap hypothesis (e.g., Tichenor, Donohue, and Olien 1970). This study concluded that newspaper complexity plays an important role in explaining the knowledge gap hypothesis. Other political processes, like political information recall and factual knowledge, are also affected by information complexity (e.g. Eveland and Cortese 2004). Thus, complexity plays an important role in how people process information. The current study attempts to comparatively analyze journalistic outlets, specifically, citizen journalism, quality newspapers, and tabloid newspapers, from a structural perspective, and tries to uncover the differences in linguistic complexity within these types of media. This project aims to answer the following research question: RQ1: Do professional newspapers (including quality and tabloid newspapers) differ in terms of structural linguistic complexity from citizen journalism media?
Theoretical Framework
Citizen journalism evokes a substantial amount of interest from journalism and political communication scholars since it effectively started a paradigm shift in media processes, and changes the framework of how information generation and dissemination occur. Citizen journalism blurs the lines between the creators and the consumers of political information. Traditional media rely on a hierarchical, top-down model, where information is created by an organization and "passed down" to the consumers. Citizen journalism removes the vertical distance between the creators and the consumers of information. Comparative research is a very popular approach when investigating the underlying mechanisms of citizen journalism. Since most of the theoretical tenets of journalism studies were ANALYSIS OF LINGUISTIC COMPLEXITY 1787 developed analyzing traditional media (e.g., gatekeeping research, credibility research, perceived roles, etc.), researchers nowadays apply these ideas to citizen journalism and crossreference the differences and the similarities of these media models (see, e.g., Hanitzsch 2007;Reese et al. 2007).
Credibility Research, Traditional and Citizen Media
Research on the perceived credibility of (citizen) journalism is a relatively new branch and is gathering momentum. Credibility research is concerned with two aspects of communication: source credibility and medium credibility (Kiousis 2001). Source credibility research focuses on the characteristics of the communicator, and how they can affect the processing of the message (e.g., Hovland and Weiss 1951;Pornpitakpan 2004). The role of the communicator can be taken by an individual (e.g., journalist) or a group or institution (e.g., newspaper, publishing house). Conversely, medium credibility analyzes and focuses on the channels that are used to transmit information (e.g., print media, television or internet). The concept of a source in citizen journalism is somewhat of a debate; since citizen journalism is decentralized, it is often hard to pinpoint where exactly the information is originated, or who is actually responsible for the dissemination of this information. In traditional media models, journalists (who are themselves perceived as sources by the public) often rely on limited amounts of trusted sources to obtain information, therefore reducing the amount of resources to verify information (Williams and Delli Carpini 2004). Citizen journalism alters the established journalistic routines-ordinary citizens are more and more accepted as information sources by professional journalists and are considered among the most important sources of information (De Keyser, Raeymaeckers, and Paulussen 2011).
Credibility, however, is associated not only with who is the source, or how the message was transmitted, but also with the content and the structure of the message itself. There are two main arguments about how language complexity may influence credibility: expertise, on the one hand, and comprehensiveness, on the other. Studies on the effects of these two distinct dimensions produce mixed results (Blobaum 2016, 230-232). Technicality in language, for example, may come off as being complex and therefore increase the perceived credibility of the message (Jucks and Paus 2012). Moreover, students perceive academics that use complex language as more credible and their complicated explanations a function of the academic's expertise (Thiebach, Mayweg-Paus, and Jucks 2015). Contradictory to the aforementioned studies, however, Scharrer et al. (2012) have shown that written arguments are perceived as being more believable when they use less technical language; thus less complex-more credible.
These studies, while ending up with contradictory results, do agree on one thingthere are social expectations and assumptions as to what kind of language a certain social group should use, and when these assumptions are violated, the perceived credibility of the information changes. The social groups that are required to meet expectations include, but are not limited to, academics (Thomm and Bromme 2012), doctors (Blobaum 2016, 230), political actors (Pfau, Parrott, and Lindquist 1992;Dillard and Pfau 2002), and media actors (Pfau, Parrott, and Lindquist 1992;Burgoon, Denning, and Roberts 2002).
Language expectancy theory (LET), developed by Burgoon, Denning, and Roberts (2002), addresses the effects of the linguistic structure on the persuasiveness of the message. The main idea behind LET is that people develop social and cultural expectations regarding the use of language, and these expectations further lead to classifying the message as being persuasive or not (Burgoon and Miller 1985). The linguistic patterns that are expected from a social group may be of a different nature: it could be aggressiveness (Pfau, Parrott, and Lindquist 1992), humor, irony, praise (e.g., Averbeck 2010) or the complexity of the language. A study of the credibility of online reviews, investigating the effects of lexical and semantic complexity, hypothesized that more complex messages would commit positive expectancy violations and, therefore, increase the credibility of the review (Jensen et al. 2013). The authors argued that, since the average semantic and lexical complexity in the review messages is relatively low, having a message with high complexity would seem more competent and, by extension, more credible in the eyes of the reader. Other studies have shown that the strong normative expectations of language use are not based on an individual level, but rather on a group or organizational level, for example, scientists and doctors are expected to conform to different linguistic patterns than manual labour workers (e.g., Buller et al. 2000;Jensen et al. 2013;Paus and Jucks 2011). Despite its advantages, LET is not applied extensively in journalism research. Interestingly enough, Burgoon initially thought of media analysis as being one of the main applications of LET (Burgoon, Denning, and Roberts 2002).
Research on journalistic roles is a branch that could greatly benefit from the incorporation of LET and linguistic complexity. For example, analyzing the difference in how professional and citizen journalisms see themselves, it was found that both groups take on a role of an interpreter, providing analysis and interpretation of complex problems and translating them for the public to read (Nah and Chung 2012). While both groups think of themselves virtually the same in this regard, knowing the linguistic complexity of their "interpretations" may provide further answers about their perceived roles, and how they fulfill these roles. Journalists also differ in what they think is credible-professional newspaper journalists often rate online news as being less credible than print newspapers because they are concerned that the proper journalistic norms and routines are not as rigorous (or absent altogether) from the new online environment (Singer 2004;O'Sullivan and Heinonen 2008).
Professional newspaper journalists, however, differ in their role perceptions among themselves as well. Tabloid media journalists, being more market-oriented, are placing higher values on the coverage of private sphere topics, entertainment, and human-interest stories, and are making less emphasis on investigative journalism, while quality journalists regard public sphere topics and investigative journalism as having high value (Beam 2003). Thus, market-oriented journalists do not necessarily see themselves as the ones who should portray the news as objectively as possible, but rather to provide the reader with the most interesting story. Tabloid journalists often see themselves employing a skill-set different from that of their colleagues in the quality media, as well as having a different view on what "quality" and "truth" in journalism actually mean (Deuze 2005).
It could be argued that linguistic structure (including textual complexity) has either a direct or an indirect connection with professional journalist routines-e.g., rigorous formatting of language, adherence to predefined style, barring the use of emotional words, colloquialism, etc. As much as journalists build up expectancies towards the structure of the news (Singer 2004), readers may also develop some preconceived notions as to how different media should be structured-e.g., they may think that professional quality newspapers should employ a sterile, objective language and articles should be rich in information, tabloid newspapers should employ captivating language and story structure, while ANALYSIS OF LINGUISTIC COMPLEXITY 1789 citizen journalism news should be easy to read and approachable. Also, as was already discussed, these expectations have direct implications for the trustworthiness and the credibility of news.
Complexity, Traditional and Citizen Media
Linguistic complexity is an important factor across different disciplines, and more importantly, across various branches of communication research. Understanding how complexity affects different media may open a way for a plethora of new research advancing already exciting theories. Combining LET with previous research on credibility and research that shows the effects of information complexity on various political processes, like political factual knowledge (e.g., Eveland and Cortese 2004), political socialization, acquisition of political information or political information retainment (e.g., Kleinnijenhuis 1991;Eveland and Dunwoody 2001), comparing the linguistic complexity of citizen journalism and traditional media may have implications for future research. The branch that seems most likely to benefit from knowing the effects of text complexity is credibility research. As discussed previously, complexity directly influences credibility. Yet, despite this fact, there is an evident lack of research concerned with complexity and journalistic/news credibility. First of all, does the language employed by different media actually differ? Does more complicated, technical, convoluted political language increase the perceived credibility of the medium, thereby contributing to more trust in the journalistic product? Or does explaining politics in layman's terms, therefore rendering it more comprehensible to citizens, imply a better understanding of a subject, and thus a better approach to formulate a credible message?
Moreover, textual complexity may be of interest to researchers occupied with LET, especially in journalism research. If there is empirical evidence that certain expectations exist for social groups to conform to linguistic patterns, it is not unreasonable to assume that these expectations extend to media organizations, models, and journalists. An interesting extension to LET is the fact that various media are not only expected to conform to certain linguistic and stylistic standards, they also define these standards for themselves. While not directly related to citizen journalism studies, research on newspaper language has shown that broadsheet newspapers use a completely different style from that of tabloid newspapers, even when covering the same topic (Fowler 2013). What is even more important here is that the language that one medium considers to be appropriate is frowned upon by the other medium (Bagnall 1993). It is often the case that media have very distinct audiences-professional broadsheet newspapers, for example, feeling threatened by the proliferation of digital information, are beginning to cater to a shrinking circle of "elite" readership (Meyer 2008). Citizen journalism, on the other hand, strives to spread information as freely as possible (Bruns, Highfield, and Lind 2012). Some researchers argue that different audience characteristics of the media force these media to employ different structures of language (e.g., Fowler 1991). For example, quality newspapers could be expected to use more complex language than blogs, etc. And if they do, how do expectancy violations by these media affect their credibility? These are potentially important avenues for journalism research in times of declining trust in traditional news outlets and increasing success of alternative forms of news distribution. To begin systematically addressing these issues, one would need to have empirical data showing the difference (or lack thereof) in linguistic patterns in traditional and citizen journalism. This study is intended to be a first step in bridging this empirical and theoretical gap.
Such a comparative study of the structural difference of language between different news media formats seems even more important if we consider the fact that most people select the medium they consume, and the fact that social stratification and education are a very strong predictor of media choice (Chan and Goldthorpe 2007). In the same study, the authors argue that the choice of the medium may be highly associated with the reader's information-processing capacity. This argument ties back to the knowledge gap hypothesis and consequently the question of linguistic complexity is even more relevant.
The definition of complexity and its operationalization differs across studies, and most of the studies focus on a single aspect of linguistic complexity. Three different dimensions, however, can be outlined-semantic complexity, syntactic complexity, and information entropy. Semantic complexity is the hardest to both define and measure. It is most often operationalized by proxy of lexical richness and lexical diversity, that is, in the simplest terms, it measures how many unique words are used in a selected text (e.g., Malvern and Richards 2002). Syntactic complexity, on the other hand, is related to the formal structure of the text: length of its words, sentences, clauses, etc. One of the first scales developed to measure text complexity was the Flesch Reading Ease Test (Flesch 1948), which effectively measures syntactic complexity as well. Finally, complexity can be measured as information entropy (Shannon and Weaver 1964). This measure was used in Kleinnijenhuis' study to determine news complexity, however it can also be applied to estimate semantic characteristics of the text (Dale, Moisl, and Somers 2000, 551). The present study aims to use all of the aforementioned dimensions of text complexity and apply them in a comparative design to investigate whether professional newspapers differ between themselves and from citizen journalism media in terms of complexity.
Even though there is only a limited body of research on the structural differences in different media, drawing from previous studies on citizen journalism and traditional media a series of hypotheses can be formulated. First, citizen journalism, quality newspapers, and tabloid newspapers are expected to differ regarding the structure of the language they use. There are clear structural differences between the three media to expect such variation. Traditional newspaper articles go through a long process of editorial selection that most of the citizen journalism outlets either lack or these routines are not as rigorous (e.g., Goode 2009). The labor-intensive work of correcting bad grammar and adjusting the language to the newspaper's standard is viewed as a service to the reader and a pride of a newspaper (Thurman 2008). Editing changes not only the content, but the structure of language used in a final article, the process that sometimes is referred to as the "creation of language" (Bell 1991, 80-85). The first hypothesis, therefore, is: H1: Quality newspapers, tabloid newspapers, and citizen journalism articles will be significantly different on scores of text complexity.
For reasons discussed above, and because technical, dense texts are expected to be highly syntactically complex (Miller and Miller 2011, 65), it expected that quality newspaper articles will be more syntactically complex than citizen journalism articles and tabloid newspaper articles.
H2: Quality newspaper articles will be more syntactically complex than both tabloid newspaper articles and citizen journalism articles.
Expectations regarding semantic differences between the three formats are more tentative. While quality newspapers undergo a strict editorial process during their ANALYSIS OF LINGUISTIC COMPLEXITY 1791 production, which quite possibly standardizes the language used in the articles, citizen journalism and tabloid media articles may use a greater variety of styles (Borjars and Burridge 2013), jargon, and emotional language, hence allowing for a broader, less sterile use of language (Timuçin 2010). This would result in higher semantic variety. We thus cautiously expect that: H3: Citizen journalism and tabloid newspaper articles will be more semantically complex than traditional newspaper articles.
Method
Addressing RQ1, a series of analyses were performed on a large sample of Englishlanguage newspapers and English-language political blogs. Additionally, a sample of German-language quality and tabloid newspapers were included to determine whether the findings regarding newspaper differences would hold across linguistic and journalistic environments. 1 The newspaper samples were obtained from digital newspaper repositories. To gather political blog articles' data, a series of Python scripts was written to scrape articles from their respective websites. For subsequent data preparation and data analyzes, scripts in Python and R programming languages were written. Natural language processing was performed with the spaCy Python module.
Sample
Two quality newspapers (New York Times and Washington Post), three tabloid newspapers (New York Post, USA Today, and Los Angeles Daily News), and five political blogs (FiveThir-tyEight, The Daily Beast, Breitbart, Politico, and The Wall Street Journal Blog) were used in the English-language sample. The German sample comprised of two quality newspapers (Frankfurter Allgemeine Zeitung and Süddeutsche Zeitung) and a tabloid newspaper (Bild). The newspapers were chosen by largest average daily circulation and blogs were chosen by popularity. Both for the newspaper sample as well as for the blog sample only texts pertaining to politics were analyzed, while every other topic was filtered out. A total of 927,593 texts were analyzed. The sample represented five years of political coverage, from 2011 to 2015.
Variables
Syntactic complexity. A few simple measures, such as the average word length and the average sentence length, were used as rudimentary metrics of syntactic complexity. A more advanced metric was the syntactic depth measure (e.g., Yngve 1960). This measure determines the length of a parsed syntactic tree for the base word to the terminal word. The syntactic dependency metric takes a different approach to generating syntactic structures. Only words are used as nodes in a dependency tree (unlike verb and noun phrases in a syntactic tree) (e.g., Nivre 2005). Syntactic measures also included the automated readability index (ARI)-a readability measure that is independent of languagespecific linguistic features (such as the number of syllables), and therefore is better suited for performing analyses on multiple languages.
Semantic complexity. Type/token ratio is the simplest way to determine the lexical characteristics of the text. The metric represents the ratio of the unique words in the text to all words (text length). While the simple type/token ratio is the most common approach to measuring the lexical diversity of a text, some argue that its applicability is limited only to short texts because in a longer text the words inevitably start to repeat (e.g, Fergadiotis, Wright, and West 2013). MTLD (Measure of Textual Lexical Diversity) is a measure that is maximally independent of the text size (Covington and McFall 2010). The algorithm calculates the type/toke ratio sequentially and counts the number of sequences that do not fall under the specified threshold. Another measure of textual semantic diversity was proposed by a mathematical statistician Yule (2014). Yule's I metric is an attempt to fit a hypergeometric distribution to word frequencies, and it shows a probability of two randomly sampled elements from a set being the same. Finally, semantic entropy is a logarithmic measure based on the information theory entropy (Dale, Moisl, and Somers 2000, 551). Entropy is a measure of uncertainty. For example in a string "AAAAAAA", entropy is 0, since there exists no uncertainty as to what the next symbol would be. However, if the string was composed of a random sequence of "A"s and "B"s, entropy would be 0.5. Content complexity. This measure was used to determine the newspaper complexity as well as frame complexity in Kleinnijenhuis (1991) and Kleinnijenhuis, Schultz, and Oegema (2015). As semantic entropy, this metric also measures uncertainty. The uncertainty here is associated with the information in the text. Named entity recognition algorithms were used to extract entities from a text (people, geographic locations, and organizations). High content complexity score indicates a uniform distribution of named entities throughout the text, implying a higher complexity of the content presented in the text.
To facilitate easier understanding of the results and further analyses, all scores for the different measures were normalized (mean = 0; SD = 1).
Descriptive Statistics
As evident from Figure 1, variables' mean scores differ depending on the medium (blogs versus tabloids versus quality newspapers). Variables measuring syntactic text complexity-mean length of sentence, mean sentence depth, syntactic dependency depth, and ARI-appear to be much higher for quality newspapers than for citizen journalism blogs and tabloid newspapers. However, the reverse is true for most scores corresponding to the semantic complexity-lexical diversity, semantic entropy, and Yule's I are all higher for the tabloid newspapers, followed by the citizen journalism blogs and quality newspapers. Finally, the variation in the content complexity between the different types of media is not as striking as in the previous dimensions. These preliminary observations suggest that, while having a more complex structure, political articles found in quality newspapers are less lexically and semantically diverse.
Cluster Analysis and Factor Analysis
Data reduction techniques are applied to come to a more general conclusion regarding medium differences. For theoretical reasons, the content complexity variable was not included in the dimension reduction model, but was treated as a separate dimension.
ANALYSIS OF LINGUISTIC COMPLEXITY 1793
Content complexity is not a measure of linguistic characteristics in the sense that the other presented variables are. It is bound to mathematically co-vary with the other variables, since it is based on word counts, however, it measures how complex the information encased in the text is, rather than how complex the language in the text is. First, a hierarchical cluster analysis was performed to be able to visually inspect how different variables load on separate complexity dimensions. Conforming with expectations, the analysis yielded two large clusterscorresponding to semantic and syntactic dimensions of textual complexity. The visualization of the hierarchical cluster analysis is presented in Figure 2.
Factor analysis was then performed to further investigate whether the measured variables correspond to the expected two-dimensional model. A two-factor solution was the optimal model fit-only two factors had eigenvalues higher than 1, 4.90 and 1.92, the eigenvalues dropped steeply after-with third and fourth factors scoring only 0.45 and 0.32, respectively. The data satisfied the statistical assumptions of sampling adequacy (KMO = 0.75) and homogeneity of variances-Bartlett test of sphericity (p < 0.001). 2 To provide an additional face validation to the method, a model with Germanlanguage data was also estimated. After separating the sample by language (English and German)-the results stay very similar-which indicates that the dimensionality of textual complexity is not an isolated incident, but rather a more general model of text. Descriptives for the full factor analysis model, as well as German and English models are provided in Table 1.
FIGURE 1
Descriptive boxplots for all variables entered in the data reduction model. All variable scores are normalized (mean = 0; SD = 1). Black boxplots: citizen journalism blogs; dark grey: quality newspapers; light grey: tabloid newspapers
Cross-media Comparison
After conducting the factor analysis, the extracted factors, as well as the content complexity variable, were then used to analyze the differences in complexity dimensions across the three different media (citizen journalism blogs, professional newspapers, tabloid newspapers). A boxplot for normalized factor scores (mean = 0; SD = 1) is provided in Figure 3.
As can be seen from the figure, professional newspapers score higher on both syntactic and content complexity. Citizen journalism blogs, however, are semantically more
FIGURE 2
Cluster dendrogram for descriptive cluster analysis complex than their professional counterpart. Professional newspapers are more complex regarding the content of the articles. For clarification purposes a table with sample sentences from the analyzed outlets is provided in Table 2. One-way ANOVA models were estimated to compare the complexity dimension means across media. All ANOVA models for complexity dimensions were significant at p < 0.001: F(2, 927,590) = 49,413 for the semantic model, F(2, 927,590) = 61,504 for the syntactic model, and F(2, 927,590) = 5538 for the content complexity model.
It is important to note that these patterns hold for separate language models as well. German-language quality newspaper articles are significantly more syntactically complex (mean = 0.30; SD = 0.99) than German tabloid articles (mean = −0.77; SD = 1.01), F(2,
FIGURE 3
Descriptive boxplots for complexity dimensions. All variable scores are normalized (mean = 0; SD = 1). Black boxplots: citizen journalism blogs; dark grey: quality newspapers; light grey: tabloid newspapers The semantic and syntactic complexity scores are scaled (mean = 0; SD = 1). For content complexity a score of 0 indicates no entropy (e.g., only one named entity), while a score of 1 indicates maximum entropy (e.g., multiple named entities, without repletion). The sample paragraphs may not be completely representative, since the automated content analysis method was designed to work with full articles. For example, content complexity of 1 or 0 is very improbable in the full articles.
ANALYSIS OF LINGUISTIC COMPLEXITY 1797
156,853) = 47,399, p < 0.001. German tabloid articles are significantly more semantically complex (mean = 0.48; SD = 1.06) than quality media (mean = −0.19; SD = 0.91), F(2, 156,853) = 15,406, p < 0.001. Content-wise, German tabloid articles (mean = 0.09; SD = 1.01) are more complex than quality newspapers (mean = −0.04; SD = 0.99), F(2, 156,853) = 588.01, p < 0.001. Thus, all three hypotheses in this study were confirmed. The three newspaper text types are significantly different from each other (H1); professional newspapers are more syntactically complex than both the tabloid newspapers and citizen journalism articles (H2); and citizen journalism texts and tabloid newspapers score significantly higher on the semantic complexity metric than professional newspapers (H3). Additionally, it was established that similar complexity patterns are occurring across English and German languages.
Conclusion and Discussion
In the present study, the structural characteristics of political journalistic texts from three different media types were compared, specifically professional newspaper articles, tabloid newspaper articles and political weblog articles. The aim of this research was to investigate whether textual complexity differs between these media, and in particular whether citizen journalism texts would diverge from the other two professional types of texts. First, a range of textual complexity measures was chosen from fields ranging from linguistics to social sciences and applied to ∼930,000 articles. It was determined that these measures actually gauge three different dimensions of textual complexity: syntactic, semantic, and content complexity. The content complexity measure was not entered with the other variables into a data reduction model because it is, unlike the rest, not a measure that determines the linguistic characteristics of a text.
A series of comparisons determined whether professional quality and tabloid newspapers differ in complexity dimensions from political blogs. Quality newspapers had higher syntactic complexity than the other two text types, but citizen journalism and tabloid newspaper articles scored higher on the semantic and content dimension. One possible explanation for these findings is the fact that professional newspaper articles go through a rigorous process of filtering (journalistic routines, gatekeeping, standardization of language, etc.) before actually being printed. For the blogs, this might not be the case. An individual writer of a blog article may not need to think about a "standard language template", or an emotional tone of the article, therefore utilizing various synonyms, colloquialisms, and jargon that is not available to the professional journalist, making the text lexically richer. Additionally, it has been noted that tabloid newspapers often employ a different style of language than that of quality media, even when covering the same topic (Fowler 2013), and also simplify their text to accommodate the audience (Zelizer et al. 2000). These characteristics of tabloid journalism may explain the high semantic complexity (e.g., very emotional language, non-standard colloquialisms, etc.) and low syntactic complexity (a relatively simple style). These patterns hold separately in German and English languages, indicating that the media landscape is quite similar in terms of language complexity across different languages.
An important conclusion from these results is that quality newspapers have the most complex syntactic structure, while citizen journalism blogs and tabloid newspapers include semantically rich text. This indicates that a syntactically complex text is not a necessary prerequisite for a complex political story (high semantic and content complexity). The relevance of these findings is significant, since many branches of journalism research could benefit from the addition of complexity as a factor. For example, researchers may be interested in how complexity affects credibility, whether these effects are different across different media, or whether different dimensions of complexity have a different effect on credibility, among others.
The present study also included a number of limitations. For example, the measures chosen to determine the linguistic complexity, although covering a wide range of linguistic nuances, are in no way exhaustive. Experimenting with other complexity metrics and investigating how they are related to political texts would add precision to the research on complexity. The second major limitation is the fact that the present study did not discriminate between various topics within the broad category of political texts. It could be the case that different political topics vary widely in terms of text complexity even within one particular publication. Future researchers should also strive to include more languages into the comparative design. This would help to understand how journalism is affected by text complexity across both the linguistic and cultural environments. Addressing these issues, further research would be able to come even closer to understanding the intricate interplay between linguistic complexity and journalism.
Admittedly, the present study was interested solely in the structural linguistic differences between various types of journalistic outlets. The nature of the automated content analysis method is such that, while it allows the analysis of amounts of data that would be unimaginable for humans to deal with, thereby uncovering meaningful patterns that would otherwise be hard to see, it pushes the human dimension of communication to the background. This study suffers from this limitation as well. However, even though automated methods do not fully capture all aspects of analyzed text, they might provide an insight for future research, giving us the ability to investigate the interplay between the content and the structure.
In summary, notwithstanding the limitations associated with this study, it provides valuable insights into the role of text complexity in journalism and how quality newspapers, tabloid newspapers, and citizen journalism articles differ in terms of text complexity and across its dimensions. It is now evident that in order to have a holistic understanding of textual complexity, future research must incorporate a range of measures to correctly determine the structural characteristics of text.
Moreover, the structural linguistic complexity aspect of the journalistic media also plays a very important role in bridging various branches of communication research. Credibility research, knowledge gap research, and research on LET are all related via the linguistic characteristics of media. Previous research indicates that the structure of the texts matters in these cases, now that we have empirical evidence that different media vary in the complexity of their texts, one could start asking questions about whether people perceive complex texts to be an indication of expertise or comprehensiveness (credibility research). For example, are higher levels of complexity perceived to be a positive characteristic since they imply expertise, while a different medium actually benefits from lower complexity levels, since people expect this medium to be comprehensible (LET); and how does that pan out for different types of complexity? Or maybe the readership expects different media to be more or less complex depending on the educational background (LET and the knowledge gap hypothesis). Application of automated text analysis in journalism research allows the incorporation of computationally intensive ANALYSIS OF LINGUISTIC COMPLEXITY 1799 problems, such as the analysis of linguistic complexity, and opens new directions for further investigations (see also Boumans and Trilling 2016). A longitudinal or a crossnational study investigating such phenomena as tabloidization (e.g., Esser 1999) on a very large scale would now be more feasible. These questions are now open for future research and, if pursued, would undoubtably enrich the theoretical body of communication and journalism research. The present study is but a first step towards finding one of the missing puzzle pieces-political news textual complexity-in the wide picture of political communication research.
DISCLOSURE STATEMENT
No potential conflict of interest was reported by the authors.
1.
In the German context, political blogs play so far only a marginal role and therefore the analysis is now restricted to newspapers only.
2.
Eight variables were used in the factor analysis model: mean length of sentence, mean sentence depth, syntactic dependency depth, ARI, lexical diversity, semantic entropy, Yule's I, and MTLD. A null factor analysis model with eight factors and no rotation extracted two factors with Eigenvalues above 1 (4.90 and 1.92, respectively). A two-factor model with oblimin rotation was estimated. Oblimin rotation is appropriate in this case because crossfactor correlations may be discovered. The model explained approximately 83 percent of the variance in the data. The extracted components correspond to the expected twodimensional complexity model-clearly showing semantic and syntactic constructs. | 8,500.8 | 2017-03-29T00:00:00.000 | [
"Linguistics"
] |
Funding sustainable cities A comparative study of Sino-Singapore Tianjin Eco-City and Shenzhen International Low-Carbon City
: China has gone through a rapid process of urbanization, but this has come along with serious environmental problems. Therefore, it has started to develop various eco-cities, low-carbon cities, and other types of sustainable cities. The massive launch of these sustainable initiatives, as well as the higher cost of these projects, requires the Chinese government to invest large sums of money. What financial toolkits can be employed to fund this construction has become a critical issue. Against this backdrop, the authors have selected Sino-Singapore Tianjin Eco-city (SSTEC) and Shenzhen International Low-Carbon City (ILCC) and compared how they finance their construction. Both are thus far considered to be successful cases. The results show that the two cases differ from each other in two key aspects. First, ILCC has developed a model with less financial and other supports from the Chinese central government and foreign governments than SSTEC, and, hence, may be more valuable as a source of inspiration for other similar projects for which political support at the national level is not always available. Second, by issuing bonds in the international capital market, SSTEC singles itself out among various sustainable initiatives in China, while planning the village area as a whole and the metro plus property model are distinct practices in ILCC. In the end, the authors present a generic financing model that considers not only economic returns but also social and environmental impacts to facilitate future initiatives to finance in more structural ways.
Introduction
Hundreds of millions of people have migrated from rural areas to cities in China since the implementation of the reform and opening-up policy, which is unprecedented in human history [1], and this trend continues. It is estimated that approximately one billion Chinese people will live in cities in 2030 [2]. This trend challenges both central and local governments to mobilize limited financial resources to provide public goods and services, such as sustainable energy and green infrastructure, to their citizens. The rapid demographic and economic growth alongside this urbanization trend is also one of the causes for the environmental problems the world faces. As such, researchers and practitioners attempt to solve the problem by incorporating an environmental factor into urban development. In 2003, the 'U.K. Energy White Paper: Our Energy Future-Creating a Low Carbon two projects. Based on the comparative study, a generic financing model is developed to facilitate sustainable initiatives to deal with financial problems. Section 6 concludes the research.
Financial Instruments for Urban Development: Taking Stock
With the development of the economy, countries tend to take environmental issues more and more seriously and seek to transition their economy into a more sustainable direction. This trend thus poses new challenges for local authorities in funding various sustainable projects. Therefore, both researchers and practitioners have tried hard to explore new financial instruments that can be employed to expand the sources of finance. Merk et al. [12] argue that the main financial instruments in the principal green urban sectors include taxes, user fees, grants, Public-private Partnerships (PPPs), land-based income, loans, bonds, and carbon finance. These financial instruments are used to finance the development of transportation, buildings, water/waste, and energy. Inman [13] holds the view that local public services can be funded through user fees, resident-based taxation, and business-based land value taxes. Of these, user fees can be applied to both residential and business services, resident-based taxation is adopted to finance residential services, and business-based land value taxes are applicable to business services. Slack [14] presents some financial instruments for large cities, including user charges, tax, intergovernmental transfers, borrowing, PPPs and development charges. Bahl and Linn [15] divide financial instruments into own-source financing and external sources on the basis of financial sources. Own-source financing includes user changes and betterment levies, property taxation, and non-property taxes, while external sources encompass intergovernmental transfers, borrowing, PPPs, and international aid. Z/YenGroup [16] systematically explores financial instruments for financing sustainable infrastructures in cities. The research group identifies three instruments in general, namely, public finance, debt finance, and equity finance. To be specific, public finance instruments include land sales, land or infrastructure asset leaseholds, PPPs and Private-finance initiatives (PFIs), taxes, land value capture mechanisms, user charges and fees, grants and subsidies, building rights, and planning permits. Debt finance instruments encompass loans and bonds, de-risking and credit enhancement instruments, and debt refinancing instruments. Equity finance instruments consist of listed infrastructure equities, listed/unlisted equity funds, and equity-funded direct investments (e.g., special purpose vehicles and joint ventures) in infrastructure. Panayotou [17] takes stock of the available economic instruments for financing sustainable development, covering property rights, market creation, fiscal instruments, charging systems, financial instruments, liability systems, and performance bonds and deposit-refund systems. He [17] further identifies economic and financing instruments that can be employed for securing the global commons, including global environmental financing institutions, international environmental taxation, transferable development rights, internationally tradable emission permits, joint implementation and carbon offsets, and the clean development mechanism. Bäckstrand [18] stresses the important role of global partnerships in funding sustainable development. His study indicates that local governments can benefit from the Johannesburg partnerships by having a clearer connection to existing institutions and multilateral agreements and improving the effectiveness of local governance [18]. Olsen [19] proposes response strategies to key environmental challenges and divides them up into short-term, medium-term and long-term strategies. Of these, short-term strategies include dedicated investment funds, premium purchasing, mixed credits, and capacity development; medium-term strategies cover Green Investment Schemes, supporting unilateral clean development mechanism (CDM) and small-scale projects, and exploring ways to transfer climate change mitigation into sector programs; and long-term strategies should be negotiated in advance to reduce risks that bring about uncertainties. Instead of directly exploring financial instruments, Meltzer [20] discusses how to use concessional climate finance to facilitate the development of low-carbon resilient infrastructure projects. Methods include (1) developing an enabling environment and co-financing packages; (2) supporting local banks, the development of financial instruments, and low-carbon technology; (3) strengthening the monitoring of outcomes; and (4) improving cooperation between climate funds. Some researchers argue that whether general fiscal investment and innovative financing strategies show long-term effectiveness depends on the following criteria: 'adequacy, stability, efficiency, equity, ease of implementation, and political acceptability' [21].
Many researchers explore financial vehicles that can be used to finance the construction of sustainable cities, yet the identified financial instruments do not play equal roles in the amount of funds they bring to the table. Bahl and Linn [15] concluded that debt finance, PPPs, and land-based levies are effective instruments to finance urban construction; intergovernmental transfers and grant finance are of paramount importance; and user charges and property taxes are critical yet underused. In general, the financial instruments that large cities adopt should be in line with their responsibilities in providing infrastructure and services [14]. Many researchers shed light on mobilizing private capital in that the involvement of private sectors can alleviate local authorities' financial pressure [14,22]. Therefore, local authorities should pay attention to the needs and interests of private investors [23] and provide political support to enabling conditions that involve private parties [24,25]. Some researchers, such as Reichelt [26] and Sullivan et al. [23], hold the view that PPPs and bonds are two effective means to allow the private sector to participate in the development of climate-related projects. PPPs have been widely drawn upon to finance projects in many fields. However, practitioners need to overcome many difficulties when they apply PPPs, particularly in developing countries. In terms of bonds, the money raised through green bonds only accounts for a small percentage of the projected amount that is required to fill the gap that green projects cause [26]. Still, the green bond market is booming in China, for which the amount raised through green bonds has grown from $1 billion in 2007 to over $41 billion in 2015 [27]. To unlock the potential of green bonds, dialogues between policy-makers and stakeholders should be strengthened to clear away barriers and improve information transparency [28].
The reviewed literature has suggested various methods to bridge the financial gap, yet they are scattered. There is no general model taking into account the sustainability of financial vehicles for urban development. To fill this research gap, this study offers a model to bring different financial vehicles together by taking non-financial factors into account, making financial vehicles more resilient in future financing activities.
Methodology
We relied on both desk research and interviews for data collection. As for desk research, the information was retrieved from the academic literature, SSTEC and ILCC's websites, and other web-based reports, e.g., auditing reports and working papers that have been published by the World Bank and the United Nations Environmental Program. In addition, we interviewed 20 people in total whose work is closely related to the two projects. Of these, 11 interviewees were working in or with SSTEC in the period April-July 2015, including officials, developers, financial staff, and project managers. In February 2016, we revisited the SSTEC site and stayed there for one week to collect additional information. We also visited the ILCC site in the period February-March 2016 and interviewed nine people working in or with ILCC. The first author conducted the interviews, and the language was Chinese. The interviewee's names are not presented due to confidentiality.
In addition, the research drew on the authors' earlier work, including the two most recent and direct companion articles Zhan & de Jong [29] and Zhan & de Jong [30]. The two articles were about how sustainable cities were financed in Tianjin and Shenzhen, respectively. Based on the similarities and differences across the two cases, lessons were drawn to benefit other sustainable cities.
An Overview of the Tianjin and Shenzhen Projects
ILCC is a demonstration program and a collaboration between China and the European Union (E.U.) on sustainable urbanization, aimed at displaying China's achievements in low-carbon technology. ILCC was launched in 2012 and covers a planned area of 53.4 km 2 . It is located in the Longgang District, Shenzhen, China, at the border of Dongguan and Huizhou in Guangdong province [31]. Currently, the economy in Pingdi is still underdeveloped, while the carbon emission levels are high. As a flagship project of the China-E.U. Partnership on Sustainable Urbanization, the Shenzhen municipality is trying its best to develop ILCC into a pilot area to realize a great leap forward in urban developmental planning under the concept of integrating industry with the city, green urban management, and benefit sharing under the constraints of carbon indicators to eventually provide replicable pathways for low-carbon development in future urbanization [32].
Sino-Singapore Tianjin Eco-city is a project that was launched as a collaboration between the Chinese and Singaporean governments. In November 2007, the Framework Agreement between People's Republic of China and Republic of Singapore about Building an Eco-City in the People's Republic of China and Supplementary Agreement of this framework was signed. It was a new highlight and key project between the two countries following the establishment and development of the Suzhou Industrial Park. Sino-Singapore Tianjin Eco-City aims to develop itself into a new city that is economically vibrant, environmentally friendly, resource-efficient, and socially harmonious, and to provide a reference for other cities in China [33].
To give an overall picture of the two cities, Table 1 displays a profile for each of Tianjin and Shenzhen. From the table, we learn that the Tianjin project started in 2007, which was five years earlier than the Shenzhen project. SSTEC covers 30 km 2 , 23.4 km 2 less than ILCC. However, SSTEC is built in an area consisting of salt pans, saline-alkaline non-arable land, and polluted water bodies. Each component takes up one-third of the land. The construction of SSTEC has a symbolic meaning both in China and elsewhere, since the Tianjin project builds a city from scratch. In contrast, ILCC is built on an existing city, but makes the transition by upgrading its industries to lower-carbon-emission industries. The differences in these aspects require the central government to be involved in the construction of SSTEC to a larger extent than in the Shenzhen project.
Geographic conditions
An area consisting of deserted salt pans, saline-alkaline non-arable land, and polluted water bodies.
Nearly half of the Pingdi Avenue is mountain area, of which 40% is natural reserve land; the other half has been urbanized.
Goals
To establish a replicable eco-city that is resource-saving, environmentally friendly, economically robust, and socially harmonious. Sino-Singapore Tianjin Eco-city has a planning area of approximately 30 km 2 and will be established in 10-15 years with an estimated population of 350,000.
To build a low-carbon technology research and development center and a low-carbon technology integration application demonstration center, a low-emission industry gathering center, a low-carbon solution provider center, and a low-carbon development service center Industries Cultural creation, environmental protection, high technology, specific finance, information technology and related services, and green building.
Service industry, information technology (IT) industry, energy and environmental protection industry, modern agricultural industry, low-carbon economic new material industry.
Comparing the Financial Vehicles the Two Projects Employ
Financing sustainable urban development has become a major issue, especially in Asian countries where the size and scale of construction efforts are vast. Here, we compare the cases of ILCC and SSTEC to identify the similarities and differences in the financing vehicles that they employ (see Table 2).
Similarities in Financing Vehicles
ILCC uses bank loans and corporate bonds to provide funds for its construction, which are employed by SSTEC as well. Although SSTEC and ILCC both draw upon bank loans and corporate bonds to finance their construction, they differ from each other. For instance, bank loans and corporate bonds in Shenzhen are carried out in the name of the Shenzhen Special Zone Construction and Development Group Co., Ltd. (CDG), which is a financing platform of Shenzhen Municipality. In contrast, bank loans and corporate bonds in Tianjin's case are arranged through Tianjin Eco-city Investment and Development Co., Ltd. (TEID), which has been regarded as an innovation of the Tianjin project since TEID has six stakeholders and separates the functions of local authorities from the company [29]. Regarding bank loans, both projects have a close connection with banks, so they can obtain large loan sums. For example, TEID is strongly backed by the public sector, which is helpful for the company in obtaining bank loans because government-backed projects are regarded as more reliable [34]. In addition, TEID cooperates with 12 banks, diversifying the sources for obtaining bank loans. As for ILCC, CDG plays an instrumental role in acquiring bank loans. CDG, as a financing platform, helps the government raise funds for its construction. Shenzhen Municipality packed the prime assets of its state-owned corporations to found CDG, which is conducive to CDG's obtaining bank loans. Seen from this aspect, the two projects are similar.
Differences in Financing Vehicles
However, the corporate bonds issued by the two corporations are different. CDG issues bonds in the Chinese capital market, while TEID issues bonds in the Singaporean capital market except for in China. It was the first time that a Tianjin-based non-financial company issued bonds in the international capital market, which is one of ILCC's major contributions in funding sustainable cities [35,36]. Issuing bonds in the international capital market not only reduces financial costs but also sets an example for other non-financial companies to raise money for sustainable projects internationally by issuing bonds.
The two cases also differ from each in arranging PPP. They both make use of international funds and domestic private funds, yet they vary in the detail. Foreign capital in the Tianjin case is predominantly from Singapore, including the Singaporean consortium led by Keppel Corporation, other Singapore-based companies, and the public in Singapore. However, ILCC has more diversified international cooperation. It originally wanted to utilize the same strategy as the Tianjin project to finance its construction. In particular, Shenzhen Municipality wanted the Dutch government to invest money in ILCC, yet it did not succeed in introducing the strategic partner, since the Dutch party just wanted to play a consultancy role in the construction [37]. Shenzhen Municipality has, since then, tried to diversify its partners by introducing companies from Germany, the Netherlands, Japan, and America [38]. This was one of the reasons for Shenzhen Municipality to change the low-carbon city's name from Sino-Dutch Low-Carbon City into Shenzhen International Low-Carbon City [32].
As a component of its PPP arrangement, Shenzhen makes use of planning the village area as a whole (PVAW) and the 'metro plus property' model to fund the low-carbon city, which are regarded as two innovations of the Shenzhen case in financing its construction. On the one hand, PVAW is a new means to consolidate and reserve land taking into account the benefits of aboriginal residents, small enterprises, and other scattered landowners. PVAW does not merely give monetary compensation to landowners in ILCC but also allows them to participate in the construction by contributing their land. In the process, the benefits of different stakeholders have been balanced, and thus social conflicts have been alleviated. On the other hand, the 'metro plus property' model offers another option for local authorities to arrange financial issues. Local authorities grant the franchise to a subway company, allowing it to construct and operate the metro. Meanwhile, local authorities allow the subway company to develop real estate along the line to subsidize the loss that metro construction causes. This practice adds value to the real estate around metro stations due to the convenience of transportation while the prosperous real estate, in its turn, boosts the traveler flow and thus increases the revenue of the subway company. With the help of PVAW and 'metro plus property', private parties have been mobilized to participate in the construction of ILCC, which relieves the financial burden of local authorities. PVAW reduces local government's expenditure in expropriating lands while the 'metro plus property' model decreases local government's costs in building the metro.
The differences in finance between the two cases also include the assistance from national and international authorities and organizations. It plays an instrumental role in funding SSTEC, yet the amount in ILCC is so limited that it can be ignored.
Stakeholders Involved in the Two Cases
The literature includes an extensive discussion on how international, national, and subnational actors and the balance of their benefits in the construction of sustainable cities influence the sustainable financing in the two cities [29,30,39]. Since sustainable cities are long-term and huge investment projects, the risks associated with investing in them are also high. As such, it is of paramount importance to balance the interests of different stakeholders.
The key to success is the active participation of actors from financial institutions, including central banks, regulators and prudential official institutions, standard-setters, governmental departments (including the ministry of finance), and market-based rule makers (including stock exchanges and credit rating agencies). Other participants also play an instrumental role in the construction of sustainable cities. Market-based participants are banks, pension funds, and analysts. They participate in the construction through leadership, knowledge transfer, alliance building, and advocacy participation. Sustainable development communities are the Ministry of Environment, think tanks, civil society, and institutions (e.g., United Nations Environmental Programme). These participants bring professional knowledge, build alliances, and raise public awareness. International organizations are related to financial system development: policy reform, knowledge development, and standard setting and coordination. Individuals are consumers of financial services, employees of financial institutions, and participants in civil society. They bring unique skills on how to relate the financial system to human needs and aspirations. Most of the above participants need to join alliances to play their respective roles at the national, regional, and international levels. Table 3 lists the major stakeholders involved in Tianjin and Shenzhen, including primary direct, primary indirect, and secondary stakeholders.
Primary Direct Stakeholders
From a primary stakeholder's perspective, the Tianjin project involves three major players, namely local governments, the administrative committee, and Urban Investment and Financing Platforms (UIFPs) and their subsidiaries. These players directly participate in the construction of SSTEC, greatly contributing to SSTEC's development. Local governments in SSTEC are responsible for promoting the development of the local economy and preserving the environment. The administrative committee, as the representative of the local governments, has the same interests as local governments but predominantly focuses on implementation. Tianjin Eco-city Investment and Development Co., Ltd. (TEID) plays the role of master developer, being responsible for (1) land acquisition, consolidation, and reserve in the eco-city, and (2) investment, construction, operation, and maintenance of infrastructure and other public facilities in the eco-city.
The Shenzhen project does not have an administrative committee, in contradistinction to the Tianjin project. The primary stakeholders of the Shenzhen project include local governments and UIFPs. Local governments include both the municipal and district governments, namely Shenzhen Municipality, the low carbon office of Shenzhen Municipality, the low carbon office of Longgang District, and Pingdi Avenue. Each plays its role in the construction. The municipal government is responsible for overall planning of the low-carbon city, which includes making the overall development plan, developing innovative management mechanisms, and drafting standards for the construction and admittance of a newly entering industry. The district level governments stress the implementation function more. Their responsibilities cover overall planning, land acquisition, investment promotion, dealing with ILCC-based enterprises, and defending the interests of residents. The UIFPs in the Shenzhen project include both the municipal and district level UIFPs, i.e., CDG and Longgang District Urban Construction and Investment Co., Ltd. (DUCI). They are accountable for financing and investment, infrastructure development, investment promotion, operation, and management.
It should be noted that the two projects both have UIFPs, yet they serve different functions in each project. TEID distinguishes itself from other UIFPs in two respects. First, its ownership is diversified. Second, local authorities do not share profits from the company and are not responsible for its losses either. This means that TEID cannot be simply viewed as the local government's financial vehicle. TEID is set up as per the needs of SSTEC yet operates on the basis of the principles of marketization and professionalization [40]. Similarly, TEID's subsidiaries have diversified ownership as well, consisting of both Chinese and Singaporean firms. The objective of these subsidiaries is to generate profits through involvement in the construction with their expertise in fields such as waste management and water treatment. However, CDG and DUCI in the Shenzhen project are financial vehicles of local governments, which were founded earlier than the launch of ILCC and aimed at financing for local governments. CDG and DUCI are also responsible for the investment and development of other projects in Shenzhen, acting on behalf of the municipal and district government, respectively. A UIFP usually would found a project company when it fulfills its responsibilities for local governments. However, CDG did not set up a project company to meet the requirements of the construction of ILCC, which resulted in CDG playing a weaker role in ILCC compared with those UIFPs that set up project companies.
Primary Indirect Stakeholders
The policy support for the construction of SSTEC is characterized by 'strong national government support, paired with structured foreign involvement' [39]. Both the Singaporean and Chinese central governments are involved in the eco-city, having a great deal of influence on SSTEC at the national level; however, this impact is indirect. For example, the Chinese and the Singaporean central government together set the eco-city's goal to build a replicable eco-city in SSTEC. The extensive political collaboration contributes to the progress SSTEC has made. The Chinese central government stipulates the overall planning, but does not get involved in the implementation. The Singaporean government offers its experience in environmental protection, but also looks for more opportunities to transfer its capital, technology, and knowledge. The extensive participation of the Singaporean government in SSTEC is a solid guarantee of sustainable funding.
The Shenzhen project is a demonstration program and a collaboration between China and the E.U. on sustainable urbanization. It involves multiple transnational investors, but does not have countries acting like the Singaporean government in SSTEC. Different from SSTEC, ILCC set up a steering committee, which is the representative of the National Development and Reform Committee, consisting of relevant ministries and commissions and Shenzhen Municipality. The steering committee oversees the progress of the low-carbon city; however, its impact is very limited.
Secondary Stakeholders
Banks, private parties, the public in China and other countries, and aboriginal residents are viewed as the secondary stakeholders due to their roles in the sustainable city. These stakeholders cannot influence the policies in the eco-city or the low-carbon city, but they play an instrumental role in funding the construction.
Regarding the Tianjin case, bank loans, bonds, and PPPs are the major financing vehicles employed in SSTEC. Accordingly, the stakeholders cover banks, the public in China and Singapore, and both China-based and Singapore-based private companies. Banks contribute to the construction through loans, and are one of the most traditional and reliable players in offering money for various construction projects. The public from China and Singapore provides their funds through buying bonds in the capital market. In the Tianjin case, in addition to the strong political support, the private parties from both countries are involved in the eco-city, including transnational and domestic investors [4,22]. From the Singaporean side, a Singaporean consortium led by Keppel Corporation is heavily involved in the development of the project, investing CNY 4 billion in the eco-city. Additionally, the Singaporean government also encouraged Singapore-based companies to expand their business in the low-carbon city by providing subsidies for them. From the Chinese side, many companies have become involved in the construction by investing their money through TEID's subsidiaries.
Similarly, banks, the public, and private parties are also involved in the Shenzhen project. Banks merely play a role in providing loans for UCG to support the construction. In the Shenzhen model, local governments are the real debtors, especially when UCG cannot pay back its debts. The public provides funds for the project through buying bonds at the capital market. Additionally, multinational investors, such as ESI (a German company), and Japanese and American companies bring their capital to the project as well as their skills, technology, and other resources.
However, there are some differences in the involved stakeholders in the two projects, especially among the private parties. First, Tianjin issued bonds in the Singaporean capital market, which distinguishes its financing method from other projects, since TEID is the first Tianjin-based non-financial company to issue bonds in the international market. Second, SSTEC mainly cooperates with Singaporean investors, but ILCC cooperates with multiple transnational investors. Singaporeans are extensively involved in the construction of SSTEC, but there are no multinational investors playing roles as significant as Singaporean corporations in ILCC. The practice in ILCC makes the financing sources more diversified than SSTEC, reducing the risks the high dependency on a single country brings about. Third, original residents also play a different role in the construction process. Residents act as service payers in the Tianjin case, while residents in ILCC also act as investors by contributing their land use rights. The practice in Shenzhen changes the benefit distribution mechanism. Residents can share the fruits of ILCC's development as well as act as major investors. This practice takes the disadvantaged group into account and thus reduces the conflicts between residents and local governments.
A Generic Model for Funding the Construction of Sustainable Cities
The similarities in finance between the Shenzhen and Tianjin cases indicate the critical role of traditional financial tools in urban development, while the differences present the innovative practice in each case and show the importance of exploring new instruments to diversify financial sources and balancing different stakeholders' interests for sustainable finance. With the help of the analysis of the financial vehicles used in the two cases and the stakeholder analysis, we come to the following model for funding sustainable cities in China. Figure 1 is a generic model for funding sustainable cities, which brings various financial vehicles and stakeholders together to increase financial resilience. The model illustrates how stakeholders interact with each other and how they contribute to the development of sustainable cities. The key factors of the model are demonstrated below.
The Chinese central government and local governments are the initiators of the development of sustainable cities. Currently, various sustainable cities in China are developed under the supervision of ministries and commissions. Without national support, it is difficult to carry out projects successfully. For example, the failure of the Dongtan project was mainly due to the developer's failure in obtaining a land conversion permit from the central government [4,41]. Therefore, the policy imperatives and various resource inputs from public authorities are key to the development of sustainable cities in the Chinese context.
Transnational governmental collaboration is crucial for projects to gain renown in the international market, which is conducive to attracting transnational investors to the project. China is still a developing country and lags behind in many respects, such as water treatment and the transformation of high-carbon-emission industries to low-carbon-emission ones. The participation of international counterparts brings not only money but also technology and skills.
UIFPs are either state-owned or state-holding enterprises, playing an instrumental role in arranging various resources. They are representatives of local governments to raise money from banks and the capital market. Some UIFPs are listed on stock exchanges, making them responsible to the public rather than merely representing local governments. However, the role of UIFPs is changing, as the National Audit Office of PRC has reported that local governments are at high risk of dealing with implicit debts through UIFPs [42]. Currently, local governments try to operate UIFPs based on the principle of marketization, and propose to implement PPPs to reduce their financial burden [22].
Concession agreements with public authorities give permission to private parties to charge fees from the public. For example, the application of the 'metro plus property' model allows the involved players to be paid through selling the properties along the metro line and collecting fees from metro travelers.
The public is another important financial source for a construction project, while residents are the cash inflow for the development of sustainable cities to guarantee the revenues under the arrangements used with PPP.
This model embodies both traditional and new financial instruments. Traditional financial instruments, such as bank loans and bonds issued in the Chinese capital market, guarantee the stability of the sources. Innovative financing practices expand financing sources and are conducive to raising a higher amount of money [26]. Furthermore, different stakeholders are taken into account to ensure the rest of the sustainable attributes as defined by Sun et al. [21]: efficiency, ease of implementation, and political acceptance. The model also covers governments from other countries, multinational investors, and local investors. They are encouraged to participate in the construction of sustainable cities. The international collaboration between governments eradicates many barriers in the implementation and is beneficial for attracting more private investors [30], which increases political acceptability. The involvement of private parties brings in money. They also bring in new technologies and expertise. The participation of governments and private parties makes it possible to achieve the environmental goals. The Chinese government plays an instrumental role in raising funds for environmental purposes, especially the development of green bonds. It encourages issuing cross-border green bonds, which increases international investors' interests in investing in green projects [43]. From the investors' side, green bonds are attractive since they have the characteristics of high stability, excellent credit, and good liquidity [43]. The consideration of different stakeholder's interests guarantees social sustainability by offering each of them opportunities to share in economic development. Issuing green bonds guides the money towards green projects and facilitates cities to transition to a low-carbon economy and environmental sustainability. The involvement of private parties provides funds and expertise for the development of sustainable cities boosting the economy's development, which contributes to economic sustainability as well. Figure 1. A generic model for funding sustainable cities.
Conclusions
We compared SSTEC and ILCC to gain insight into the similarities and differences in funding sustainable cities and thus to provide references for future sustainable construction projects. The two cases both rely on bank loans, corporate bonds, and PPP to provide funds for their construction, which are traditional vehicles for funding sustainable cities. There is no doubt that other projects can resort to these tools, but these do not suffice to raise money for the development of sustainable cities.
Therefore, light was cast on the innovative practices in SSTEC and ILCC, diversifying the funding sources. The Tianjin project offers experience in issuing bonds in the international market, which is the first bond issued by a Tianjin-based non-financial institute. One the one hand, issuing
Conclusions
We compared SSTEC and ILCC to gain insight into the similarities and differences in funding sustainable cities and thus to provide references for future sustainable construction projects. The two cases both rely on bank loans, corporate bonds, and PPP to provide funds for their construction, which are traditional vehicles for funding sustainable cities. There is no doubt that other projects can resort to these tools, but these do not suffice to raise money for the development of sustainable cities.
Therefore, light was cast on the innovative practices in SSTEC and ILCC, diversifying the funding sources. The Tianjin project offers experience in issuing bonds in the international market, which is the first bond issued by a Tianjin-based non-financial institute. One the one hand, issuing bonds in the international capital market expands financing sources and facilitates the project developer to raise money with lower costs. On the other hand, it is an efficient way to raise large sums of money and enhances the project's influence domestically and globally [20]. Such practice would be especially important to climate finance due to its lower interests and wider influences, and thus should be encouraged by the Chinese authorities. However, we should not forget that the master developer in SSTEC successfully issued bonds in Singapore partly because of assistance from the Chinese central government. Therefore, it is critical to change institutional arrangements to eradicate the barriers for non-financial corporations for issuing bonds internationally. Regarding the Shenzhen project, planning the village area as a whole and arranging finance through 'metro plus property' provide a replicable example for other cities in funding urban renewal and community transformation and dealing with the issue of how residents can share the benefits of urban development with developers. The practice taking the interests of third parties into account guarantees social sustainability because it meets the needs of the disadvantaged group by increasing their access to economic opportunities [44]. These innovative financing practices from the Tianjin and Shenzhen projects can be applied to other similar projects in China and globally. Taken as a whole though, Shenzhen and ILCC have developed a model of sustainable finance with less extensive financial and other support from the central government and foreign governments than SSTEC, and thus may offer more practical lessons for other cities. This should certainly be seen as a significant institutional and organizational step forward in achieving the social, environmental, and economic goals in sustainable urbanization.
The generic model that was constructed on the basis of Shenzhen's and Tianjin's experiences provides valuable lessons for other sustainable construction projects in how to finance their development in structural ways by incorporating social, economic, and environmental factors into their financing practices. The contribution of this research is not to be exhaustive, but to introduce innovative practices to other projects both launched in China and globally. However, it should be noted that the model cannot be applied to other projects directly, since local conditions always vary, particularly projects launched in other countries, requiring policy-makers and practitioners to make adjustments when they take the model as a reference. For instance, the decision-making processes are different between China and many other countries, which influences the efficiency of project development and thus how costs are handled. The difference, coupled with different land property right institutions, determines that planning the village as a whole might not be feasible in those countries to implement a bottom-up approach, since a longer time-frame and more efforts are required in negotiating with other actors than in China. However, the concept of mobilizing private capital through international collaboration and the involvement of governments should be encouraged to implement PPPs by both Chinese and international sustainable initiators. Furthermore, this model is still conceptual. Many other detailed issues should be further explored, such as the choice of discount rates [45,46] and information disclosure [47]. Currently, the environmental externalities, particularly negative externalities, have not been taken into account when gauging the feasibility of a project, which makes the project financially feasible even if it might be environmentally infeasible. As such, researchers, such as Scholtens [46], propose that the environmental influence should be taken into account when calculating the return of investment (ROI). Internalizing negative externalities will increase an operator's costs and thus decrease the project's ROI, which might prevent enterprises from initiating projects with negative environmental externalities. Contrarily, the internalization of positive externalities will lead to an increase in the ROI by adding value to enterprises, which is conducive to having money flowing into the field of sustainable urban development. Additionally, the disclosure of information regarding accountability, governance, and implementation is notoriously poor [47], which is a barrier to evaluating ecological implications and risks for investors [46]. Therefore, more effort should be put into funding future projects sustainably, which requires cooperation among the involved actors and a balancing of their respective interests.
Author Contributions: C.Z. and M.J. designed the study. C.Z. collected and analyzed the data and wrote the core of the manuscript. M.J. contributed to the manuscript by eliciting its narrative and adding, revising and editing text. H.B. contributed to the conceptualization and supervision of the article.
Funding:
The authors are indebted to the Urban Knowledge Network Asia (UKNA) and the Delft Initiative for Mobility and Infrastructures for their financial support. | 8,821.4 | 2018-11-17T00:00:00.000 | [
"Environmental Science",
"Economics"
] |
Influence of Specific Features of Twin Arc Welding on Properties of Weld Joints
The present article covers the influence of standard and narrow gap twin arc welding on properties of weld joints from high-strength steels. While analyzing microsections we established that distribution of micro structure and phase terms, as well the distribution of micro-hardness, were more homogeneous under narrow gap twin arc welding.
Introduction
Regarding the manufacturing process of special-purpose machinery the most expensive and timeconsuming things to produce are thick-walled housings where welding takes over 50% of the total amount of work [1][2]. Why this process is so time-consuming might be explained by thickness of weld materials and the usage of joints with bevel preparation. Use of high tensile medium alloy steels with derated weldability for manufacturing of housings of special-purpose machinery requires the development of special welding technique that will allow us to get solid weld seams. It is important to note, that the straight welds and fillet or I-shaped welds are pretty extended [3]. All that predetermines worthwhileness of usage of automatic welding. Increase of productivity and decrease of defect formation, improvement of welding process might be achieved by gas-shielded multi arc welding [4][5][6][7][8].
Mechanical, physical-chemical and performance characteristics of weld seam metal and heataffected zone (HAZ) are determined by whole range of factors, such as structural phase composition of all layers of weld seam because of that a comprehensive research of their formation under different welding techniques appears to be a relevant applied research task.
Experimental method
To determine special aspects of the influence of multi arc welding on characteristics of weld joints we carried out a series of experiments. The welding was done as gas-shielded single / multi arc welding (Ar+CO2) with welding wire such as: ОК 12.51 -for the first arc, ER307Ti for the second arc. An angle of V-shaped preparation (groove angle) of welded sheets with thickness 20 mm was 60° (area of preparation 231 mm 2 ) or 12° (area of preparation 122 mm 2 ). As the material for samples we used high-tensile-strength steel 30CrMnSiN, which is commonly used for manufacture of housings for special-purpose machinery.
After welding of samples, the templates were cut out for micro structural analysis ( Figure 1). The analysis was carried out using raster electron microscopy of high resolution Quanta-200 (produced by Figure 1. Scheme of templates cutting out from the sample weld seam By spectrometer, produced by "EDAX", recorded characteristic X-ray excited chemical elements and measurement of their weight (and atomic) ratio in the test material, the elements are detected in a range from Borium to Uranium; energy resolution on line KαMn not less than 160 electrone-volts.
The welding mode is given in Table 1. Regulation of thermocycle under twin arc welding was carried out automatically due to variations of the distance between arcs from 50 to 200 mm. During welding the heating temperature of samples was recorded with 6 thermocouple-sensing elements, 3 of them were set in the middle of the sample in heat-affected zone (HAZ) within different distances from weld seam axis (4, 6, 8 mm) and at the end of the seam within equal distances from weld seam axis, the same as in point 1 (4 mm) in template positions 4, 5, 6. Example of thermal cycles of heat supply and cooling after the second pass in analyzing points, the allocation of thermocouple-sensing elements under twin welding of sample 07 (distance from 100 mm within arcs) are given on Figure 2. Indicative macrosections of weld seams are presented on Figure 3.
Results and Discussion
To analyze the influence of twin arc welding on the properties of weld seams we measured microhardness at the cross-section. Indicative allocation of micro-hardness is presented on Figure 4. HAZ in all examined cases is more distinctive within the width surface of a seam in comparison with HAZ, that is spread at a seam room narrower.
A higher value has micro-hardness of the upper part of HAZ than of the lower part of HAZ joins the root of a weld seam.
In a way micro-hardness scans are similar under single and twin arc welding. Less shown as for micro-hardness angle 12°.
Besides measurement of micro microstructure of seams, heat affected zones and main metal parts of cross sections is presented on Based on the analysis of microstructure we can come to the following conclusions: 1) In close to the weld seams areas, HAZ in all cases in prior austenite grain we observed colonies of laminar morphology particles, typical for initial steel structure of a plate and determined by intense overheating of steel in a close distance to the weld seam reverse change of perlite to austenite and its bainitic hardening.
2)
In microstructure of HAZ located between bainitic and far from the weld seam we can see domination of thin plate perlite with a certain amount of perlitic areas with globalized finely dispersed cementite occurred as a result of dissolution decay to thin plate perlite.
3)
As for the depth of HAZ where the root weld is locatedperlite changes: the thickness of bainetic paltes increases as well as the thickness of plates and equiaxed particles of cementite in феррита (on photos wide black layers fundamental state of steel perlite.
Typical microstructure of a sample in HAZ under twin arc welding Based on the analysis of microstructure we can come to the following conclusions: In close to the weld seams areas, HAZ in all cases in prior austenite grain we observed colonies of laminar morphology, but within borders there were no globular crystal typical for initial steel structure of a plate. Such structure might be assigned to bainitic phase and determined by intense overheating of steel in a close distance to the weld seam enite and its bainitic hardening. In microstructure of HAZ located between bainitic and main globular perlitic (matrix) far from the weld seam we can see domination of thin plate perlite with a certain amount of perlitic y dispersed cementite. This microstructure set alongside with bainite occurred as a result of dissolution of the initial under heating process and then under cooling in the As for the depth of HAZ (from the surface of a cross-section -the first line to the bottom the third line) thin structure of bainite and neighboring intermediate the thickness of bainetic paltes increases as well as the thickness of plates and es of cementite in perlite colonies. On the borders of grains we can see areas of on photos wide black layers). In total such morphology is typical more for main fundamental state of steel perlite.
Typical microstructure of a sample in HAZ under twin arc welding (distance Based on the analysis of microstructure we can come to the following conclusions: In close to the weld seams areas, HAZ in all cases in prior austenite grain we observed but within borders there were no globular crystalline α-phase Such structure might be assigned to bainitic phase and determined by intense overheating of steel in a close distance to the weld seam, that assured a main globular perlitic (matrix) that is far from the weld seam we can see domination of thin plate perlite with a certain amount of perlitic This microstructure set alongside with bainite of the initial under heating process and then under cooling in the the first line to the bottom thin structure of bainite and neighboring intermediate the thickness of bainetic paltes increases as well as the thickness of plates and perlite colonies. On the borders of grains we can see areas of α-In total such morphology is typical more for main
4)
Under twin arc welding with bevel angle of 60°, microstructure of bainite and intermediate perlite in HAZ is more organized.
5)
In case of single / twin arc welding with bevel angle of 12° in HAZ we can also see beinite layer and only then finely dispersed perlite. With that a perlite area hardly differs from neighboring matrix of steel out of HAZ, but bainite morphology in forms of packs was more structured and contained particles of finely dispersed cementite, in general it represents less distortion and lower initial stresses of microstructure. This correlates with data obtained from measurements of microhardness. 6) In general HAZ with narrow gap stood out by larger microstructure and phase homogeneity, that makes her closer to a main metal of steel plate. This results from the measurements of microhardness.
Conclusion
As can be seen from the above, according to the results of carried examinations we can come to a conclusion that usage of narrow gaps and twin arc welding influence in a positive way microstructure and characteristics of weld seams of high-tensile-strength steel. | 2,018.8 | 2016-04-01T00:00:00.000 | [
"Materials Science"
] |
Systematic Investigation of the Physical and Electrochemical Characteristics of the Vanadium (III) Acidic Electrolyte With Different Concentrations and Related Diffusion Kinetics
Owing to the lack of systematic kinetic theory about the redox reaction of V(III)/V(II), the poor electrochemical performance of the negative process in vanadium flow batteries limits the overall battery performance to a great extent. As the key factors that influence electrode/electrolyte interfacial reactivity, the physicochemical properties of the V(III) acidic electrolyte play an important role in the redox reaction of V(III)/V(II), hence a systematic investigation of the physical and electrochemical characteristics of V(III) acidic electrolytes with different concentrations and related diffusion kinetics was conducted in this work. It was found that the surface tension and viscosity of the electrolyte increase with increasing V(III) concentration, while the corresponding conductivity shows an opposite trend. Both the surface tension and viscosity change slightly with increasing concentration of H2SO4, but the conductivity increases significantly, indicating that a lower V(III) concentration and a higher H2SO4 concentration are conducive to the ion transfer process. The electrochemical measurements further show that a higher V(III) concentration will facilitate the redox reaction of V(III)/V(II), while the increase in H2SO4 concentration only improves the ion transmission and has little effect on the electron transfer process. Furthermore, the diffusion kinetics of V(III) have been further studied with cyclic voltammetry and chronopotentiometry. The results show that an elevated temperature facilitates the V(III)/V(II) redox reaction and gives rise to an increased electrode reaction rate constant (ks) and diffusion coefficient [DV(III)]. On this basis, the diffusion activation energy (13.7 kJ·mol−1) and the diffusion equation of V(III) are provided to integrate kinetic theory in the redox reaction of V(III)/V(II).
Owing to the lack of systematic kinetic theory about the redox reaction of V(III)/V(II), the poor electrochemical performance of the negative process in vanadium flow batteries limits the overall battery performance to a great extent. As the key factors that influence electrode/electrolyte interfacial reactivity, the physicochemical properties of the V(III) acidic electrolyte play an important role in the redox reaction of V(III)/V(II), hence a systematic investigation of the physical and electrochemical characteristics of V(III) acidic electrolytes with different concentrations and related diffusion kinetics was conducted in this work. It was found that the surface tension and viscosity of the electrolyte increase with increasing V(III) concentration, while the corresponding conductivity shows an opposite trend. Both the surface tension and viscosity change slightly with increasing concentration of H 2 SO 4 , but the conductivity increases significantly, indicating that a lower V(III) concentration and a higher H 2 SO 4 concentration are conducive to the ion transfer process. The electrochemical measurements further show that a higher V(III) concentration will facilitate the redox reaction of V(III)/V(II), while the increase in H 2 SO 4 concentration only improves the ion transmission and has little effect on the electron transfer process. Furthermore, the diffusion kinetics of V(III) have been further studied with cyclic voltammetry and chronopotentiometry. The results show that an elevated temperature facilitates the V(III)/V(II) redox reaction and gives rise to an increased electrode reaction rate constant (k s ) and diffusion coefficient [D V(III) ]. On this basis, the diffusion activation energy (13.7 kJ·mol −1 ) and the diffusion equation of V(III) are provided to integrate kinetic theory in the redox reaction of V(III)/V(II).
INTRODUCTION
Vanadium flow batteries (VFBs) have been widely developed as a green energy storage technology because of their high energy efficiency, flexible design, long life cycle, high safety, and low cost (Rychcik and Skyllas-Kazacos, 1988;Sun and Skyllas-Kazacos, 1992;Joerissen et al., 2004;Sukkar and Skyllas-Kazacos, 2004;Zhao et al., 2006;Rahman and Skyllas-Kazacos, 2009;Ding et al., 2013;Chakrabarti et al., 2014;Zheng et al., 2016). In general, VFBs are mainly composed of the electrolyte, electrode, ion exchange membrane, and a bipolar plate. They store energy through the chemical changes in electroactive species, which are separated by the ion exchange membrane (Wang et al., 2014;Xia et al., 2019;Ye et al., 2019Ye et al., , 2020aYu et al., 2019;Lou et al., 2020). The V(V)/V(IV) and V(III)/V(II) redox couples are used as the catholyte and the anolyte, respectively, and the sulfuric acid solution acts as the supporting electrolyte. The concentrations of vanadium and H + ions play an important role in the determination of the electrochemical reaction processes and the battery performance.
The most commonly used electrolyte in VFBs is an equivalent volume mixture of V(III) and V(IV) sulfuric acid solution.
Previous studies have noted that the concentration of V(IV) and acid as well as the operating temperature have important effects on the physicochemical properties and electrochemical activity of the positive electrode reaction (Sum et al., 1985;Kazacos et al., 1990;Zhong and Skyllas-Kazacos, 1992;Iwasa et al., 2003;Yi et al., 2003;Liu et al., 2011). However, few studies have reported on the negative process. Sun and Skyllas-Kazacos (Sum and Skyllas-Kazacos, 1982) investigated the electrochemical behavior of the V(III)/V(II) redox couple at glassy carbon electrodes using cyclic voltammetry (CV). They found that the oxidation/reduction reaction is electrochemically irreversible and the surface preparation is very critical in determining the electrochemical behavior. Yamamura et al. (2005) determined the standard rate constants of the electrode reactions of vanadium on different carbon electrodes. Most studies (Sum and Skyllas-Kazacos, 1982;Oriji et al., 2005;Lee et al., 2012;Aaron et al., 2013;Sun et al., 2016) found that the electrode reaction rate of V(III)/V(II) is much less than that of V(IV)/V(V); however, systematic investigations into the detailed mechanism remain scarce.
Owing to the sluggish kinetics of V(III)/V(II) and the significant hydrogen evolution reaction, the negative process contributes almost 80% polarization during the discharging process (Sun et al., 2016). Agar et al. (2013) further verified that the negative electrode process was the limiting factor in VFB performance by using an asymmetric cell configuration. As the key factors influencing electrode/electrolyte interfacial reactivity, the concentration and physicochemical properties of the V(III) acidic electrolyte as well as the temperature play an important role in the redox reaction of V(III)/V(II) (Xiao et al., 2016). For instance, the viscosity of the electrolyte affects the mass transfer kinetics and the conductivity directly influences the reversibility of the electrochemical reaction, which both depend on the concentration of vanadium and H 2 SO 4 (Zhang, 2014). In short, it is necessary to conduct a systematic investigation into the physical and electrochemical characteristics of V(III) acidic electrolytes using different concentrations and diffusion kinetics.
In our previous work (Wang et al., 2014), the temperaturerelated reaction kinetics of the V(IV)/V(V) redox couple on a graphite electrode in sulfuric acid solutions was investigated. Herein, we will investigate the physicochemical properties of the electrolytes with different concentrations of V(III) and sulfuric acid and conduct a systematic study of the diffusion kinetics of V(III). Our aim is to clarify the kinetic rules of the diffusion behavior of V(III) and further establish a diffusion equation, providing a better understanding of the V(III)/V(II) redox reaction in the negative half-cell of VFBs.
Preparations of the Electrode and Electrolyte
A spectroscopically pure graphite rod (SPGR) (Sinosteel Shanghai Advanced Graphite Material Co. Ltd, China) was used as the working electrode. The working area of the SPGR was ∼0.28 cm 2 . This was ground with silicon carbide papers (down to 2,000 grit in grain size) and thoroughly rinsed with deionized water and alcohol before use.
All chemicals used in this work were analytically pure agents and all solutions were prepared with deionized water. V(III) acidic solutions were initially prepared by the electrochemical reduction of VOSO 4 with an electrolytic cell and then diluted to produce solutions with the required H + and V(III) concentrations. In addition, the concentration of the electrolytes was measured with a ultraviolet spectrometer (TU-1900; Persee General Instrument Co. Ltd, Beijing, China).
Physical Characterization of the Electrolyte
The viscosity was measured by means of an Ubbelohde viscometer. The electrical conductivity was determined using a conductivity meter (Mettler Toledo) at 293 K. The surface tensions of the solutions were measured by the bubblepressure method.
Electrochemical Measurements
The electrochemical measurements were performed using a Reference 600 electrochemical workstation (Gamry Instruments, USA) with a conventional three-electrode cell with an SPGR as the working electrode, a platinum plate as the counter electrode, and a saturated calomel electrode as the reference electrode. A salt bridge was used to eliminate the liquid junction potential between the Luggin capillary and the working electrode. The electrolyte was purged with nitrogen for 10 min before the electrochemical test to reduce the influence of oxygen on the electrochemical oxidation of V(II). Temperature was controlled by a water bath.
Physical Characteristics of the V(III) Acidic Electrolyte at Different Concentrations
The physical parameters of the electrolyte, such as the surface tension, viscosity, and conductivity, significantly affect the ion transmission process and the electrochemical properties of the electrode/electrolyte interface (Jing et al., 2016). In general, higher surface tension will hinder the contact between the electrolyte and the electrode, leading to a decrease in the effective reaction area, while higher conductivity often means faster transmission of ions and higher viscosity usually leads to a lower diffusion rate. However, the three physical parameters are not all proportional to the concentration of the electrolyte, so it is not reasonable to simply increase or decrease the electrolyte concentration in engineering applications. In particular, vanadium ions often exist in a very complex form in the electrolyte, which may result in a significant difference in the physicochemical properties of the electrolyte at different concentrations (Sepehr and Paddison, 2016). Hence, it is necessary to investigate the influence of the electrolyte concentration on its physicochemical properties.
The influence of the concentration of V(III) and H 2 SO 4 on the surface tension of the electrolyte was investigated first. Figure 1A shows the surface tension of 2.0 mol·L −1 H 2 SO 4 solutions with different concentrations of V(III) (from 0.1 to 1.3 mol·L −1 ). Obviously, the surface tension of the electrolytes gradually increases with an increasing concentration of V(III). Actually, the higher the concentration of vanadium, the higher the surface tension and the greater the effect on the contact between the electrolyte and the electrode. However, in practice, we want to increase the concentration of vanadium to achieve high volumetric capacities or energy densities. We can resolve this contradiction by improving the hydrophilicity of the electrode surface to apply a higher concentration of vanadium. In contrast to Figure 1A, the variation in the trend in the surface tension was very slight when changing the H 2 SO 4 concentration ( Figure 1B). Such a different phenomenon might be attributed to the stronger hydration force of the V(III) compared with that of H 2 SO 4 . Therefore, the surface tension of the V(III) acidic electrolyte was mainly affected by the concentration of V(III).
The influence of the concentration of V(III) and H 2 SO 4 on the viscosity and conductivity of the electrolyte was also investigated. As shown in Figure 2, there was a fourfold increase in the viscosity as the concentration of V(III) changed from 0.1 to 1.3 mol·L −1 . However, the viscosity of the electrolytes changed very little with different concentrations of H 2 SO 4 , which was similar to the changing features of the surface tension described above. This could be ascribed to the more complex structure of V(III). It can be concluded that the concentration of V(III) was the main factor in determining the viscosity of the V(III) acid electrolytes, and a suitable concentration of V(III) had a positive effect on its mass transfer performance. Figure 3 shows the variations in the conductivity of the electrolytes with different concentrations of V(III) (a) and H 2 SO 4 (b). When the concentration of V(III) increased from 0.1 to 1.3 mol·L −1 , the conductivity decreased by ∼43.3% (from 480 to 272 mS·cm −1 ), while there was an obvious increase in conductivity (about 6-fold) with increasing H 2 SO 4 concentration. The significant difference should also be ascribed to the more complex form of V(III), which would result in a larger hydrated ionic radius and poorer mobility (Sepehr and Paddison, 2016). Therefore, it is necessary to investigate the optimum concentration of V(III) and H 2 SO 4 to obtain better electrochemical performance.
Electrochemical Characteristics of the V(III) Acidic Electrolyte at Different Concentrations
The CV test was a useful tool to investigate the electrochemical performance of the battery materials. For a CV curve, the value of the peak currents of the oxidation and reduction reactions (i pa , i pc , respectively) and their ratio (-i pa /i pc ), as well as the peak potential separation ( E p ), could be used to estimate the electrochemical activity. Generally, the lower the value of E p or the more similar the i pa and -i pc values usually implied better electrochemical reversibility, and a higher peak current often suggested higher reactivity (Bard and Faulkner, 2001;Ding et al., 2013). However, the peak current (i pc ) is closely related to the electrochemical surface area of the electrode, which is difficult to read directly from the CV curve, so E p and -i pa /i pc were more suitable for estimating the electrochemical properties.
Specifically, -i pa /i pc can be calculated from the CV curves by the following equation (Bard and Faulkner, 2001): where (i pc ) 0 is the uncorrected cathodic peak current density with respect to the zero current baseline and (i sp ) 0 is the current density at the switching potential.
Herein, CV tests on the SPGR in 2.0 mol·L −1 H 2 SO 4 with different concentrations of V(III) electrolytes (from 0.1 to 1.0 mol·L −1 ) were first carried out at a scan rate of 10 mV·s −1 (Figure 4A). The detailed electrochemical parameters are listed in Table 1. As expected, the value of -i pa /i pc gradually increased to 0.97 with increasing concentration of V(III), indicating a favorable electrochemical reversibility of V(III)/V(II) with a 1.0 mol·L −1 V(III) acidic electrolyte. In addition, E p gradually decreased with increasing concentration of V(III), also indicating increasing electrochemical activity of the electrolyte with higher concentrations of V(III). It can be concluded that a higher concentration of V(III) would facilitate the V(III)/V(II) electrochemical redox reaction.
Next, CV tests on the SPGR in 0.1 mol·L −1 V(III) with different concentrations of H 2 SO 4 electrolytes (from 0.5 to 3.0 mol·L −1 ) were carried out at a scan rate of 10 mV·s −1 . The corresponding CV curves are shown in Figure 4B; the detailed electrochemical parameters recorded from the CV curves in Figure 4B are also listed in Table 1. Compared with the CV curves in Figure 4A, the CV curves in the electrolytes at different H 2 SO 4 concentrations showed a smaller difference. Even so, the value of E p obviously decreased with increasing concentration of H 2 SO 4 , which should be ascribed to the rapid transfer rate of H + , resulting in favorable conductivity of the electrolytes with higher concentrations of H 2 SO 4 . In addition, the value of -i pa /i pc increased first and then decreased, and the maximum value was obtained with 2.0 mol·L −1 H 2 SO 4 electrolyte, which might be attributed to the coupling effect of both the increased conductivity and viscosity as well as the gradually increasing influence of the hydrogen evolution reaction with increasing H 2 SO 4 concentration. However, the -i pa /i pc values changed little with H 2 SO 4 concentration, indicating that the H 2 SO 4 concentration had a smaller effect on the electron transfer process. Electrochemical impedance spectroscopy (EIS) is a powerful non-destructive technique for studying the electrochemical processes at the electrode/electrolyte interface. Electrochemical parameters such as the solution resistance (R s ), constant resistance (R c ), and electron transfer resistance (R ct ) can be obtained simultaneously through the appropriate equivalent circuit (Cao and Zhang, 2002). Figure 5 shows the Nyquist plots of an SPGR recorded in different electrolytes at a polarization potential of −0.6 V, with an excitation signal of 5 mV and frequency ranging from 0.1 mHz to 10 mHz. As shown in Figure 5, the Nyquist plots for all samples consisted of a semicircle at high frequency and a linear part at low frequency, suggesting that the electrode reaction was dual controlled by the electrochemical reaction and diffusion TABLE 2 | EIS parameters obtained by fitting the impedance plots with the equivalent electric circuits in Figure 5. Frontiers in Chemistry | www.frontiersin.org processes (Wei et al., 2014). Thus, the Nyquist plots in Figure 5 also show the equivalent circuits, where R s is the bulk solution resistance; CPE is the constant phase element, which accounts for the double-layer capacitance; R ct signifies the faradaic interfacial charge-transfer resistance; and Z w is the diffusion capacitance attributed to the diffusion process of vanadium ions (Wang and Wang, 2007;Wei et al., 2014). According to the fitting results in Table 2, R s increased gradually with increasing V(III) concentration, which was caused by the decreased conductivity of the electrolytes. R ct decreased dramatically with increasing concentration of V(III), indicating the better electrochemical reactivity of a higher concentration of V(III), which was consistent with the CV results. Comparing the increased R s with the decreased R ct , the latter was much more remarkable, thus the electrochemical polarization was more prominent than the ohmic polarization on the SPGR in the V(III) acid electrolytes. For the electrolytes with different H 2 SO 4 concentrations, R s rapidly decreased with increasing H 2 SO 4 FIGURE 6 | CV curves on an SPGR recorded at different scan rates in 1.0 mol·L −1 V(III) with 2.0 mol·L −1 H 2 SO 4 (A). Peak current density as a function of the square root of the scan rate (B). Frontiers in Chemistry | www.frontiersin.org concentration, owing to the greater conductivity of the electrolyte with a higher concentration of H + . However, R ct was almost unchanged with increasing H 2 SO 4 concentration, suggesting that the electron transfer process of the V(III)/V(II) redox reaction had little relationship with H + . In short, the concentration of H 2 SO 4 mainly affected the ohmic resistance of the electrolyte, while the V(III) concentration mostly influenced its electron transfer resistance, which was consistent with the CV results.
Based on the above, the electrolyte containing 1.0 mol·L −1 V(III) and 2.0 mol·L −1 H 2 SO 4 exhibited favorable electrochemical properties, so CV behaviors at different scan rates in that electrolyte were further investigated. As shown in Figure 6A, the oxidation and reduction peaks showed comparative symmetry at all scan rates, indicating a favorable electrochemical reversibility. In addition, the peak current proved to be proportional to the square root of the scan rate (Figure 6B), which suggested that the oxidation and reduction reaction of the V(III)/V(II) redox couples on an SPGR were controlled by the diffusion process (Wei et al., 2014).
Diffusion Kinetics Study of the V(III) Acid Electrolytes
CV is one of the most commonly used electrochemical techniques to study the electrode reaction kinetics. For an irreversible electrode process, the peak current density is given by Bard and Faulkner (2001): Figure 7. where V is the potential sweep rate (V·s −1 ) and D is the diffusion coefficient of the active reactant (cm 2 ·s −1 ). Based on Equation (2), we can obtain the value of D V(III) at different temperatures from the slope of the plot of i p vs. V 1/2 . Moreover, the values of the reaction rate constant k s can be calculated by Equation (3) (Bard and Faulkner, 2001): where i p is the peak current density (A·cm −2 ); E p is the peak potential (V); C b is the bulk concentration of the electroactive species (mol·L −1 ); k s is the standard heterogeneous rate constant (cm·s −1 ); α is the charge transfer coefficient; E O ′ is the formal potential of the electrode; n is the number of electrons involved in the rate-limiting step; and other symbols such as F, R, and T have their usual meanings. The formal potential E O ′ at different temperatures can be calculated by Equation (4) (Bard and Faulkner, 2001): where j is the total number of potential scans applied in the CV tests; E pa is the anodic peak potential; and E pc is the cathodic peak potential. Figure 7C.
Frontiers in Chemistry | www.frontiersin.org Herein, CV tests in an electrolyte consisting of 0.5 mol·L −1 V(III) with 2.0 mol·L −1 H 2 SO 4 at different temperatures were conducted to study their electrode reaction kinetics. Figure 7 shows the typical CV curves at scan rates ranging from 10 to 200 mV·s −1 at 283. 15, 293.15, 303.15, and 313.15 K, respectively. As shown in Figure 7, E p was significantly >60 mV, indicating the electrochemical irreversibility of the V(III)/V(II) redox reaction (Kazacos et al., 1990;Aaron et al., 2013).
The mean values of -i pc /i pa and the formal potential E O ′ at different scan rates obtained from the CV curves in Figure 7 are listed in Table 3. The values of -i pc /i pa changed slightly with the temperature, suggesting an insignificant effect of temperature on the reversibility of the V(III)/V(II) redox reaction.
The values of the anodic charge transfer coeffcient (α) and electron transfer number (n) have been estimated to be 0.56 and 1, respectively, according to our earlier work (Jing, 2017). Based on Equation (2-3) in Jing (2017), we can deduce a linear relationship between i pc vs. V 1/2 and ln i pc vs. (E p -E O ′ ). The corresponding results measured at 303.15 K are shown in Figures 8A,B. Next, the values of D V(III) and k s at different temperatures can be calculated according to the slope of the linear curves in Figures 8A,B, respectively. For comparison, the viscosities (η) measured by Ubbelohde viscometry under different temperatures are listed in Table 4. Table 4 show that the values of the reaction rate constants (k s ) were of the order of 10 −5 cm·s −1 and became larger with increasing temperature, which suggested that a higher temperature might facilitate the V(III)/V(II) redox reaction. Furthermore, D V(III) increased from 5.034 × 10 −7 cm 2 ·s −1 at 283.15 K to 11.9 × 10 −7 cm 2 ·s −1 at 313.15 K, suggesting that an increased temperature was beneficial to the mass transfer of V(III), which was also reflected in the change of viscosity. Indeed, the diffusion coefficient of the active ion has an important effect on the battery performance. The larger coefficient suggests a faster ion migration rate, which is conducive to the mass transfer kinetics of the electrode reaction reducing the concentration polarization of the battery under a higher current density and leading to a better rate capability and electrolyte utilization rate.
The results in
However, as mentioned above, CV is not an ideal quantitative method to determine the kinetic parameters of the peak current. Herein, chronopotentiometry was carried out as it is a promising approach to obtain the diffusion coefficient by Sand's equation (Kazacos et al., 1990;Sepehr and Paddison, 2016). The corresponding potential-time curves under various temperatures are shown in Figure 9.
For an irreversible or reversible reaction, Sand's equation is given by Kazacos et al. (1990): where τ is the total time taken to achieve an abrupt change in the potential of the electrode and i is the current density. The values of τ are determined as the transition time when the absolute values of the slope of the plots increase abruptly. The plots of i vs. τ 1/2 obtained from Figure 9 are shown in Figure 10, and the values of D V(III) calculated from the slopes of these plots are listed in Table 5. By comparing the D V(III) values in Tables 4, 5, it can be seen that the values of D V(III) obtained from CV and chronopotentiometry were of the same order (10 −7 cm 2 ·s −1 ), but there was a smaller variability when changing the temperature, which might result in a smaller deviation of the calculated D V(III) values.
Based on the chronopotentiometry results, the diffusion activation energy, E D , can be obtained from the slope of the plot of ln D(T) vs. 1/T by the Arrhenius equation (Zha, 2002): where D 0 is a temperature-independent factor (cm 2 ·s −1 ) and E D is the diffusion activation energy. The shift of ln D(T) with 1/T is shown in Figure 11, from where the values of E D and D 0 can be estimated as 13.7 kJ·mol −1 and 1.3 × 10 −4 cm 2 ·s −1 , respectively. As a result, the diffusion coefficient of V(III) can be expressed as follows: D V(III) = 1.3 × 10 −4 exp 13700/RT which could be used to estimate the diffusion behavior of V(III). In summary, an increase in temperature could facilitate the V(III)/V(II) redox reaction and improve the mobility of V(III) ions in the negative electrolyte, which would result in improved electrochemcial performance. However, the more intense hydrogen evolution at higher temperatures should also be considered.
CONCLUSION
In this work, the physical and electrochemical characteristics of the V(III) acidic electrolytes at different concentrations and with different diffusion kinetics have been systemically investigated. The results show that the surface tension and viscosity of the V(III) acidic electrolyte were mainly affected by the V(III) concentration and that they were in direct proportion to each other, which suggested the negative effects of a high concentration of V(III) on the mass transfer kinetics. As the supporting electrolyte, the H 2 SO 4 concentration had a significant effect on the conductivity of the electrolyte; however, the higher H 2 SO 4 concentration might result in significant hydrogen evolution and increased mass transfer resistance.
The electrochemical measurements showed that a higher V(III) concentration would facilitate the redox reactions of V(III)/V(II), while the increase in H 2 SO 4 concentration could improve the ion transmission and had little effect on the electron transfer process. In addition, the diffusion kinetics of V(III) were further studied by CV and the chronopotentiometry method. The results demonstrated that an elevated temperature would facilitate the V(III)/V(II) redox reaction, and so the reaction rate constant (k s ) and diffusion coefficient [D V(III) ] were obtained at different temperatures. On this basis, the diffusion activation energy (13.7 kJ·mol −1 ) and the diffusion equation for V(III) are provided to integrate kinetic theory in the redox reaction of V(III)/V(II).
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. | 6,216 | 2020-07-14T00:00:00.000 | [
"Materials Science",
"Chemistry"
] |
Incentives for Delay-Constrained Data Query and Feedback in Mobile Opportunistic Crowdsensing
In this paper, we propose effective data collection schemes that stimulate cooperation between selfish users in mobile opportunistic crowdsensing. A query issuer generates a query and requests replies within a given delay budget. When a data provider receives the query for the first time from an intermediate user, the former replies to it and authorizes the latter as the owner of the reply. Different data providers can reply to the same query. When a user that owns a reply meets the query issuer that generates the query, it requests the query issuer to pay credits. The query issuer pays credits and provides feedback to the data provider, which gives the reply. When a user that carries a feedback meets the data provider, the data provider pays credits to the user in order to adjust its claimed expertise. Queries, replies and feedbacks can be traded between mobile users. We propose an effective mechanism to define rewards for queries, replies and feedbacks. We formulate the bargain process as a two-person cooperative game, whose solution is found by using the Nash theorem. To improve the credit circulation, we design an online auction process, in which the wealthy user can buy replies and feedbacks from the starving one using credits. We have carried out extensive simulations based on real-world traces to evaluate the proposed schemes.
Introduction
With the improvement in hardware manufacturing, CPU architectures, radio communication techniques and software design, the price of smartphones can be accepted by the majority of people, resulting in large availability. Due to powerful computation, communication capabilities and various functional built-in sensors, smartphones enable accurate trace world-related information and activities of citizens by taking advantage of people willing to collaborate toward continuous data collection, called crowdsensing [1]. The rapid development and proliferation of sensor-equipped smartphones are making mobile crowdsensing an effective way to enable more and more applications, ranging from L3 [2] for traffic monitoring, iCal [3] for noise monitoring, vCity Map [4,5] for smart cities [6], for air pollution monitoring, LiFS [7] for indoor localization and [8] for urban WiFi characterization.
We consider mobile opportunistic crowdsensing formed by mobile users who share similar interests and connect with one another by exploiting Bluetooth and/or WiFi connections of their mobile phones or portable tablets. Mobile opportunistic crowdsensing is often created for a local community where the participants have frequent interactions, e.g., people living in an urban neighborhood,
Incentives and Selfishness
Mobile users in mobile opportunistic crowdsensing can be either cooperative or selfish. Each cooperative user carries data packets for others voluntarily. However, if a user is selfish, it is often reluctant to consume its energy, storage and bandwidth resources for other users, resulting in poor performance. The more users help others deliver data packets, the better the network performance. Therefore, it is imperative to propose an incentive scheme to stimulate user cooperation.
In this work, we consider users selfish and rational. The data query is a "push-and-pull" model, where a query issuer intends to query data. At the same time, each data provider has expertise for each category to reply to queries. In practice, however, it is nontrivial to properly define the expertise, because a mobile user hardly knows precisely his or her probability to answer queries in each category. He or she may initially claim his or her expertise based on the mobile user's social roles. However, such initially claimed expertise is often inaccurate. Therefore, data providers intend to require feedbacks from query issuers. Both query issuers and data providers pay for the delivery service. Other users participate in the query, reply and feedback delivery only if they are beneficiaries. This is in a contrast to other incentive models in the literature, where either sources intend to "push" data as payers or receivers intend to "pull" data and, thus, are deemed as payers [20][21][22]. In this paper, we assume that neither do all users consume their resources to help, nor to maliciously attack others. We also assume strong authentication that provides auditability for the verification of the identities of users and prevents forging identification to obtain a free forwarding service or more rewards from others.
An example is illustrated in Figure 1, where User A wonders "who can get the ticket for the Superbowl final" and would like to query the data in his or her community. Therefore, User A generates a query for sports. Obviously, User A needs to get the ticket before the final; therefore, the delay budget for the query is from the date User A generates the query to the final date. When User A meets User B, they trade the query by a trading process. The reason that User B is eager to obtain the query is that User B can meet User C frequently, who has expertise for sports news. Therefore, when User B meets User C, User B sends the query to User C and gets the reply from it. After the retrieval of the reply, User B has two ways to get paid from User A. On the one hand, User B can cash in the reply when User B meets User A directly. However, it will be infeasible if User B has a small contact probability with User A. On the other hand, User B may trade the reply with another user (e.g., User D). Such a trading process should benefit both users. User D gets paid from User A when they meet; at the same time, User A sends a feedback to User D. When User D meets User E, the feedback will be exchanged via the same trading process, if the process benefits both users. Finally, User C cashes in the feedback to User E when User C meets User E. Note that queries, replies and feedbacks can be traded when two users meet. However, it is difficult to determine which data packet should be traded, since selfish users aim to maximize their own benefits only. Therefore, when a user wants to require a data packet from another user, he or she needs to trade one data packet of his or her own. The two-user trading process can be formulated as a two-person cooperative game; given the selfishness of the two users, the binding agreement must do good for both users.
Contribution of this Work
We propose incentive schemes for data query in mobile opportunistic crowdsensing. A query issuer generates a query and requests replies within a given delay budget. When a data provider receives the query for the first time from an intermediate user, the former replies to it and authorizes the latter as the owner of the reply. Multiple copies of a query can be created and replied to by different data providers. When a user that owns a reply meets the query issuer that generates the query, it requests the query issuer to pay credits. Note that a query issuer only pays replies issued by himself or herself. The query issuer pays credits and provides a feedback of the reply to the data provider. Each user has an expertise for each category. In practice, however, it is nontrivial to properly define the expertise, because a mobile user hardly knows precisely his or her probability to answer queries in each category. He or she may initially claim his or her expertise based on the mobile user's social roles (e.g., professions), interests and available resources. However, such initially claimed expertise is often inaccurate. Therefore, after initialization, the expertise should be updated according to the feedbacks from the query issuers. When a user that carries the feedback meets the data provider that relates to the feedback, the data provider pays credits to the user in order to adjust his or her claimed expertise. Queries, replies and feedback can be traded between mobile users. Multiple replies of a query can be obtained by different data providers. Only the first copy of the reply can be payed credits. On the other hand, the first copy of the feedback can be cashed back. Therefore, the key point is how to effectively track the potential value of queries, replies or feedbacks and how to have them get payed as quickly as possible in such an intermittent connectivity setting. We propose an effective mechanism to define rewards for queries, replies and feedbacks. We formulate the bargain process as a two-person cooperative game, whose solution is found by using the Nash theorem. We carry out extensive simulations to evaluate the proposed schemes under real-world mobility traces.
The rest of the paper is organized as follows: Section 2 discusses related work. Section 3 introduces our proposed incentive scheme. Section 4 develops an online auction algorithm based on the optimal stopping strategy. Section 5 presents simulations under real-world mobility traces. Finally, Section 6 concludes the paper.
Related Work
Dealing with selfish users has been extensively studied in the context of mobile Ad Hoc networks [23][24][25][26]. The work in [23] proposes a reputation-based approach, where the reputation of each node reflects its degree of cooperation. Nodes update their reputation by forwarding packets for other nodes and select a routing path based on nodal reputation. The work in [24][25][26] develop credit-based approaches, where a node earns credits by delivering packets for others and uses such credits to obtain the data delivery service from other nodes in the network. However, these incentive approaches are not directly applicable in mobile opportunistic crowdsensing. The intermittent connectivity in mobile opportunistic crowdsensing makes it impractical for a node to build up the reputation of its neighbors as required in the reputation-based approaches or to estimate the number of intermediate nodes that would be involved in packet forwarding as required in the credit-based schemes.
Several incentive works are developed for mobile crowdsensing. For example, the work in [27] takes advantage of the pervasive smartphones to collect data. They consider two system models: the platform-centric model where the platform provides a reward shared by participating users and the user-centric model where users have more control over the payment they will receive. For the platform-centric model, they design an incentive mechanism using a Stackelberg game, where the platform is the leader, while the users are the followers.
The work in [28] proposes an online incentive mechanism design for crowdsensing applications with smartphones, where the platform does not have to synchronize large amounts of users simultaneously while distributing tasks. The work in [29] investigates two-sided online interactions among service users and service providers in mobile crowdsourcing. They model such interactions as online double auctions, explicitly taking the dynamic nature of both users and providers into account, and propose a general framework for the design of truthful online double auctions for dynamic mobile crowdsourcing. Clearly, the above scenarios are different from this work, where data query in mobile opportunistic crowdsensing results in a distinctive communication paradigm characterized by intermittent link connectivity, autonomous computing storage and unknown or inaccurate expertise, making data query in mobile opportunistic crowdsensing a very unique, interesting and challenging problem. Only a handful of works have considered data query in opportunistic network settings. The work in [30] employs the epidemic approach for query dissemination in the network, and the replies are routed back based on the traces while traveling between query issuers and data providers. The work in [31] proposes to query geo-location-based information, where each node moves according to a given schedule and adopts a semi-Markov model to predict nodal meeting events, in order to identify a proper relay to carry the query to the target location and bring the interested information back to the source. However, neither of them considers incentives for the data query in mobile opportunistic crowdsensing.
Proposed Incentive Scheme for Delay-Constrained Data Query and Feedback
In this section, we first introduce some preliminaries, then propose our incentive scheme. To make the definitions clearer, we list them in Table 1. Table 1. The main definitions in the paper.
Definition
Description The deliveries of Query q γ f The deliveries of Feedback f λ q The appraisal of Query q λ f The appraisal of Feedback f Pr c i (δ) The delay-constrained category contact probability (DCCP) The direct delay-constrained contact probability of User i with data providers in Category c with remaining Delay Budget δ The indirect delay-constrained contact probability of User i with data providers in Category c with remaining Delay Budget δ R
Deliveries
Each query is associated with a number of deliveries, which indicates the number of the intended data providers that can reply to the query. Let γ q denote the deliveries of Query q, which can be learned by a counting algorithm [32]. Each feedback maintains a delivery number, which is an indicator of the current estimation of the copies of the feedback in the network. Let γ f denote the deliveries of Feedback f . γ f can be obtained by a split-based approach [33].
Appraisal
Each a copy of a query is associated with an appraisal, which indicates the amount of credits the query issuer is willing to pay to each intended data provider that replies to the query. Let λ q denote the appraisal of Query q. Similarly, each feedback maintains an appraisal that indicates the number of credits the data provider is willing to pay to the user that delivers the feedback. Let λ f denote the appraisal of Feedback f .
Delay-Constrained Category Contact Probability
The delay-constrained category contact probability (DCCP)indicates the probability that User i delivers queries of Category c to data providers within a given delay budget directly or indirectly. Its value intrinsically depends on the aggregated direct and indirect delay-constrained contact likelihood with data providers. The former, i.e., the direct delay-constrained category contact probability of User i in Category c, represents the probability that User i directly meets a data provider that can reply to the queries in Category c within a given delay budget. The latter is the indirect delay-constrained category contact probability, which indicates the probability that User i delivers the queries to the data provider via other users indirectly within a given delay budget. In this research, we adopt the exponentially-weighted moving average (EWMA), which is an effective scheme for online estimation to maintain and update the delay-constrained category contact probability.
The delay-constrained category contact probability is intrinsically the cumulative distribution function of delivery delay [34], which is ideal to support QoS data delivery, but impractical to maintain in continuous time under an arbitrary delay distribution. Thus, we adopt discrete time slots to construct approximate delay distributions, where a slot is ∆ minutes. The delay distribution of a direct link between User i and a data provider in Category c can be represented by a vector [P 1 ic , P 2 ic , ..., P K ic ], where P k ic is the probability that their inter-meeting time is greater than (k − 1)∆ and less than k∆. Such delay distributions can be built via a trivial online learning algorithm according to historical inter-meeting times. Let P c i (δ) denote the direct delay-constrained contact probability of User i with data providers in Category c with remaining Delay Budget δ. When User i meets User j that is not a data provider in Category c, User i maintains the delay distribution between User i and User j, i.e., [P 1 ij , P 2 ij , ..., P K ij ], User j maintains the delay distribution between User j and the data provider in Category c, i.e., [P 1 jc , P 2 jc , ..., P K jc ]; thus, the indirect delay distribution from User i to Data Provider c via User j can be calculated as the convolution of [P 1 ij , P 2 ij , ..., P K ij ] and [P 1 jc , P 2 jc , ..., P K jc ]. LetP c i (δ) denote the indirect delay-constrained contact probability of User i with data providers in Category c with remaining Delay Budget δ. As shown in [35], two-hop relaying achieves the most performance gains. Therefore, we assume that indirect contacts involve only two-hop relaying in the following discussions. We have the DCCP of User i with Category c with Delay Budget δ:
Reward
The reward of a query depends on two factors. First, if a user has a higher likelihood to meet data providers within the remaining delay budget, he or she has a higher possibility to get payed from the query issuer. Second, a query with more copies tends to have a higher value since a user may deliver the query to more data providers in order to obtain more replies. The second factor does not change as long as a query is generated, while the first one is user-dependent. Let R q i (c, δ) denote the reward if User i trades Query q in Category c with Delay Budget δ. We define R q i (c, δ) as: where λ q is the appraisal of Query q and Pr c i (δ) is the DCCP of User i in Category c with Delay Budget δ.
If a query is replied to by a data provider, User i obtains the reply. LetRq i (c, δ) denote the reward if User i trades Replyq in Category c with Delay Budget δ. We defineRq i (c, δ) as: where λq is the appraisal of Replyq andPr j i (δ) is the delay-constrained reply contact probability (DRCP), i.e., the probability that User i meets Query Issuer j directly or indirectly within Delay Budget δ. The calculation of DRCP is the same as DCCP, but based on individual users instead of categories.
The reward of a feedback depends on two factors. First, a user only gains credit when he or she delivers the feedback to the data provider; therefore, the reward depends on the contact probability with the data provider. Second, the data provider only pays for the first copy of the feedback. The more the copies, the lower the probability for a copy to be delivered to the data provider before other copies. LetŘ where λ f is the appraisal of Feedback f andPr k i (δ) is the delay-constrained feedback contact probability (DFCP), i.e., the probability that User i meets the Data Provider k directly or indirectly within Delay Budget δ. The calculation to obtain DFCP is similar to DRCP. γ f is the deliveries of Feedback f .
Utility Function
When User i meets another user, he or she needs to decide whether or not to exchange queries, replies and feedbacks with the latter. Due to his or her selfish nature, User i wants to maximize his or her own reward if the data packet exchange happen. The utility function used by User i to trade the queries, replies and feedbacks is given as follows: where U i is the utility function of User i, φ(i) andφ(i), ϕ(i) andφ(i), and ψ(i) andψ(i) are the set of queries, replies and feedbacks in Category c before and after the exchange, respectively.
Overview of the Proposed Scheme
To facilitate our discussion, we assume that each data packet is associated with a category, a sequence number, an appraisal and deliveries. Let L i ,L i ,Ľ i denote the set of queries, replies and feedbacks at User i, respectively.
1. When User i meets User j, he or she first updates his or her DCCP, DRCP and DFCP, respectively. User i creates his or her candidate query, reply and feedback lists, i.e., . L i ,L i andĽ i are sorted in a decreasing order of the reward of data packets. Note that they are the queries, replies and feedbacks at User j, but not at User i. 2. User i checks if he or she is a data provider for any query in L i . If he or she is, User i requests those queries from User j. For each received query, e.g., Query q, User i replies to it, gives it back and authorizes User j to deliver it to the query issuer. Upon receiving the reply, User j decreases the deliveries of Query q by one, i.e., γ q = γ q − 1. 3. User i checks if it is a query issuer of any reply inL i . If it is, User i pays User j a number of credits, which are equal to the appraisal of the reply, and User j removes the reply fromL j . 4. User i checks if he or she is a receiver for any feedback inĽ i . If he or she is, User i pays User j a number of credits, which are equal to the appraisal of the feedback, and User j removes the feedback fromĽ j . At the same time, User j examines if he or she is a data provider for queries, a query issuer for replies or a receiver for feedbacks similarly. 5. Users i and j bargain about which queries, replies and feedbacks should be traded. The bargaining process is formulated as a two-person cooperative game, and the Nash theorem is applied to reach the optimal solution. Users i and j exchange queries, replies and feedbacks pair by pair according to the Nash bargaining solution.
We summarize the description in Algorithm 1.
Algorithm 1:
Incentive algorithm for delay-constrained data query and feedback.
1: When User i meets User j, User i updates his or her DCCP, DRCP and DFCP and creates his or her candidate query, reply and feedback lists, i.e., if User i is the data provider for query q in L i then 3: replies to the query and gives back to User j; 4: L i = L i − {q}; 5: γ q = γ q − 1; 6: else if User i is the query issuer for replyq inL i then 7: pays a number of credits to User j; 8: else if User i is the receiver for feedback f inĽ i then 9: pays a number of credits to User j; 10: end if 11: Users i and j exchange queries, replies and feedbacks pair by pair according to the Nash bargaining solution.
Game Theory Model
The two-person cooperative game allows players to reach a binding agreement based on the conflicting interests. Given the selfish nature of the two persons, the binding agreement, once reached, must promote the interest of both persons.
The solution for a two-person cooperative game is given by: where (Û i ,Û j ) is the optimal solution, also called the Nash solution;Û i andÛ j are the utility gain of User i and User j in the Nash solution, respectively; U i and U j are the utility gain of User i and User j, respectively; (U i , U j ) forms the utility gain space S; (D i , D j ) is the status quo point in space S, usually defined as the utility gain of no cooperation, i.e., (0, 0); (U i − D i ) × (U j − D j ) is called the Nash product. What the Nash theorem [36] says is that, for a two-person cooperative game, the solution (U i , U j ) that maximizes the Nash product While the optimal solution can be obtained by applying the two-person cooperative game, it is non-trivial to get such an optimal solution. Since queries, replies and feedbacks are traded separately, we take query trading as an example. We assume the final traded query lists are L i and L j . We define l = L i = L j , m = L i and n = L j . Apparently, l ≤ m and l ≤ n. We assume m ≤ n. In order to obtain the optimal solution, the total number of query exchanges should be ∑ m l=0 (C l m × C l n ) = O(2 m ). We can see that the computational cost to get the optimal solution is exceedingly high due to the great number of data packets in the list, rendering it un-scalable to the real implementation. Therefore, we propose a heuristic algorithm by trading only one pair of data packets at a time. The algorithm selects the data packet pair that results in the maximum Nash product for trade. Therefore, the computational complexity is O(mn). The selection process repeats until L i or L j is empty, or there no feasible solution for Equation (6).
Distributed Online Auction Algorithm
Since replies and feedbacks are not real credits and cannot be used to pay for delivery service directly, even if the trading process helps cashing them in more quickly, there is a possibility that some users may starve due to being out of real credits, and some users may be wealthy, withholding enough credits. If those two users meet and the wealthy user can buy some replies and/or feedbacks from the starving one using real credits, the credit circulation in the network will be greatly improved. Furthermore, this process also does good to both. From the perspective of the starving user, he or she is desperate to get real credits to initiate or pay for his or her own query dissemination process, and he or she is happy to sell some replies and/or feedbacks, even if less than the reward. From the perspective of the wealthy user, the reason why he or she is willing to buy the data packets is that he or she may get profits from cashing in replies from query issuers and feedbacks from data providers. In this way, both users benefit from the data packet buying/selling process. We formulate this process as an auction model. Now, we come to the question of how the seller decides which bid he or she should accept or reject. Since the seller meets the buyers in a subsequent order and he or she does not meet Buyer A, Buyer 1 places two credits for a reply, and he or she accepts the bid and sell the reply for two credits. While at time t 2 , Buyer B may place five credits for the same reply. In this case, the seller does not find the best buyer for the reply and loses the chance to maximize his or her gain. Thus, it is essential to choose the best buyer to make an irrevocable decision to sell the reply. In general, User i may meet a sequence of users, similar to a stochastic process. He or she must make an adaptive, online decision on which bid should be accepted, in order to achieve the optimal gain. We observe that the online auction process matches the optimal stopping problem with a finite horizon and tries to find the best buyer for the replies and/or feedbacks among all of the buyers.
Based on the above observation, we propose a distributed online algorithm based on the optimal stopping theory. We first present an analysis followed by the protocol design.
Analysis
Since we are concerned about the problem of accepting a bid within a certain delay budget, we propose a distributed approach based on the stopping rule problem with a finite horizon. A stopping rule problem has a finite horizon if there is a known upper bound on the number of stages at which one may stop.
In order to facilitate the following discussion, we denote the remaining delay budget δ as T − τ, then we denote the expected reward of a reply with remaining delay budget (T − τ) as X (T−τ) , and we define Y τ as the return when User i decides to accept a bid after τ time slots, i.e., where (T − τ) is the remaining delay budget to deliver the reply and g τ means that the further delivery is at the cost of the decrease in the delay budget. We denote the pdf of X as f (x). For the online auction problem, User i will meet a certain user (or users), which may help he or she to deliver the reply at time slot τ, with Y τ observed. User i may decide to stop at time slot τ or to continue to meet other users. Therefore, the online auction problem can be considered as an optimal stopping problem with an objective to find the optimal stopping time that maximizes the expected return, i.e., We define V τ as the maximum return the user can obtain if the user accepts a bid with Delay Budget T after τ time slots. At τ, we compare the return for stopping, namely Y τ , with the return we expect to be able to get by continuing and using the optimal rule for time slots τ + 1 through T, which at time From Equation (9), we can see that E(V τ+1 (Y τ+1 )) serves as a threshold in the sense that if Y τ is above the threshold, it is optimal for the user to accept the bid. We define the threshold at time slot τ as: Then, we can obtain the optimal stopping strategy of the online auction problem as follows.
Theorem 1.
For the online auction problem, it is optimal for the user to accept the bid if the following condition is satisfied at τ, Theorem 2. For the online auction problem, the threshold of the optimal stopping strategy is given by: (13) . . . (14) Proof. When T time slots are consumed, the reply cannot be delivered to the query issuer due to the expiration of the reply; therefore, the reward is zero, i.e., ρ * T = 0. Then, we have Y T ≥ ρ * T . Then, according to Equation (10), we can obtain ρ * T−1 as: Combining Equations (12) and (13), we can next compute {ρ * τ } T−2 τ=0 by the backward induction as:
Protocol Design
After the query is replied to by the data provider, it will meet a set of intermediate users.
As introduced in Section 3.2, letRq i (c, τ) denote the expected reward of Replyq in Category c at User i that intends to get payed from the query issuer.
We consider that User i meets User j at time slot τ; User j places a bid b j ; if the reply is not sold to User j, then in the next time slot τ + 1, the expected reward of User i to deliver the reply will becomeRq i (c, τ − 1).
According to Theorem 1, User i accepts the bid if and only if the following condition is satisfied: Note that feedbacks can be handled similarly. Since the communication opportunity is low, transmission is often between two users only. If more than two users are within communication range, we assume an underlying medium access control protocol (e.g., IEEE 802.11) that randomly selects one user as the seller and another as the buyer.
Performance Evaluation
We have carried out simulations to demonstrate the efficiency of the proposed schemes. In this section, we first introduce our simulation setup and then present simulation results.
Simulation Setup
We have compared the performance of different schemes as summarized in Tables 2 and 3: the "selfish" scheme, where no cooperation exists among users, and thus, queries are replied to and feedbacks delivered only when query issuers meet data providers directly; the "cooperative" scheme, where users are fully cooperative and always choose the most valuable data packets to carry; the "TFT" scheme, where a user forwards as much traffic for a neighbor as the neighbor forwards for him or her [37]; and our proposed incentive schemes denoted by "incentive" and "incentive with auction". We have evaluated our proposed schemes under two real-world traces, i.e., the Cambridge Haggle trace and the UMassDieselNettrace. The former involves 98 iMotes and Bluetooth devices and runs for a total period of about three days. The latter is based on a MOSNtestbed, which is constructed by 37 transit buses, serving an area of approximately 150 square miles for a period of about two weeks in 2006.
We assume there are 30 categories in the network. The queue size of each user is 250. The initial credit for each user is 120. The query issuer generates one query every 15 min in a random category with the delay budget randomly distributed from one hour to 10 h. For the online auction process, the bid price is set to be randomly distributed from 70% to 100% of the reward of the query or feedback when it is generated. The claimed expertise in each category for each user is randomly set and learned and updated during the simulation.
Performance Comparison
We are interested in the following metrics for performance evaluation: query reply rate, query delay and transmission overhead. The query reply rate is defined as the ratio of the total number of replied queries to the total number of queries generated. Query delay is a measure of how long a query issuer waits to get a query reply. Transmission overhead is defined as the ratio of the total number of transmissions to the total number of replied queries. Tables 2 and 3 compare the overall performance of different schemes based on the Haggle trace and the DieselNet trace, respectively. The high query reply rate of "incentive" is attributed to the fact that users are well stimulated by employing the reward of data packets, leading to highly efficient data transmission. It seems counter-intuitive that the query reply rate of "cooperative" is lower than that of "incentive", because users are all altruistic with respect to helping each other by always choosing the most valuable data packets to carry. It actually makes sense that cooperation always makes data packets aggregate quickly and, thus, dropped due to queue overflow. While "TFT" considers selfishness, its query reply rate is lower than "incentive", because maintaining a mutual forwarding balance wastes useful contact opportunities. Moreover, we can see that "Incentive with auction" further improves the query reply rate. This is because the auction process makes users have more chances to cash in the replies and feedbacks and to get credits to pay for their delivery service and initiate more query dissemination and feedback retrieval. Finally, users under "selfish" do not cooperate at all, resulting in the lowest query reply rate.
The shorter delays of "cooperative", "incentive" and "incentive with auction" are contributed to by the fact that they leverage the packet value to estimate the probability to deliver the packet and choose the best routes to forward it. "Selfish" exhibits the longest delay because a query is delivered and replied to only when the query issuer and the data provider meet directly. Although the source in the "TFT" scheme specifies the complete route for each generated packet, packets may not always follow the best routes due to the "TFT" constraint and results in a longer delay. Moreover, "cooperative" has a much higher overhead than "incentive", because its altruism leads to more packets to be duplicated and distributed in the network. In contrast, the proposed "incentive" and "incentive with auction" achieve very low overhead, because a user receives a query only if the query is redeemed as a benefit for the user. Clearly, the overhead of "selfish" is always one, because a user only replies to the queries in its own categories. Figure 2 illustrates the distribution of available credits under "incentive" and "incentive with auction". Credits are consumed for replies and feedbacks. The more credits a user owns, the more queries it can disseminate and the more feedbacks it can retrieve. We can see that most users have available credits less than 120; this is because they all hold some replies and feedbacks waiting to be delivered to the query issuers and data providers. Fifty four percent of users keep their available credits around 40 to 100 in "incentive"compared to 68% in "incentive with auction". This indicates that the auction process helps users keep a better balance of credit retrieval and consuming. Figure 3 shows the performance of the query reply delay if we change the credit amount of one query. A user is chosen as an example, while similar results are observed for other users, as well. We increase its credit amount from 1 to 6, while all other queries keep one. As can be seen, the query reply delay decreases with the increase of credit. This is because a higher credit indicates more rewards if a user successfully delivers the query, resulting in stronger incentives to stimulate nodal collaboration. Figure 4 illustrates the average number of packets exchanged when two users meet. We can see that "incentive" exchanges more packets than "TFT", but less than "cooperative". The more packets are exchanged, the more resources are consumed. Sixty-one-point-two percent of users in "TFT" exchange less than 10 packets per communication, because the constraints in "TFT" enforce bilateral balances. Sixty-nine-point-four percent of users in "cooperative" exchange more than 30 packets per communication. In "incentive", users keep a good balance between their own gains and contributions to the network. Forty-eight percent of users exchange 10 to 20 packets per communication. Since "incentive with auction" does not impact the packet exchange factor, we omit its performance here.
Available Credits
The distribution of failed transmissions among users is depicted in Figure 5. The failed transmissions are due to the lack of cooperation opportunities, which consequently leads to credit shortage. We can see that 60% of the users do not have any failed transmissions, and the average number of failed transmission is less than five, showing that the two-person cooperative game helps both users obtain gains and prevents unilateral benefit. Furthermore, the auction process helps about 27.6% of users reduce their failed transmissions, thus improving the credit circulation in the network. Figure 6 shows the convergence of the claimed expertise to the ground truth. We randomly choose a user as an example. As we can see, the feedback mechanism effectively adjusts the user's expertise, gradually approaching the true value within a few hours. Figures 7 to 9 illustrate the performance trend with the variation of several network parameters based on the Haggle trace. The DieselNet trace shows a similar trend. With the increase of queue size, the query reply rate of all schemes increases (see Figure 7). Particularly, "cooperative" increases significantly, because a longer queue allows more data packets to be buffered for a longer time, thus increasing the probability of query delivery. At the same time, query delay decreases. On the other hand, the overhead increases rapidly. Since "TFT" is constrained by the amount of traffic forwarded for others and "incentive" and "incentive with auction" exchange data packets based on self interests and aim to maximize their rewards, the increase of queue size has marginal impact on the performance of "TFT" and the proposed "incentive" and "incentive with auction".
Number of Packets per Communication
The impact of traffic load is illustrated in Figure 8; we vary the packet generation rate. With the increase of the query generation rate, the query reply rate decreases. Given the limited resources at individual users, the higher the generation rate, the longer the queries and replies need to reside at the sources and intermediate users, leading to longer reply delay. The overhead increases as well, since more data packets are duplicated during their transmissions. Figure 9 compares the performance by varying the delay budget of queries. Figure 9a shows that with the increase of the delay budget, all schemes achieve a higher query reply rate. We notice from Figure 9b that most queries can be replied to within eight hours in "cooperative", "incentive" and "incentive with auction", while "selfish" and "TFT" need to keep queries longer in the queue. Moreover, the overhead of all schemes increases with the increase of the delay budget. This is because a longer delay budget allows queries to stay longer in the queue, resulting in a better chance to be duplicated and distributed in the network.
Conclusions
We have proposed an incentive scheme to stimulate cooperation between selfish users for data query and feedback in mobile opportunistic crowdsensing. Queries, replies and feedbacks can be traded between mobile users. We have proposed an effective mechanism to define rewards for queries, replies and feedbacks and formulate user interaction as a two-person cooperative game. To improve the credit circulation, we have considered a bid placing problem. We have proven that the problem can be formulated as an optimal stopping problem, given a closed-form expression for the threshold of the optimal stopping strategy and then developed an online auction algorithm based on the optimal stopping strategy that makes an efficient decision on every bid. Extensive simulations have been carried out based on real-world traces to evaluate the proposed schemes. We leave further study on a testbed experiment using off-the-shelf Nexus tablets to demonstrate the feasibility and efficiency of the proposed algorithms and to gain useful empirical insights, as well as possible consideration of data fusion as our future works. | 9,588.6 | 2016-07-01T00:00:00.000 | [
"Computer Science",
"Economics"
] |
Current Status of Ceramic Industry and VR Technology Used in Ceramic Display and Dissemination
With the deepening of reform and opening up, the development of China’s ceramic industry has been rapidly improved, leading the world, and various ceramic varieties have also been greatly developed. However, as the growth rate of the global economy has gradually slowed down and structural imbalances have become more obvious, China’s economy has gradually entered a new development trend. In the context of supply-side structural reforms, the severe macroenvironment and policy pressure to eliminate backward production capacity have further promoted the development of China’s ceramic industry to face greater challenges. In the context of the rapid development of various high-tech technologies such as “Internet +” and intelligent manufacturing, this paper discusses the use of VR technology in the design of ceramics from the principles and characteristics of ceramic design and, according to the characteristics of virtual design of ceramics, demonstrates the feasibility of its shape, decoration, color matching, and so on. ,e ceramics are classified according to their use functions, and the characteristics of different types of virtual display of ceramics and their suitable virtual display methods are discussed. Finally, this paper combines panoramic image display technology and graphic VR display technology to create the best virtual display method suitable for different types of ceramic products, implements the interactive design in virtual software, and then performs virtual display.
Introduction
Over the past 30 years of reform and opening up, China's ceramic industry has developed rapidly. China has taken the forefront of ceramic development, becoming the center of ceramic manufacturing and the main producer of ceramics, with the first annual output and export. Chinese daily ceramics are about 70% of the world, 65% and about 50% of sanitary ceramics, and 64% of construction ceramics. Due to the obvious labor cost and resource advantages, the competitiveness of China's ceramic industry is also being rapidly improving, and its position in the world ceramic market is being rapidly improving.
In recent years, China's ceramic industry has grown rapidly at an average annual rate of more than 20%, is at the middle development level in all industries of the national economy, and far exceeds the growth rate of GDP. e ceramic industry has developed into one of the important industries to promote the sustainable, steady, and healthy development of China's national economy.
Despite the continuous improvement of the technology and development of China's ceramic industry, the domestic and foreign markets have grown rapidly and have achieved good results. However, as the global economic growth rate gradually slows down and the structural imbalance becomes more obvious, China's economy has gradually entered a new development trend. In the context of supply-side structural reform, the severe macroenvironment and the policy pressure of eliminating backward production capacity further promote the development of China's ceramic industry that is facing greater challenges. However, with the continuous improvement of Chinese residents consumption level, "Internet +," and intelligent manufacturing rapid development of various high-tech technology, as well as the rapid implementation of the development of ceramic industry special planning, China's ceramic industry ushered in a good development opportunity, supporting the development of China's ceramic industry in the future in a good and rapid direction. is paper mainly analyzes the development status of China's ceramic industry in recent years in a narrative manner and combines the advantages of VR technology in the promotion and display of the ceramic industry to do research. Based on the application of VR technology in ceramics, some scholars have made some comments from different perspectives. ese studies can be roughly divided into three categories. One is the analysis of the local kiln opening display from a case study [1], which focuses on the use of VR technology to display ancient cultural relics. e point of view is that the display of ancient cultural relics in the real scene is likely to cause damage to the artifacts, and not only can the use of VR technology satisfy people's appreciation of ancient cultural relics, especially in the postepidemic era, but also people can display them in VR without leaving their homes. Feel the charm of ceramics in the virtual space. One is the use of VR technology in the design of ceramic products [2], mainly discussing how to apply cutting-edge VR technology to improve users' interactive virtual experience of products on e-commerce platforms. Solve the contradiction between mass production and individual differences in consumption; at the same time, it discusses how to use it to realize the reproduction, repair, and data storage of ceramic products and how to realize the construction of a virtual inspection and evaluation system for ceramic products; third class is the application of VR technology in modern ceramic display design [3]. e point of view is that the ceramic exhibition hall is a platform to showcase the ceramic culture and disseminate ceramic art to the audience. As the ceramic capital of millennia, Jingdezhen ceramic art has been promoted to a large extent anks to the development of exhibition art, with the development of technology, virtual reality technology is also promoting the development of ceramic exhibition halls. e current basic characteristics of immersion, interactivity, and sharing of virtual reality technology are designed for ceramic exhibition halls. Work brings new inspiration and vitality. e application of VR technology to the design of ceramic exhibition halls is bound to be the development trend of ceramic exhibition hall exhibition design in the future. e above-mentioned research results are to explain the application of VR technology in ceramics display and design in a small range, and the application methods and technical points of VR technology had not been involved in the analysis of China's ceramic industry; this will be the focus of this paper. In the coming extraordinary period, global economic development will generally recover. Under this background, there will still be greater uncertainty and instability. e global trade environment has also become worse, and it is unlikely to improve in the short term. Moreover, with the introduction of the TPP trade agreement headed by the United States, China, as the world's largest producer and exporter of ceramic products, has also been excluded from the agreement. With the arrival of US President Donald Trump, the TPP agreement was canceled during this period, which is also a very rare opportunity for the development of China's ceramic industry.
Ceramic Development
At the same time, we also need to see that the overall trend of international economic development presents new opportunities. World economic development focus gradually shifted from the established developed countries to the current emerging markets and developing countries or economies, industrialization and urbanization space, may form huge demand for production and life, and will have a great impact on the development demand and regional distribution of Chinese ceramic products. Developing countries and emerging economies are also playing an increasing position in China's ceramic industry exports.
However, the international market demand for daily ceramics has a trend of slowing down under the environmental impact of the global economy. However, the demand for high-grade and quality daily ceramics is increasing year by year. is situation has caused the international manufacturers of daily ceramics to focus on the high-grade and cultural and artistic characteristics of ceramics; those products with collection value and gift value, good quality, full function, and novel decoration are gradually favored.
is situation is an opportunity for Chinese producers. Since ancient times, the ceramic industry has had a profound cultural heritage, the ceramic products in each producing area are distinctive, and the whole can produce complementary utility to seize the international market. China is a large import country of ceramics between the United States and the EU, USD 22,222 billion from China, accounting for 38.56% of the total ceramic imports, and $1.698 billion from China in 2016, representing 43.67% of the total ceramic imports. From the total ceramic import output of the United States and the EU in 2016 (Figures 1 and 2), China has an obvious ceramic industry in these two regions.
Influence of the Domestic Economic Environment on the Ceramic Industry.
As the Chinese market economy enters a structural slowdown, GDP slows from high development to medium development. China is also facing great downward pressure on the reverse of transformation and upgrading of economic structure. Under increasing economic downward pressure, decreasing marginal effect of monetary policy, increasing resource and environmental constraints, rising factor costs, and overcapacity industries, "three high" type pollution industries face "overcapacity," and the ceramic industry is also affected. Under the background of supplyside structural reform and development, the ceramic industry is bound to face the extremely severe market competition pressure and the huge constraints of the environment, and the transformation and upgrading will also become an inevitable choice for the ceramic industry in the future development process.
Over the years, the rapid growth of GDP has gradually become an important prerequisite for the stable growth of household income, covering the social security system of towns and villages, and specific consumption habits have also been initially formed. e current desire to upgrade consumption is extremely strong. In addition, in the context of accelerating the construction of new urbanization, new consumer demand has also created a new market for the transformation and upgrading of the ceramic industry. Based on the recovery of the real estate market, continue to promote the rapid recovery of the downstream building and health ceramics industry. During this period, new rigid demands and improved demands were also activated. e development of the ceramic industry is facing new development opportunities. e "Internet + ceramics" model has been continuously developed in the exploration of the modern market, which has promoted the rapid integration and development of the traditional ceramic industry and the emerging service industry and promoted the structural transformation and upgrading of the ceramic industry.
Development Environment and Technical Status in the Ceramic Industry.
ere are a large number of enterprises in the ceramic industry, and the average scale is relatively small, with weak research and innovation ability and low brand value. A series of problems have become very prominent at present, resulting in problems related to low-level repeated construction, unreasonable industrial layout, and the imbalance between supply and demand caused by the rapid growth of production capacity. In recent years, as the whole society has increased awareness of environmental governance, energy conservation, and emission reduction, the rapid increase of energy consumption and environmental protection costs restricts the development of the ceramic industry, because the problem has been "closed, stopped, merged, and transferred" production lines or those seeking pollution "haven" are gradually increasing. A large number of backbone enterprises with large scale, advanced technology, standardized management, strong brand awareness, and a strong sense of social responsibility in the industry have achieved good results in product quality, energy conservation, emission reduction, economic benefits, and many other aspects, and the industrial concentration degree has been gradually improved. However, as far as the overall operation and development level of ceramic enterprises, the number of enterprises that can participate in the whole industrial chain of the cooperation is not large. e marketoriented cooperation of the whole industrial chain still needs the further deepening and development of the ceramic industry. At the same time, China's ceramic industry has its typical disadvantages, mainly small and medium enterprises, difficult to quickly integrate into the global value chain. e competitive advantage is low factor cost and low tax incentives. Fusion positioning and homogenization of competition are very prominent; the low-level competitive advantage is prone to an antidumping investigation. In addition, small-and medium-sized ceramic exporters also have the risk of being replaced by lower-cost developing country enterprises, relying on low value-added products to support the industry's rapid growth model. e technical environment of China's ceramic industry has been continuously improved, and the industrial technology research and development have basically entered a virtuous circle, but the intellectual property protection system still needs to be further improved. In the process of global competition, key technologies still need to be Scientific Programming improved. In recent years, the construction of intellectual property protection system in China's ceramic industry has gradually tended to the overall benign development but is not ideal in intellectual property protection and law enforcement effect; the infringement still needs to be strengthened; the victim relief measures are still imperfect, resulting in the insufficient innovation motivation of small and medium-sized enterprises. "Promoting porcelain through science and technology" has gradually developed into a consensus of enterprises. Investment in research and development and technology promotion is constantly increasing, providing solid technical support for the transformation and upgrading of the industrial structure. To further improve the development quality and output efficiency of the building and healthy ceramic industry, prevent excessive growth, curb low-level repetitive construction, and promote the transformation and upgrading of the ceramic industry, relevant departments have also formulated or further revised a series of necessary development plans. Introduce the technical conditions for the development of smart ceramics and the basic conditions for its application. In addition to the various production technologies of the porcelain areas themselves, these technologies vigorously develop VR technology and AR technology, which are a more practical and scientific way to publicize and display Chinese ceramics in the world. Because VR technology does not need to work in the porcelain area and public display space, at home you can browse the representative ceramics in the virtual space with the Internet and computer, which effectively plays a role in promoting the publicity of Chinese ceramics.
VR Technology.
VR technology is an advanced computer man-machine interface technology with basic features of immersion, interactivity, and conception, which integrates the science of human and information [4]. It comprehensively utilizes computer graphics, simulation technology, multimedia technology, artificial intelligence technology, computer network technology, parallel processing technology, and multisensor technology to simulate human vision, hearing, touch, and other sensory organ functions, so that people can be immersed in computer generation in the virtual realm and can interact with it in real time through natural means such as language and gestures, creating a humanized multidimensional information space. Users can not only experience the fidelity experienced in the objective object through the virtual reality system, so that people have a kind of "immersive" sense of reality, but also can break through the space, time, and other objective constraints and feel in the real world. Experience cannot be experienced personally [2].
Due to the main characteristics of VR technology such as immersion, interactivity, and conception, in the current society, especially in the postepidemic era, it has a very obvious advantage in the promotion and display of Chinese ceramics. In this process, companies use modern technological means of VR technology to achieve good interaction between users and products in the process of publicity and display in an effective way. Even under certain conditions, viewers can also use VR. Technology is involved in the design of ceramic products.
3D Modeling of VR Technology Intervention Display.
e interactive characteristics of VR video refer to the audience's subjective consciousness and the right to operate things existing in space; on the other hand, virtual reality space accordingly gives natural and reasonable feedback and interaction behavior. e generation of VR video interactivity requires some supporting equipment such as using VR glasses, VR helmets, and data gloves so that the audience can feel the same in the real world through natural contact. e quality of the interactive hardware device affects the content delivered. It is because of the interaction function that determines that the effect of video delivery varies from person to person. e implementation of audience interaction in VR video also reflects the nature of the current interaction. is is also an improvement of Internet interaction that implements interactive behavior. Real-time feedback improves the value of the interaction behavior [5,6].
SolidWorks is the mainstream solid modeling software based on parametric geometric features. SolidWorks uses geometric features as the design unit and uses geometric features to build part models. Geometric features are the basic units that make up a 3D model. In SolidWorks, geometric features are divided into sketch features and directly generated features according to different production methods. Before designing the model, it is necessary to decompose the complex model, establish a general function sequence, and clarify how to determine the sketch and reference level of each function [7].
Entity modeling software requires users to have certain 3D reverse thinking and can split a complex 3D model into groups of sketch features or directly generate feature combinations. At the same time, in the process of creating the basic features, how to choose the benchmark plane and how to choose the sketch plane are a test of the user's ability and experience. Polygon modeling software requires users to have a strong sense of space and space sense, and reasonable structure control ability, reasonable wiring ability, 3D model structure control ability, and 3D model grid distribution ability are also an index [8] to distinguish the level of polygon modeling ability.
e Production Process of VR Technology Virtual Display
Design.
e need for the virtual display of ceramic products is mainly based on the virtual reality display technology of 3D modeling. e display principle can be embodied by the following design and production steps: (1) According to the purpose and content of the display, the designer uses 3D MAX, MAYA, and other types of three-dimensional modeling software to construct the digital model and make related optimizations.
Scientific Programming
(2) Use the corresponding mechanism material materials to map or render the built ceramic product three-dimensional model, give the relevant scene lights in the virtual space, and then adjust the relevant parameters according to different space display requirements. (3) Create animation effects after setting up the scene camera. (4) After the first three parts are completed, save them in the corresponding format and then import them into the VR software for interactive design production, so that it has the function of the interactive operation. At the same time, multimedia information such as sound effects, text, and UI interface should be added. (5) After the interactive production is completed, the output file can be used in practice according to the type of release.
e design process is shown in Figure 3.
Realization of VR Technology in Ceramic
Display. e three-dimensional modeling design based on virtual reality display has its characteristics. Its rich display effect allows the audience to watch it. In addition to zooming in and out, it can also rotate at any angle, modify the selection of colors and patterns, and so on. e interactive performance is much better than the way of displaying panoramic video. erefore, for the realization of VR technology in the display of ceramic products, we must first figure out how to encode and decode VR panoramic images. Among them, the process of encoding and decoding is essential to analyze and decode each frame for projection to realize panoramic video. We proceed as follows.
One is to decompose the panoramic video image into a single frame image. Since ceramic products are mostly threedimensional, it is most appropriate to analyze the ceramic display with the principle of panoramic projection of geometric spheres as a case. First, perform different projection formats according to different geometric models [9]; then, according to the actual situation, expand the geometry module [10] and again rearrange the geometry by different layout methods module; finally, the spherical image is converted into more conventional flat rectangular images, and these images are regarded as panoramic frames and correspond to the two-dimensional flat frames.
Second, the sequence of two-dimensional plane frames is encoded and compressed to obtain a data stream for video storage or transmission.
After the above two processes, the panoramic video encoding is completed. For decoding, it is to reverse the sequence of the encoding process to achieve decoding.
Spherical panoramic images are generally represented by a three-dimensional sphere, and the coordinates on it are mostly following the rules of right-hand operation and are represented by a three-dimensional coordinate system. e points on the sphere can refer to the marking method of the three-dimensional scanner and perform punctuation on several latitudes and longitudes [11]. Longitude takes the x-axis as the coordinate direction and uses counterclockwise rotation, and the rotation angle value is positive. On the contrary, when the rotation angle is clockwise, the rotation angle is negative. e longitude value is represented by π, and the value range is [−π, π]. Taking the equator on the sphere as the coordinate, the latitude uses the Y-axis as the coordinate direction, the coordinate point moves toward the north pole, and the angle value is positive. On the contrary, if the coordinate point moves to the south pole, the angle value is negative. erefore, the latitude value range is [−π/2, π/2]. e coordinates of a point on the unit sphere can be expressed by latitude and longitude coordinates [Φ, θ] or by three-dimensional coordinates (X, Y, Y), which are expressed as follows: (1) rough the above analysis, the essence of panoramic image projection is to project the panoramic image frame and all pixels on the spherical texture in a certain way and then convert the 3D video image into a 2D plane video frame image. at is to say, to realize the VR technology to display the artifacts in the virtual space, it is necessary to establish a geometric model, map the coordinate points on the spherical surface to the surface of the geometric body, complete the spherical video projection to the two-dimensional plane [12], and transform and rotate the texture pixels on each surface of the geometry.
From another level, in the panoramic video image, the spherical panoramic image can also be realized according to the idea of equidistant cylindrical projection. e method is to use coordinate points of the same value to expand the latitude line data on the sphere and map it on a two-dimensional plane to obtain a rectangular video. In the unified plane coordinate system, the plane coordinate points use U and V to represent the values, and the range of values is (0,1). e latitude and longitude coordinate points on the spherical surface are (θ, Φ); thus, the spherical panoramic image to the plane image conversion, the corresponding Scientific Programming method of its value, is obtained by a formula using the coordinates of the plane point (U, V) (formula (2)). en, the three-dimensional point coordinates (X, Y, Y) on the spherical surface are calculated by formula (1).
If any point (X, Y, Y) in the three-dimensional space is inversely converted into a two-dimensional plane point, the longitude and latitude values (Φ, θ) can be obtained by formula (1), and the value of the plane point (U, V) can be obtained by formula (2) [1].
Display
Mode of the Ceramic Space Scene. VR technology is involved in the design of ceramic virtual exhibition space; one not only can feel the atmosphere of the scene but also can better observe the objects. is method is more suitable for ceramic products and artworks, allowing viewers to fully and profoundly reflect the performance of ceramic products in the space atmosphere. It can not only display ceramic products and artworks but also use other ceramic categories, such as architectural ceramics, industrial ceramics, and special ceramics. However, from the perspective of display purposes, ceramic products and artworks require a panoramic view, object body, and scene. Several display methods cooperate to realize viewing in virtual space. First of all, in the environment of panoramic display mode, the displayed objects are relatively fixed and can only be moved 360 degrees through the lens. During the movement of the lens, the viewer will appreciate the complete display space scene as the lens moves. Secondly, look around the 360-degree landscape from a height. Secondly, use the object display. In this display mode, the lens is relatively fixed. By rotating the object up, down, left, and right, the viewer can observe the object; the last is the scene display, which is both the object and the lens. Generate movement, such as setting multiple observation points; you can walk from one observation point to another; that is, you can watch the scene as well as the object, and you can feel the object from the atmosphere of the scene.
Regardless of the way of display, the show requires multiple observation points; through VR technology and interactive design in the virtual software with corresponding background music, the viewer can comfortably roam from one display scene in the virtual space to another display scene; in the virtual display space, the viewer can not only appreciate the whole object but also magnify the part of the object and appreciate the local details of the object; under the background music rendering, the viewer is completely immersed in the appreciation in the charming space of ceramics. At the same time, the viewer can zoom in and out, move up and down, watch from multiple angles, and change the glaze color or decorative style of the ceramics in the virtual design [1]. Especially in the postepidemic era, some public exhibition spaces such as museums have been affected by the epidemic, and the flow of people has been correspondingly restricted. e introduction of VR technology into the design and display of ceramic products effectively solves the limitation of not being able to go to the museum to watch the real scene but also narrates the distance between people and the objects and increases the enthusiasm of the public to participate in ceramic design.
VR Technology Used to Display Ceramic Products Helps
Spread China's Ceramic Culture. China has a long ceramic culture and there are so many ceramic-producing areas in China. e audience cannot visit all the ceramic-producing areas for on-the-spot inspections to experience the characteristics of the utensils in different producing areas. In addition, the porcelain is fragile, which is displayed in the real space. e above brought certain difficulties, which made it difficult for Chinese ceramics to be accepted by the masses, resulting in the inadequate dissemination of ceramic culture. e emergence of VR technology provides a new way for the dissemination of ceramic culture, breaks through the limitations of traditional communication methods, and adds a boost to the dissemination of ceramic culture. Relevant units and social organizations can use VR technology to spread ceramic culture, make full use of the characteristics of VR technology, and display ceramic culture to the masses in a novel way so that ceramic culture can be better spread. e masses can also make full use of VR technology, VR cultural promotion center, or their VR equipment to understand the ceramic culture.
e Viewer Has a Diversified Sensory Experience for Ceramic Display under the VR Field of Vision.
e traditional display method mainly adopts real-time display, which consumes a lot of manpower and material resources in the preparation process, and it is also easy to cause damage to the ceramics. In addition, the flow of people is often restricted during the exhibition, so that the viewer cannot get the sensory experience of the appointment. In the current era of information development, traditional display methods cannot meet the needs of the masses and therefore cannot meet the needs of ceramic display and ceramic culture dissemination. VR technology has unique advantages and strong interactivity, which can effectively strengthen the interaction between the experiencer and ceramic culture. e VR scene designer can construct virtual historical characters in the VR scene, interact with the experiencer, and enhance the experience of the experiencer. At the same time, VR technology can present sensory sensations such as hearing and touch, enabling the experiencer to obtain a diversified sensory experience.
VR Technology Makes the Audience Situation of Ceramics
Exhibition Tend to Be Good. In recent years, with the strong support of the government and relevant departments, intangible cultural bases and ceramic industry inheritance centers have been established everywhere, and ceramics can be displayed and disseminated to the maximum extent. As the broad masses of the people, we have the responsibility and obligation to promote, protect, and inherit Chinese traditional culture. Compared with the encouraging, compulsory, and guiding propaganda methods adopted by the government and relevant departments, mass groups have the advantages of wide dissemination and high information openness. is result has made the public actively appreciate the sensory charm brought by ceramic culture and spontaneously join in the promotion of Chinese ceramic culture, becoming the main force of ceramic culture promotion. In the process of studying the bluish-white porcelain of Fanchang kiln, the author took the bluish-white glazed phoenix head pot (Figure 4) as the display object, explained the characteristics of the phoenix head pot in detail to the audience in the virtual scene of VR, and analyzed the main points of the design of the utensil. After obtaining this information, he quickly became a disseminator, making Fanchang kiln blue and white glaze a star display product. Based on this, to obtain audience data, the author publishes questionnaire information through his social circle, museum visitors, and student groups. As shown in Figure 5, among the 100 users, 40 are women, 40%, and 60 are men, accounting for 60%. Most of them are 21-40 years old, followed by 41-60 years old, under 20 years old, and 61 years old and above. e results show that the use of VR technology in the display of ceramic products has increased the number of audiences to a certain extent, and there is a tendency to increase.
Conclusion
With the development of science and technology, VR technology as a display medium provides an opportunity for the development of the ceramic product industry. e use of VR technology for display allows people to perceive products as if they have entered a real scene, and they can also interact with products in the scene at any time. is method not only plays a role in protecting ceramic products but also provides a superior path for the spread of ceramic culture. VR technology has realized the essential characteristics of different ceramic products and the physical functions of ceramic products. Take targeted solutions to explore the most suitable virtual display performance methods for different types of ceramic products, and accurately express the display content and display of different types of ceramic products. e key point is to realize the viewing and interaction of ceramic products from different angles, and the virtual display method can be applied to the production of the virtual display through the network and interactive projection. Let VR technology make up for the disadvantages of traditional exhibition halls. In terms of interactivity, both customers and merchants can obtain accurate information and promote the development of the ceramic industry.
Data Availability
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
Conflicts of Interest
e authors declare that they have no conflicts of interest. | 7,182 | 2021-10-26T00:00:00.000 | [
"Materials Science"
] |
Directly Printable Frequency Signatured Chipless RFID Tag for IoT Applications
This paper proposes a low-cost, compact, flexible passive chipless RFID tag that has been designed and analyzed. The tag is a bowtie-shaped resonator based structure with 36 slots; where each patch is loaded with 18 slots. The tag is set in a way that each slot in a patch corresponds to a metal gap in the other patch. Hence there is no mutual interference, and high data capacity of 36 bits is achieved in such compact size. Each slot corresponds to a resonance frequency in the RCS curve, and each resonance corresponds to a bit. The tag has been realized for Taconic TLX-0, PET, and KaptonHN (DuPont) substrates with copper, aluminum, and silver nanoparticlebased ink (Cabot CCI-300) as conducting materials. The tag exhibits flexibility and well optimized while remaining in a compact size. The proposed tag yields 36 bits in a tag dimension of 24.5 25.5 mm. These 36 bits can tag 2 number of objects/items. The ultimate high capacity, compact size, flexible passive chipless RFID tag can be arrayed in various industrial and IoT-based applications.
Introduction
Internet of things (IoT) is a combination of number of smart objects, connected via wired/wireless networks to the internet [1][2][3][4][5].RFID and wireless sensor networks (WSN) are major entities of IoT system.Latest developments in RFID have enabled IoT [6].RFID is an emerging contactless data capturing technology which is widely used for tracking purposes, theft control, health monitoring, food monitoring, luggage tracking, clothing, electronic cards and pollution control, etc., [7].An estimated 75 billion products equipped with RFID tags will be sold yearly till the year 2019 [8].RFID has to upswing for the latest requirement of IoT development and emergence to meet the demands of modern era [9].Limitations of RFID technol-ogy are cost, reliability and recycling aspects [10].The main hindrance in RF identification deployment depends on its cost per tag [11].The emerging aspect of RFID technology and such limitations have motivated the researchers to move towards chipless tags that outperform compared to conventional chip-based tags hence tremendously reducing the cost compared to chip-based tags [10], [12].Chipless RFID involves information coding in the form of electromagnetic signature (EMS) [13].Chipless RFID does not need any communication protocol for identification process [13].The most promising benefit of chipless RFID tag is that they can be printed directly on the products [8].The reliability and versatility of chipless RFID tags can be depicted from the fact that they can replace ten trillion barcodes yearly [8].
Chipless RFID has been an interesting field for researchers because of some challenging features like enhancing the coding capacity, miniaturization, within a suitable frequency band and an enhanced read range [12].A number of papers have appeared addressing various such aspects [12], [14][15][16].One of the major aspects is to enhance data density while maintaining a suitable tag size in a reasonable frequency band [12].Various researchers have addressed such aspect [12], [17][18][19][20][21][22][23][24][25][26][27][28].In [29], a 3.8 bits/cm 2 compact polarization independent, discrete slot ring resonator based chipless RFID tag has been proposed.It gives back-scattered frequency signature in a compact size with enhanced coding capacity.Another spurline resonator based chipless RFID tag in a size of 40 27 mm 2 yielding 8-bit data capacity has been proposed [30].In [27], a low-profile data encoding chipless RFID tag is designed.In this design, the data is encoded as complex natural resonances (CNRs) on the structure within an area of 24 24 mm 2 yielding 24-bit data.Similarly, a compact, flexible 24-bit dual polarized chipless RFID tag in a size of 20.6 19.9 mm 2 is designed [7].It discusses flexibility in very compact size.
In this paper, a novel 36-bit chipless RFID tag is presented.The novelty of the tag relies on its flexibility, compact size and high data capacity of 36 bits, which has not been done so far in such a compact size.Also, the tag is The tag is a resonator based structure that is excited by a linear polarized incident plane wave in a tag dimension of 24.5 25.5 mm 2 .Firstly, copper is used as a radiator for the Taconic TLX-0 substrate to achieve desired RCS response.Then, aluminum and silver nanoparticlebased ink are deployed as a radiator for PET and Kapton ® HN substrates to achieve printability along with flexibility in a reduced tag design.The entire tag yields a data capacity of 36-bit, hence 2 36 number of objects can be tagged.The frequency ranges for Taconic TLX-0 along with copper as the radiator is 5-15.5 GHz, for PET along with aluminum is 5.3-18.2GHz and 4-18.2GHz with silver nanoparticle-based ink.The frequency range for Kapton ® HN along with aluminum as the radiator is 5-17 GHz and 4-17 GHz for silver nanoparticle-based ink as a radiator.
Theory and Fundamental Principle
RFID involves electromagnetic waves to identify a tagged object [13] remotely.The tag is a resonator structure, where each slot corresponds to one dip and each dip corresponds to one bit.Hence 36 slots yield 36 bits data density.36-bit tag is designed using CST STUDIO SUITE ® .The tag is excited by using linearly incident plane wave.The E-field plane wave equation is given as where E is an electric field, w is the angular frequency, t is time, k is wave vector, (x,y,z) is the position vector.
Chipless RFID tags are classified as retransmission based tags and backscattering based tags [31].The main working principle of chipless RFID tags is backscattering.Identification is based on unique frequency signature generation in a desired frequency range that is measured as radar cross-section (RCS) of the tag [12], [32].The radar cross-section (RCS) versus frequency shows the electromagnetic behavior of tags.The RCS is analyzed at far-field distance given by Fraunhoffer distance formula where D is the radiator's largest dimension and λ is the wavelength of radio wave [33].
To measure RCS response of chipless RFID tag, we need two antennas: one for transmitting and the other for receiving.Reader antenna sends an electromagnetic wave (EW) also known as 'interrogator signal' towards the tag [34].The tag then encodes the data information in that signal and sends the 'backscattered signal' containing encoded information towards the reader [35].The back- scattered signal contains unique frequency signature for identification.So, there is no need for any integrated circuit to encode the data.Figure 1 shows the backscattering phenomenon.
The power received from a transmitting antenna by a receiving antenna is given by Friis transmission equation [36] 2 RX TX RX TX 4 where r is the distance between transmitter and receiver, G TX and G RX are the gain of the transmitter and receiver, P TX is power transmitted and P RX is power received.
All the tagged items/objects should lie in the read range/working space of system for proper RFID operation [37].The maximum theoretical read range of chipless RFID system can be calculated from (4) [23], [38] where P TX is transmitting power, G TX is transmitting antenna gain, G RX is receiving antenna gain, λ is the wavelength, P RX is the sensitivity of the receiver and σ min is the most minimum RCS level possible to be detected by the reader.
Proposed Tag Design
The proposed tag is loaded with 36 slots, each of varying length in a tag dimension of 24.5 25.5 mm 2 .Each slot is numbered according to its length.Each slot of different length corresponds to a dip that resonates at a particular frequency.So there are 36 dips corresponding to 36 bits yielding 2 36 number of possible tag ID combinations.The tag is designed in a way that each slot in upper patch corresponds to metal in the lower patch and vice versa.Therefore, the slots are at alternate positions with metal gaps for adjacent patches.Hence, each slot will be of different length, resonating at a different frequency.Ultimately, there will be no mutual coupling and high dense data in a compact size is achieved while fully utilizing the frequency band.The proposed tag design is shown in Fig. 2.
There are five tags that have been designed using Taconic TLX-0, PET, and Kapton ® HN substrates along with copper, aluminum and silver nanoparticle-based ink as radiators.We can analyze that along with changing substrate and radiator; there is variation in tag electrical properties.The tag has been designed and optimized for different substrates, so there is a slight variation in dimensions while optimizing the tag for flexible substrates.The detailed characteristic comparison of all the tags is shown in Tab. 1.It can be observed from Tab. 1 that the tag has been initially designed for the Taconic TLX-0 substrate.Then the tag has further been optimized for PET substrate to achieve flexibility.To meet the modern application requirements there is a trade-off between the bandwidth and tag size/flexibility.Moreover, for efficient band utilization while using flexible substrate, the tag design is optimized for Kapton ® HN.
Results and Discussion
The proposed tag design encodes 36 bits data.The tag has been designed for different substrates of varying electrical properties using different conducting materials.The detailed analysis of tag dimensions for all the designed tags is shown in Tab. 2.
Taconic TLX-0 Substrate
The tag designed using Taconic TLX-0 substrate and copper radiator is referred as 'Tag-1'.The RCS vs. frequency response for Tag-1 is shown in Fig. 3.The electrical permittivity of Taconic TLX-0 is 2.45, deployed using copper as a radiator with a thickness of 0.035 mm.36-bit tag response has been analyzed in the frequency range between 5 GHz and15.5 GHz.
PET Substrate
Tags referred as 'Tag-2,' and 'Tag-3' are designed using PET substrate along with aluminum and silver nanoparticle-based ink as conducting materials, respectively.The electrical permittivity of PET is 2.9.The tag designed using PET as substrate and aluminum as radiator is referred as 'Tag-2'.The RCS vs. frequency response for Tag-2 is shown in Fig. 4. Aluminum used as radiator has thickness of 0.007 mm.The tag yields 36 bits in the frequency range of 5.3-18.2GHz.
The tag represented as 'Tag-3' is designed using PET substrate along with silver nanoparticle-based ink as conducting material.The RCS response for Tag-3 is shown in Fig. 5.The thickness of the silver nanoparticle-based ink is 0.015 mm.The RCS curve for Tag-3 lies in the frequency range of 4-18.2GHz.
Kapton ® HN Substrate
The tags referred as 'Tag-4,' and 'Tag-5' are designed on Kapton ® HN substrate using aluminum and silver nanoparticle-based ink as radiators.
The electrical permittivity of Kapton ® HN is 3.5.Kapton ® HN is deployed for its easy availability and lowcost along with flexibility.By changing the radiator, there is variation in resonances of the tag.'Tag-4' has been designed using aluminum radiator on Kapton ® HN substrate.The RCS vs. frequency response for Tag-4 is shown in Fig. 6.Aluminum in Tag-4 has thickness of 0.007 mm and yields 36-bit data in the frequency range between 5 GHz and 17 GHz.
The tag designed using silver nano ink radiator on Kapton ® HN substrate is referred as 'Tag-5'.The RCS response for Tag-5 along with fabricated design is shown in Fig. 7.
Parameters
Paper 1 [36] Paper 2 [7] Proposed paper To analyze different coded combinations, various tag ID's have been generated, simulated and tested.A comparative analysis of presented full tag with two different tag ID's along with their prototypes is shown in Fig. 10.Tag-A corresponds to all 1's with a tag ID 111111111111111111111111111111111111.For Tag-B, S9, S10 and S14 slots are shorted leading towards a coded combination of ID having '0' bits: 111111111111111111111101110011111111.Again, another data word having '0' bits representing tag ID: 111111111111111111110101111110011111 is presented as Tag-C by shorting S6, S7, S14, and S16.It has been analyzed that the occurrence of 0-bit has a slight effect on the amplitude and resonance frequency of neighboring peaks.
Measured Results
After testing, the results are measured and analyzed.The measured and computed results are shown in Fig. 11.RCS evaluation for each bit at a particular resonance can be calculated theoretically by using (5) where ε r is the relative permittivity of substrate, L is the slot length and c is the speed of light.
The experimental setup of chipless RFID for RCS evaluation includes two horn antennas; one transmitting and the other receiving as shown in Fig. 12.The tag deployed on the item is set at a far-field distance from the antennas.The transmitting antenna bombards an interrogator signal on the tag and receiver antenna then reads its response for identification using Vector Network Analyzer (VNA) R&S ® ZVL13.The tag is printed using a table-top printer available from Dimatix "DMP2800" inkjet printer.
Conclusion
The research has proposed a novel, 36-bit passive chipless RFID resonator based tag.The tag has been designed, printed and tested.The tag consists of slots of dif-
Fig. 12 .
Fig. 12. Experimental set-up.ferentwidths and lengths etched on the radiating patch.The proposed tag is of 24.5 25.5 mm 2 dimensions, designed on Taconic TLX-0, PET, and Kapton ® HN substrates with copper, aluminum, and silver nanoparticlebased ink as radiators.The novelty of the tag relies on its compact size, high-density data and flexibility; deployed all together for green electronics, economic, environmentfriendly and IoT-based applications. | 3,012.8 | 2017-04-14T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Reconstruction And Identification Of Heavy Long-Lived Particles At The ATLAS Detector At The LHC
Long-lived charged particles are predicted by many models of physics beyond the Standard Model (SM). At the LHC, the common signature would be a heavy long-lived charged particle with velocity smaller than the speed of light, β<1. This paper presents methods we developed for identifying slow particles and measuring their mass using the ATLAS muon spectrometer. The efficacy of these methods is demonstrated using two different models.
THE SIGNATURE OF A HEAVY CHARGED PARTICLE IN ATLAS
ATLAS was designed to fully exploit the LHC discovery potential [1,2] but the scenario of slowly moving particles was not considered.With β < 1, a particle may be lost during data collection.We offer methods to identify such particles and measure their mass [3] and demonstrate their efficacy using two different models: Gauge Mediated SUSY Breaking with 102 GeV stau NLSP [4], and split SUSY R-Hadrons (hadronized gluinos) [5] with a mass of 300 GeV.These methods ensure the acceptance and identification of these particles from the data acquisition, through the trigger, and in event reconstruction.
A long-lived charged slepton, chargino or R-Hadron would not lose a lot of energy in the calorimeter and will reach the Muon Spectrometer.R-Hadrons may flip their charge in the calorimeter from being neutral in the inner detector to being charged in the muon spectrometer (or other flips).Thus the least model dependent signature is a charged particle with low β which reaches the muon chambers and the largest background is high p T muons with mis-measured timing.Since ATLAS is so large that information from 3 separate bunch crossings (BC) co-exists at the same time, it is crucial to match correctly event fragments from different sub-detectors [6]: when β is small, the particle will take longer to reach the detectors and hits may not be read-out.In order to find all hits from slow particles ATLAS must collect hits from the consecutive BC.
TRIGGERING ON LONG-LIVED HEAVY CHARGED PARTICLES
ATLAS has a three-level trigger system [1].
-At the first level (L1) [6], a long-lived charged particle is most likely to trigger as a muon.The L1 muon trigger, based on dedicated fast detectors (RPCs and TGCs), identifies the detector region and BC.The BC may be misidentified for a slow particle.
-The second level trigger (L2) [7] analyzes data acquired by L1 and includes several software stages.For muons, the first stage reconstructs the muon p T in the spectrometer.The second matches an inner detector track with the muon spectrometer track and refines the p T estimate.We developed an algorithm to trigger on heavy stable charged particles, by measuring their time of flight and velocity.Our algorithm uses the excellent time resolution (3ns) of the RPC chambers located in the barrel of the muon spectrometer.Our selection requires pT > 40 GeV, β < 0.97 and m > 40 GeV.The efficiency is above 90% for GMSB sleptons and R-Hadron events in the barrel.Figure 1 (left) shows the mass distribution of GMSB signal and background.The p T resolution is improved by using a matching inner detector track, if it exists (right).If the R-Hadron is identified as a slow particle candidate, it is accepted without the matching inner detector track, to prevent the loss of R-Hadrons that are neutral in the inner detector or have inner detector hits in the previous BC.
-The final trigger decision is made in the event filter (EF) [7], which uses algorithms adapted from the offline reconstruction.Below we present a specific reconstruction algorithm which includes identification of charge-flipped (or late) R-Hadrons (no inner detector track).The identification based only on the muon spectrometer is applied in two cases; if the particle was already identified by the L2 stau selection in the barrel, or for high pT muon candidates in the end-cap that do not have an inner detector track.
ATLAS Preliminary
. Mass distribution of signal and background resulting from the L2 selection for an integrated luminosity of 100 pb -1 (left).The shaded area is the signal, the dashed line is the muon background, and the full line is the sum.The shaded area in the plot on the right shows the mass resolution when a matching inner detector track is found.
The efficiency of the standard muon trigger for the two test models is compared to the slow particle trigger in Table 1.The background trigger rate of the slow particle trigger is estimated to be 4.6 mHz at a luminosity of 10 31 cm -2 s -1 .
RECONSTRUCTING HEAVY LONG-LIVED PARTICLES
In ATLAS, muons are reconstructed using the MDTs, precision muon chambers, in combination with the RPC, TGC and CSC sub-detectors [1].Standard muon reconstruction efficiency starts dropping sharply for β < 0.75 and goes to 0 at β = 0.4.This is due to two main issues: the data may not be collected if the particle hits are in the next BC; late arrival of the particle spoils segment fitting in the MDTs.We reconstruct slow particles and estimate their mass with a muon identification package [8] which starts from inner detector tracks and looks for corresponding hits in the muon spectrometer, identifying candidates even when the segment reconstruction is imperfect.It is based on 3 techniques: recovering trigger detector hits from the next BC, estimating the particle velocity from the RPC hit time and by selecting the β that minimizes the MDT segments χ 2 .The algorithm yields a high efficiency for signal events even for low values of β (greater than 90% for β > 0.4). Figure 2 shows the efficiencies of the muon reconstruction (left) and the slow particle reconstruction (right) for heavy long-lived charged particles.Figure 3 shows the reconstructed mass obtained by the reconstruction program for sleptons from GMSB (left) with a mass of 102 GeV and for R-Hadrons (right) with a mass of 300 GeV.The mass resolution can be further improved when the full track is re-fit.The details of particles identified as heavy long-lived particle candidates are stored with the measured β for further analysis.
CONCLUSIONS
Heavy long-lived charged particles can be discovered in ATLAS.However this must be done in the data acquisition, high level trigger and reconstruction stages and it cannot be done with the standard ATLAS tools.We have presented model independent methods to measure their mass in the trigger and reconstruction software.
FIGURE 2 .
FIGURE 2. Reconstruction efficiency as a function of β for two muon reconstruction programs (left)compared with the efficiency of a reconstruction program that also estimates β (right).
FIGURE 3 .
FIGURE 3. Reconstructed mass for sleptons from GMSB (left) with a mass of 102 GeV, and for R-Hadrons (right) with a mass of 300 GeV.
TABLE 1 .
Slow particle trigger efficiencies for heavy stable charged particles, with respect to L1.The numbers in parentheses refer to efficiencies for the standard muon trigger. | 1,582 | 2010-03-05T00:00:00.000 | [
"Physics"
] |
Effects of Thermal Treatment on Mineral Composition and Pore Structure of Coal
With the increasing depth of coalbed methane (CBM) exploitation, temperature becomes the main factor affecting the efficiency of CBM exploitation. The change of temperature has significant influence on the mineral composition and pore structure of coal. To study the effects of thermal treatment on mineral composition and pore structure of coal, X-ray diffraction (XRD) test, scanning electron microscopy (SEM) test, and mercury intrusion test were carried out for three groups of coal. The mineral composition and pore structure of coal specimens after thermal treatment (25, 50, 75, and 100°C) were analyzed. The results show that the main mineral compositions of three groups of coal specimens after different temperatures are basically unchanged, and the maximum diffracted intensity after different temperature treatments decreases first and then increases with the increasing temperature. The count of fissures decreases first and then increases with temperature, and the count of pores increases first and then decreases with the increasing temperature. The velocity of mercury injection in high pressure (100~400MPa) of coal specimens increases first and then decreases with temperature. The porosity, pore area, median pore diameter, and average pore diameter increase with the increasing temperatures. The volume of microfracture decreases, then increases, and finally decreases. The volume of macropore and mesopore increases slowly, and that of transition pore decreases slowly with the increasing temperature. Meanwhile, the volume of micropore increases first and then decreases during the process of thermal treatment. The fractal dimension of pore size ranges from 2.6 to 2.9 and increases linearly with the increasing temperature.
Introduction
With the depletion of shallow coal resources, more and more coal mines are entering the deep mining stages. The high ground stress, high temperature, and high water pressure restrict the safe and efficient mining of deep coal resources [1][2][3]. Coalbed methane (CBM) is a form of low-carbon clean energy, which is important for optimizing energy production and achieving carbon neutrality goals [4][5][6]. The temperature has significant influence on permeability and porosity of coal, which is important for the efficiency of CBM exploitation [7][8][9]. Therefore, it is urgent to study the evolution of mineral composition and microstructure of coal after different temperature treatments.
Previous studies have proved that rocks and rock-like materials have significant thermal effects, and the physical and mechanical properties change significantly after thermal treatment. The physical and mechanical properties of granite after temperature treatments have been a research hotspot for underground nuclear waste disposal, and many scholars have analyzed its physical and mechanical properties, such as fracture toughness [10], rockburst proneness [11], uniaxial compressive strength [12,13], elastic modulus [13], longitudinal wave velocity [13], mechanical behavior [14,15], tensile strength [12,16,17], acoustic emission characteristics [18,19], mineral composition [20], pore structure [16,20,21], and permeability [22]. The physical and mechanical properties of sandstone after temperature treatments have also attracted a lot of attention; the properties such as thermal cracking process [23], peak strength [24], mechanical behavior in unloading conditions [25], wave velocity [26,27], porosity [27], triaxial mechanical behavior [28], permeability behavior [28], microstructure [26], pore characteristics [29,30], elastic modulus [29], tensile strength [31], energy evolution [32], microstructure deterioration [33], and morphological properties [34] have been investigated in depth. The physical property and tensile strength of shale have been also analyzed [35,36]. Peng et al. and Rong et al. analyzed the physical and mechanical behaviors of thermal-damaged marble [37,38]. The microstructure characteristic, mechanical behaviors, pore distribution, and AE characteristic of limestone have been investigated [39][40][41]. Yavuz et al. [42] investigated the changes of physical properties of five carbonate rocks (two marbles and three limestones) after different heating temperatures. The unconfined compressive strength and elastic moduli of gabbro after thermal loading have been studied by Keshavarz et al. [43]. Brotóns et al. [44] investigated the effect of thermal treatment on physical and mechanical properties of calcarenite. Ugur et al. [45] studied the changes in porosity features of natural stones after thermal treatment. The effect of thermal treatment on petrographic and mineralogical composition of coal mining wastes was analyzed by Nowak [46]. Tian et al. [47] investigated the changes of physical and mechanical behavior of claystone due to the thermal treatment, such as uniaxial compressive strength, triaxial compressive strength, and density. The grain size distribution and mineral composition of flux calcined porcelanites after thermal treatment have been studied by Saidi et al. [48]. Ersoy et al. [49] analyzed the mineralogical and geomechanical properties of volcanic rocks subjected to high temperatures. The effect of temperature on pore structure and mechanical properties of shotcrete was studied by Liu et al. [50]. Miao et al. investigated the evolution of coal pore-fracture during the thermal damage process [51]. The above studies mainly focus on physical and mechanical properties of dense hard rocks (granite, sandstone, marble, shale, and gabbro). The main reason is that these dense rock strata are often at high temperatures when storing nuclear waste. Meanwhile, the high temperature in those studies generally exceeds 500°C. However, the geothermal temperature increases by 25~30°C with an increase of 1000 m in mining depth, and the geothermal temperature is generally below 100°C in deep coal mining. Therefore, the temperature of thermal treatment ranges from 25 to 100°C in this study.
Coal, as an anisotropic medium, is highly sensitive to temperature. The structure and mechanical properties of coal will change significantly when the temperature changes, thus affecting the permeability and porosity of coal. To analyze the changes of physical and mechanical properties from a microscopic point of view, the X-ray diffraction (XRD) tests, mercury intrusion tests, and scanning electron microscopy (SEM) tests were carried out for three groups of coal after thermal treatment at different temperatures (25,50,75, and 100°C). The mineral composition and microstructure of coal specimens after thermal treatment were analyzed, and velocity of mercury injection, pore parameters, and distribution of pore size for three groups of coal were investigated based on the mercury intrusion tests. The result is significance for CBM exploitation and gas extraction.
Experimental Equipment and Process.
Before the test, the three groups of coal lumps were heated to target temperature (25,50,75, and 100°C) in the drying oven and kept at a target temperature for 24 hours, and then, the coal lumps were sealed and cooled naturally to room temperature. The coal lumps after thermal treatments were prepared for the XRD, SEM, and mercury intrusion tests to analyze the variation of mineral composition, microstructure, and pore structure. The XRD test was performed on D8 ADVANCE X-ray Diffractometer (BRUKER Corporation, Germany). The radius of goniometer is 250 mm, divergence slit is 0.6 mm, and antiscatter silt is 8 mm. The coal powder was milled and sieved to pass through a 325 mesh sieve, and the mass of each group of coal powder was not less than 0.5 g. The coal powder was poured into the center of a clean sample tray and then capped with a clean glass sheet to flatten the surface of the coal powder. Then, the mineral composition can be measured. The next coal powder is measured after the coal powder measurement completed.
The mercury intrusion test was conducted on Autopore IV 9510 Automatic Mercury Porosimeter. The working pressure ranges from 0 to 60000 psi (414 MPa), and the measurement range of pore size is 0.003~1000 μm. According to the previous research [52], the contact angle was set to 130°, and the mercury (Hg) surface tension was set to 0.485 N/m. The mercury intrusion tests were carried out on three groups of coal specimens, and the related parameters such as pore size distribution, total pore volume, and total pore area can be obtained after the test.
The specimens were scanned by Quanta 250™ SEM scanner (FEI, USA). The electron beam voltage ranges from 200 V to 30 kV, and the range of magnification is 6~1000000. Before the test, the sample is first placed in the sample bin and then vacuumed. The region of interest (ROI) was magnified by 400, 1000, 3000, 10000, and 20000 times, respectively, and the SEM images were stored simultaneously.
Results and Discussions
3.1. Mineral Composition after Thermal Treatment. XRD tests are commonly used to investigate the mineral composition of coal and rock materials [20,39,46,53,54]. Therefore, XRD tests were carried out on coal powder after different temperature treatments (25, 50, 75, and 100°C). As shown in Figure 2, the main mineral composition of three groups of coal 5 Geofluids specimens is basically unchanged; namely, no chemical changes occur after the thermal treatment. The result is consistent with that of granite [20]. As shown in Figure 2(a), the main mineral compositions of JJ coal specimens are kaolinite, calcium phosphate, and clinochlore. The maximum diffracted intensity is 1506 at 25°C, decreases at 50°C, and then increases to the peak of 1605 at 100°C. As shown in Figure 2(b), the mineral compositions are kaolinite, quartz, and clairite for CZ coal specimens. The maximum diffraction intensity is 1575 at 25°C, decreases to 1250 at 75°C, and then increases to 1323 at 100°C. As shown in Figure 2(c), the mineral compositions of PDS coal specimens are kaolinite, clairite, and montmorillonite. The maximum diffraction intensity is 1601 at 25°C, decreases to 1264 at 75°C, and then increases to 1492 at 100°C. Thus, the maximum diffracted intensity of three groups of coal specimens after different temperature treatments decreases first and then increases with the increasing temperature. The phenomena are mainly caused by thermal treatment and the anisotropy of coal specimens. In summary, there is no change in the main mineral compositions, and the maximum diffracted intensity changes a little after heat treatment. The maximum diffracted intensity after different temperature treatments decreases first and then increases with the increasing temperature.
3.2. Microstructure after Thermal Treatment. SEM tests are commonly used to investigate the microstructure of cross-sections of coal and rock materials [26,33,34,39,41,51,55]. Therefore, SEM tests were carried out on coal samples after different temperature treatments (25,50,75, and 100°C). The regions of interest (ROI) of coal specimens were magnified by 400, 1000, 3000, 10000, and 20000 times, respectively. The SEM images of three groups of coal specimens after different temperature treatments are shown in Figures 3-5, which were magnified by 1000 times. As shown in Figure 3, the fissure width is large at the temperature of 25°C, and there are no obvious fissures at the temperature of 100°C. Thus, the fissure width decreases with the increasing temperature. Meanwhile, there are a few pores generated after thermal treatment. Similarly, it can be seen from Figure 4 that the pores grow with the rise of temperature of thermal treatment, and the fissure width decreases with the increasing temperature. As shown in Figure 5, the fissure width decreases first and then increases with the rise of temperature of thermal treatment, and there are many pores generated after thermal treatment. Thus, thermal treatment has a significant effect on microstructure of coal specimens, which is related to the permeability. In summary, the count of fissures decreases first and then increases with the increasing temperature, which is mainly caused by thermal expansion and thermal cracking of coal matrix. Meanwhile, the count of pores increases first and then decreases with the increasing temperature of thermal treatment, which is mainly due to the thermal shrinkage and thermal expansion of coal matrix. [20,50,51,[56][57][58]. To obtain the pore and fissure characteristics of coal specimens after thermal treatment, mercury intrusion tests were carried out on coal samples after different temperature treatment (25, 50, 75, and 100°C). As shown in Figure 6, the mercury injection curve is S-shape, which can be divided into three stages, namely, initial injection stage, slow injection stage, and rapid injection stage. The cumulative pore volume increases fast at the initial injection stage, increases slowly during the slow injection stage, and increases rapidly at the rapid injection stage. There are significant differences in cumulative pore volume for different coal specimens. The cumulative pore volume is 0.03 mL/g in PDS coal specimen, and the cumulative pore volume is 0.12 mL/g in JJ coal specimen, which is 4 times higher than that in PDS coal specimen. The main rea-son is the differences in microstructure of coal specimens. Similarly, there also exist some differences in mercury injection curve of coal specimens after different temperature treatment. The cumulative pore volume increases first and then decreases with the increasing temperature for CZ coal specimen. For JJ coal specimen, the cumulative pore volume increases first, then decreases, and finally increases with the increasing temperature. The cumulative pore volume decreases first and then increases with the increasing temperature for PDS coal specimen. These phenomena indicate that the mineral composition and pore distribution have a significant effect on the mercury injection curve.
To analyze the velocity of mercury injection of coal specimens after different temperatures, the velocity in high pressure (100~400 MPa) is calculated using the least squares method. As shown in Figure 7, the fitted curves fit well for those data. The velocity of coal specimens after different 7 Geofluids temperature treatment is shown in Figure 8. The velocity of JJ coal specimen is higher than 1:4 × 10 −4 mL/(g·MPa), and that of CZ and PDS coal samples ranges from 4 × 10 −5 to 6 × 10 −5 mL/(g·MPa), which is mainly caused by the differences in microstructure of coal specimens. Meanwhile, the velocity of JJ and CZ coal specimen increases first and then decreases with the increasing temperature. However, the velocity of PDS coal specimen decreases first and then increases with the rise of temperature. These phenomena are mainly caused by the mineral composition and pore structure.
The pore distribution of coal specimens can be measured by the mercury intrusion tests, and the relationship between the pore size and the applied pressure can be expressed as follows [52]: where pðrÞ is the applied pressure, r is the radius of pore, θ is the contact angle (130°), and γ is the Hg surface tension (0.485 N/m).
Geofluids
The pore parameters of the three groups of coal specimens after different temperature treatments are listed in Table 1. The porosity, pore area, median pore diameter, and average pore diameter for JJ coal specimens are higher than those for CZ and PDS coal specimens. However, the bulk density of JJ coal specimens is lower than that of CZ and PDS coal specimens. In general, the porosity, pore area, median pore diameter, and average pore diameter increase with the increasing temperatures. However, there are some exceptions. These phenomena are mainly caused by the mineral composition and pore structure. Figure 9 shows the relationship between pore volume and pore diameter in three groups of coal specimens after different temperature treatments, and the pore diameter ranges from 3 nm to 150 μm. Previous studies show that the pore can be classified as microfracture (10~150 μm), macropore (1~10 μm), mesopore (0.1~1 μm), transition pore (10~100 nm), and micropore (3~10 nm) [52]. As shown in Figure 9, the pore volume of micropore and microfracture increases first and then decreases with the increasing temperature of thermal treatment, and the pore volume of macropore, mesopore, and transition pore changes a little with the increasing temperature. Those phenomena indicate that the micropore and microfracture are strongly influenced by the temperature. However, there exist significant differences among these three coal specimens. The pore diameter of JJ coal specimen is mainly concentrated on 3~100 nm, which mainly consisted of micropores and transition pores. On the contrary, the pore diameter is concentrated on 50~150 μm (microfracture) for CZ and PDS coal specimens.
To quantitatively analyze the pore distribution characteristics after different temperature treatments, the pore volume 10 Geofluids with different diameters after the thermal treatments is calculated. As shown in Figure 10, the volume of microfracture decreases at 50°C, which is mainly caused by thermal expansion of coal matrix; it increases at 75°C due to thermal shrinkage of coal matrix; it decreases at 100°C due to thermal expansion. The volume of macropore and mesopore increases slowly, and that of transition pore decreases slowly with the increasing temperature. Meanwhile, the volume of micropore increases first and then decreases after the thermal treatment. These phenomena indicate that the micropore and microfracture are strongly influenced by the temperature. However, there are some exceptions due to the differences in mineral composition and pore structure. Previous studies have demonstrated that the pore size distribution of coal and rock conforms to fractal characteristics [51,59,60]. Therefore, the effect of temperature on the pore structure of coal after different temperature treatments can be quantitatively analyzed based on fractal theory. According to the methods mentioned in previous studies [51,59,60], where V is the pore volume, P is the applied pressure, k is the linear slope, and b is the constant. And the fractal dimension (D) of pore size can be obtained: As shown in Figure 11, the fractal dimension ranges from 2.6 to 2.9 and increases with the increasing temperature, which indicates that the pore surfaces is less and less smooth as the temperature rises. To analyze the evolution of fractal dimension with temperature quantitatively, the least squares method was used to fit the above data. As shown in Figure 11, the fractal dimension is nearly linearly related to the temperature. These phenomena indicate the pore surfaces of coal become rougher as the temperature rises.
In summary, thermal treatment has significant effects on mineral composition, microstructure, and pore structure of coal. The main mineral compositions after thermal treatment are basically unchanged, and the maximum diffracted intensity decreases first and then increases with the increasing temperature. The count of fissures decreases first and then increases, and the count of pores increases first and then decreases with the increasing temperature. The porosity, pore area, median pore diameter, and average pore diameter increases with the increasing temperatures. The fractal dimension of pore size ranges from 2.6 to 2.9 and increases linearly with the increasing temperature. The pore structure varies with mineral composition and microstructure due to the thermal shrinkage and thermal expansion of coal matrix. Meanwhile, the macroscopic mechanical properties are related to mineral composition, microstructure, and pore structure.
Conclusions
The X-ray Diffraction (XRD) test, scanning electron microscopy (SEM) test, and mercury intrusion test were carried out for three groups of coal, and the mineral composition and pore structure of coal specimens after thermal treatment (25,50,75, and 100°C) were analyzed. The main conclusions are as follows: (1) The main mineral compositions of three groups of coal specimens after thermal treatment are basically unchanged, and the maximum diffracted intensity 11 Geofluids changes a little after heat treatment. The maximum diffracted intensity after different temperature treatments decreases first and then increases with the increasing temperature (2) The count of fissures decreases first and then increases with the increasing temperature, and the count of pores increases first and then decreases with the increasing temperature. Thermal treatment has a significant effect on microstructure of coal specimens (3) The velocity of mercury injection in high pressure (100~400 MPa) of JJ and CZ coal specimens increases first and then decreases with the increasing temperature, but that of PDS coal specimen decreases first and then increases with the increasing temperature. The porosity, pore area, median pore diameter, and average pore diameter increase with the increasing temperatures (4) The volume of microfracture decreases, then increases, and finally decreases. The volume of macropore and mesopore increases slowly, and that of transition pore decreases slowly with the increasing temperature. Meanwhile, the volume of micropore increases first and then decreases during the process of thermal treatment. The fractal dimension of pore size ranges from 2.6 to 2.9 and increases with the increasing temperature. The fractal dimension is nearly linearly related to the temperature
Data Availability
The experimental data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare no conflict of interest. | 4,597.6 | 2021-01-01T00:00:00.000 | [
"Environmental Science",
"Materials Science"
] |
Spatially-segmented undersampled MRI temperature reconstruction for transcranial MR-guided focused ultrasound
Background Volumetric thermometry with fine spatiotemporal resolution is desirable to monitor MR-guided focused ultrasound (MRgFUS) procedures in the brain, but requires some form of accelerated imaging. Accelerated MR temperature imaging methods have been developed that undersample k-space and leverage signal correlations over time to suppress the resulting undersampling artifacts. However, in transcranial MRgFUS treatments, the water bath surrounding the skull creates signal variations that do not follow those correlations, leading to temperature errors in the brain due to signal aliasing. Methods To eliminate temperature errors due to the water bath, a spatially-segmented iterative reconstruction method was developed. The method fits a k-space hybrid signal model to reconstruct temperature changes in the brain, and a conventional MR signal model in the water bath. It was evaluated using single-channel 2DFT Cartesian, golden angle radial, and spiral data from gel phantom heating, and in vivo 8-channel 2DFT data from a FUS thalamotomy. Water bath signal intensity in phantom heating images was scaled between 0-100% to investigate its effect on temperature error. Temperature reconstructions of retrospectively undersampled data were performed using the spatially-segmented method, and compared to conventional whole-image k-space hybrid (phantom) and SENSE (in vivo) reconstructions. Results At 100% water bath signal intensity, 3 ×-undersampled spatially-segmented temperature reconstruction error was nearly 5-fold lower than the whole-image k-space hybrid method. Temperature root-mean square error in the hot spot was reduced on average by 27 × (2DFT), 5 × (radial), and 12 × (spiral) using the proposed method. It reduced in vivo error 2 × in the brain for all acceleration factors, and between 2 × and 3 × in the temperature hot spot for 2-4 × undersampling compared to SENSE. Conclusions Separate reconstruction of brain and water bath signals enables accelerated MR temperature imaging during MRgFUS procedures with low errors due to undersampling using Cartesian and non-Cartesian trajectories. The spatially-segmented method benefits from multiple coils, and reconstructs temperature with lower error compared to measurements from SENSE-reconstructed images. The acceleration can be applied to increase volumetric coverage and spatiotemporal resolution.
Background
Over the last ten years, MR-guided focused ultrasound (MRgFUS) has emerged as a promising treatment modality for several neurological conditions. Targeted thermal heating delivered by MRgFUS is being used to treat conditions such as essential tremor [1][2][3], chronic neuropathic pain [4], Parkinson's disease [5], obsessive compulsive disorder [6], and brain tumors [7,8]. In cases targeting subcortical areas near the center of the brain, the potential benefits of MRgFUS therapy are promising. With no incisions, the risk of damage to surrounding brain structures and cortical tissue is dramatically lower than with invasive procedures. For this reason, MRgFUS may be the only treatment option in otherwise inoperable situations [7,9].
Current clinical transcranial MRgFUS systems comprise a hemispheric 1024-element ultrasound phased array transducer with 30 cm diameter (Insightec ExAblate Neuro 4000; Insightec Ltd, Haifa, Israel). The patient's head is positioned in the device and immobilized by a stereotactic frame. Degassed water fills the space between the transducer and the head, and is contained by a rubber membrane that allows direct contact between the water and scalp [9]. Figure 1a illustrates a cross-sectional view of the transducer and water bath positioned around the patient's head. The water bath couples ultrasound energy between the transducer and the body, and is chilled to 15-20°C and circulated after each sonication to dissipate heat from the head. Although active water circulation is performed between imaging sequences, residual circulatory flow and acoustic streaming effects during sonication cause motion of the water bath during imaging. This intrascan motion of the water bath results in artifacts with a ripple-like appearance that alias into the MR images and temperature maps.
The current clinical temperature monitoring protocol for transcranial MRgFUS dynamically images a single 2D slice. Increased spatial coverage is needed to enable monitoring of off-target heating and to evaluate new treatment targets [10,11], but this will require some form of accelerated temperature imaging to acquire more data without compromising frame rate. Accelerated temperature imaging could also reduce temperature errors due to intra-scan water motion artifacts. However, conventional MRI scan acceleration approaches such as parallel imaging [12,13] and simultaneous multi-slice imaging [14][15][16] require a dense array of receive coils to be placed near the head, so they are of limited utility in MRgFUS applications because coil placement is restricted by the transducer. As will be shown here, the sensitivity profiles of coils placed outside the transducer (far away from the head), are not sufficiently distinct in the brain to provide artifact-free images and temperature maps at useful scan acceleration factors using conventional parallel imaging reconstruction. Specialized coils that can be integrated with the transducer are in early stages of development, and may offer modest parallel imaging acceleration [17][18][19]. However, integrated coils are not yet widely available, and may still benefit from combination with other accelerated imaging approaches such as the one described here.
Multiple groups have developed accelerated temperature mapping methods from undersampled k-space data that exploit temporal correlations between and among baseline (pre-treatment) and dynamic (during treatment) images to suppress undersampling artifacts [20][21][22]. However, adaptations of these methods for brain applications [10,11] could be affected by signal variations that are not accounted for in signal models, and are not captured in baseline images due to their random dynamic behavior (Fig. 1b). This breaks temporal correlation assumptions between images collected during a single focused ultrasound sonication, and results in temperature map artifacts.
We present a spatially-segmented approach for reconstructing temperature maps in brain MRgFUS, in which we separately estimate a water bath image without a baseline, and a temperature map in the brain using the k-space hybrid method with a baseline (Fig. 1c). We compare the approach with temperature maps calculated by the conventional whole-image k-space hybrid method and (when multiple receive coils were used) by phase difference after SENSE image reconstruction [12,22]. Gel phantom heating data are evaluated using Cartesian and non-Cartesian k-space sampling. We also investigate the effect of reducing the water bath signal intensity on the temperature reconstruction performed with and without spatial segmentation.
Signal model
The spatially-segmented thermometry algorithm reconstructs both a brain temperature map and a water bath image. Inside the brain, the k-space hybrid model is applied which incorporates baseline image data, while no image model is applied in the bath. Given a set of in-brain image voxels B, the signal is modeled as: where y i is a complex-valued k-space data sample acquired at location k i , the x j = (x j , y j , z j ) are spatial locations, the f j f ( x j ) are samples of the phase drift-compensated inbrain baseline image, thef j f ( x j ) are samples of the current bath image, the α j α( x j ) are samples of a heatinduced image magnitude attenuation map, the θ j θ( x j ) are samples of the heat-induced phase shift map, ı = √ −1, and the ε i are i.i.d. complex Gaussian noise samples [22,23]. Here, θ j = α j = 0 is assumed for j / ∈ B. The water bath signal exhibits random dynamic changes during sonication (arrow indicates sonication target in gel phantom). c The proposed spatially-segmented reconstruction model for undersampled brain MRgFUS, which separately estimates a water bath image without a baseline, and a temperature map in the brain with a baseline. In the proposed method, undersampled dynamic data are reconstructed using the k-space hybrid method in the brain and a conjugate gradient (CG) method in the water bath The hybrid referenceless and multibaseline thermometry model [24] is enforced in the brain, as: where N b is the number of complex baseline brain images b reconstructed from fully-sampled k-space data acquired prior to treatment, the w l are baseline image weights, A is a matrix of polynomial basis functions, and c is a vector of polynomial coefficients. Using a weighted combination of baseline images allows robust thermometry over a range of tissue positions and has previously been shown to benefit brain thermometry for MRgFUS [25], though a single baseline will be used in the present work. Background phase drift is modeled by the low-order polynomial basis functions in the matrix A, to account for spatially-smooth dynamic changes in the magnetic field, such as result from B 0 field drift and respiration. Incorporating this model for the baseline image, Eq. 1 can be written as: Figure 1c illustrates the overall undersampled dynamic image model. Brain voxels are defined using a userdefined region of interest (ROI) mask. As the patient is immobilized in the scanner, preventing translational motion during treatment, the ROI can be defined once for each treatment session.
Problem formulation
The signal model (Eq. 3) is fit to acquired k-space data contained in a vectorỹ by solving the following optimization problem: where · 1 is the 1 norm, λ is an 1 regularization parameter that controls the sparsity of α and θ , and R(·) is a second-order finite differencing spatial roughness penalty with regularization parameters β and η that control the smoothness of α, θ , andf [22,26]. This problem is solved by alternately updating the water bath image, and the brain image model parameters, as described next.
Algorithm
The following alternating minimization algorithm is used to solve the problem in Eq. 4, given initial estimates of w, c, α, θ , andf : Update the water bath imagef , by solving: where N k is the total number of k-space samples, andỹ ¬B is the residual k-space signal due to the water bath: This is solved using a conjugate gradient (CG) (single receive coil) or CG-SENSE (multiple receive coils) algorithm [26][27][28]. Upon updatingf , the residual k-space signalỹ B due to the brain is updated: and used in the subsequent brain model parameter updates.
3:
Update w by solving the quadratic programming problem: where G is a discrete Fourier Transform (FT) matrix and B is a matrix whose columns are the baseline images b [22]. 4: Update α and θ by solving the constrained minimization problem: using a nonlinear conjugate gradient (NLCG) algorithm as described in Ref. [22].
5:
Update c using an NLCG algorithm that is similar to the α and θ updates, but incorporates the basis matrix A and applies no sparsity regularization or sign constraints. This is further described in Ref. [22]. 6: until Stopping criterion met. 7: To eliminate temperature bias due to the 1 norm, steps 1-6 are repeated with λ = 0, and α and θ are only updated in voxel locations j in which θ j is more negative than a threshold value after Step 6.
Algorithm implementation
All reconstructions and evaluations were performed in MATLAB R2015a (Mathworks, Natick, MA) on a workstation with dual 6-core 2.8 GHz X5660 Intel Xeon CPUs (Intel Corporation, Santa Clara, CA) and 96 GB of RAM. Nonuniform fast Fourier transforms were used for reconstructions from non-Cartesian k-space trajectories [28].
No parallelization was used beyond intrinsically multithreaded MATLAB functions. Initial values for c, α, θ , andf were set to zero. The initial baseline image weights, w, were then determined according to Eqs. 7 and 8. The algorithm stopping criterion was a relative change in the objective function of less than 0.1% between consecutive iterations. Estimates of image magnitude attenuation and temperature shift were corrected for bias due to the 1 norm in voxels where θ was more negative than −0.01 radians, as described in step 7 of the algorithm. The backtracking line search used in the NLCG algorithm to update α and θ , described in Ref [22], exited when the relative change in the objective function was less than 0.1% and 0.001% between consecutive iterations for phantom and in vivo datasets, respectively.
Effect of water bath signal level
The signal intensity of the image in the water bath was manually scaled to 0, 25, 50, 75, and 100% of its original value, prior to synthesizing the sampled k-space data, to evaluate the effect of its presence in whole-image and spatially-segmented reconstruction approaches. Temperature reconstructions were performed for 3× undersampled 2DFT data as described below.
MRgFUS thalamotomy
Imaging A patient received MRgFUS thermal ablation treatment at 3T (Signa Excite, GE Healthcare, Milwaukee, WI; ExAblate Neuro, Insightec Ltd., Haifa, Israel) as part of a chronic neuropathic pain treatment study at the University Hospital of Zurich. Full informed written consent was obtained prior to the treatment. 2DFT gradient echo images were collected with an 8-channel receive coil that wrapped around the outside of the transducer (RAPID Biomedical, Rimpar, Germany), a 13 ms TE, 28 ms TR, 28×28×0.3 cm 3 field of view, 256×128 acquisition matrix, and 30 • flip angle.
Undersampled temperature reconstruction Images and maps were reconstructed to a 128×128 matrix and retrospectively undersampled by 2, 3, and 4× (64, 42, and 32 lines), with full sampling over 32 (2×) and 17 (3 and 4×) central k-space lines. SENSE coil sensitivity maps were estimated by reconstructing the average k-space data across dynamics and dividing by the sum-of-squares image [12]. Figure 2b shows k-space sampling and SENSE image reconstructions of the undersampled data for each acceleration factor. Temperature maps were calculated by phase difference between baseline and dynamic SENSE-reconstructed images ("SENSE everywhere"), and using the segmented method with CG-SENSE reconstruction of the water bath image ("spatially-segmented"). Temperature maps derived from the SENSE-reconstructed images incorporated the background phase drift correction estimated by the spatially-segmented k-space brain model. k-Space hybrid regularization parameters and iterations per CG-SENSE bath image update were: λ = 10 −6.265 , β = 10 −20 , and n = 2. An ROI mask of the brain mask was selected from a fully-sampled baseline image. A 4×4-voxel square ROI centered on the temperature hot spot was defined to calculate mean and peak temperature changes. RMSE was calculated as for the phantom data. Figure 3 shows temperature maps and errors at peak heat for images in which the water bath was scaled between 0 and 100% of its true value. With full data sampling, the water bath signal level does not affect the temperature map error. However, as the water bath signal increases to its true value, errors arise in temperature maps estimated from undersampled data without spatial segmentation. With zero signal in the water bath, temperature estimates from k-space everywhere and spatially-segmented methods are similar. Temperature maps reconstructed using the spatiallysegmented algorithm are also similar in appearance and RMSE across the range of water bath image scaling levels. The RMSE in the brain/hot spot was improved by factors of 1.06/0.89 (0% scaling), 1.75/2.97 (25% scaling), 2.81/3.81 (50% scaling), 3.47/4.16 (75% scaling), and Undersampled temperature reconstructions Temperature maps reconstructed from undersampled data during heating (dynamic 5), peak heat (dynamic 8), and cooling (dynamics 10 and 15) are displayed in Fig. 4. Across all dynamics of the 3× undersampled 2DFT data, temperature maps have high error when reconstructed using the k-space everywhere method. With GA radial sampling, 3× undersampled k-space everywhere reconstructions have much lower in-brain artifact, although errors are observed near the periphery of the brain. Compared to 2DFT, 2× undersampled spiral k-space everywhere reconstructions have lower in-brain artifact, though errors are present throughout the brain that are similar in appearance to the CG reconstruction errors in the magnitude image (Fig. 2a). All spatially-segmented reconstructions have low temperature error in the brain. Figure 5 shows mean and peak temperature change in the hot spot for fully sampled and undersampled reconstructions for each trajectory. Mean and peak temperature change estimates contain errors using the k-space everywhere reconstruction, even with no undersampling, since unaccounted for signal differences in the water bath between baseline and dynamic images create errors in fitting the temperature change model to the data. As acceleration increases, 2DFT estimates of peak temperature change are slightly overestimated during cooldown dynamics, and GA radial and spiral estimates of peak heat are slightly dampened using the proposed method. In all cases, spatially-segmented reconstructions tracked the average temperature change in the hot spot within 0.24°C. Spatially-segmented 2DFT reconstructions tracked the peak temperature rise within 0.89°C at all factors; GA radial and spiral reconstructions tracked within 0.94°C for factors up to 4× (GA radial) and 2.4× (spiral). Fig. 4 Phantom heating results. Reconstructed temperature changes in the brain phantom with 3× undersampled 2DFT and GA radial, and 2× undersampled spiral trajectories RMSE is lower for spatially-segmented reconstructions compared to k-space everywhere across all image dynamics in both the brain and hot spot (Fig. 6). On average, RMSE in the brain/hot spot was reduced by factors of: 5.96/26.77 (2DFT), 2.20/4.91 (GA radial), and 5.65/12.00 (spiral).
Figures 7-8 show reconstruction results from the in vivo
MRgFUS thermal ablation treatment. At 2×, temperature estimates from SENSE reconstructed images are similar to fully sampled maps in the hot spot, but contain large errors within the brain. SENSE-reconstructed images contain significant aliasing artifacts (Fig. 2b) that degrade temperature map accuracy in the brain. At 3× and 4×, increased phase artifacts obscure the hot spot and cause higher temperature error across image dynamics. Artifacts are lower in all the spatially-segmented temperature maps. At all factors, the spatially-segmented reconstruction tracked the average temperature change within 1.53°C, and tracked the peak temperature rise within 3.38°C up to 3×, reflecting slightly higher temperature error at dynamic 3. Excluding dynamic 3, the peak temperature estimate was within 1.52°C up to 3×. RMSE in the brain/hot spot is reduced using the proposed method compared to SENSE by factors of 1.
Summary of main results
Unpredictable water bath motion during brain MRg-FUS confounds model-based approaches to accelerated MR temperature mapping, resulting in large temperature artifacts due to aliasing of water bath signals into the brain. The proposed spatially-segmented reconstruction approach was demonstrated to reduce error in undersampled temperature reconstructions of a gel-filled human skull ablation with a single receive coil, which is the most common coil configuration currently, and an in vivo thermal ablation with 8 receive coils, which may become the most common coil configuration in the near future. Rather than relying on previously acquired baseline images, separately reconstructing the image in the water bath at each treatment dynamic better characterizes its signal and results in lower temperature error in the brain.
Phantom heating experiments demonstrated errors in undersampled temperature reconstruction when using the baseline water bath image as a reference for the treatment image, even when water bath signal intensity was reduced to 25% of its actual value. However, Fig. 7 In vivo MRgFUS treatment results. Reconstructed temperature change maps in the brain across dynamics from fully sampled and 2-4× undersampled 2DFT data reconstructing the water bath image at each dynamic and applying a model-based temperature reconstruction in the brain resulted in undersampled temperature maps with low error using a single receive coil with 2DFT, golden angle radial, and variable density spiral k-space sampling trajectories, regardless of the water signal strength. This indicates that the proposed method could be of value, even if water is doped to reduce its signal.
In vivo data demonstrated the spatially-segmented reconstruction approach achieved low temperature error compared with temperature maps calculated from images reconstructed with SENSE. Magnitude images estimated by the k-space hybrid method in the brain (derived from the input baseline images and corrected for phase drifts) and CG in the water bath also had lower error than SENSE reconstructions of the dynamic images (results not shown). Overall, the segmented method is complementary to parallel reception, since parallel imaging Fig. 7) (b) Root-mean-square error in the brain and hot spot for accelerated temperature reconstructions reconstruction will perform better in the water bath where the multiple coils have more distinct sensitivities, but less well in the brain where the sensitivities are similar and do not provide distinct encoding. By using prior baseline information in the temperature model, the segmented method achieves good reconstructions in the brain.
Reconstruction of the image in the water bath
Early attempts to incorporate compressed sensing using standard wavelet 1 penalties did not significantly improve temperature reconstruction results. However, it is possible that using better sparsifying transforms that are tailored to the water bath could enable the use of a compressed sensing reconstruction in the water bath. Improved water bath image reconstruction could potentially reduce computation time, by reducing the number of iterations required in the reconstruction.
Modifications to reduce MR signal intensity in the water bath
A possible solution to reduce temperature error associated with the water bath is to alter the water to have low MR signal intensity. An acceptable contrast agent would need to be both biologically safe and acoustically transparent. Although deuterated water ( 2 H 2 O, or D 2 O) has low MR signal, it has been shown to have negative effects on cell function and structure, suggesting that dosage and safety effects would need to be investigated before adopting a D 2 O solution in the bath [30][31][32].
Gadolinium (Gd) could be added to the water to decrease its T 1 relaxation time. While Gd is also toxic, chelated forms such as Gd-diethylenetriaminepentacetate (Gd-DTPA) have been used safely in patients. However, high Gd-DTPA concentrations increase the inhomogeneity of the local magnetic field, causing signal loss in nearby pixels [33][34][35]. While this could be ignored in the water bath itself, it may impair safety monitoring near the skull surface, where the risk of tissue overheating is high. Studies in tissue have shown the Gd-DTPA structure is not disrupted by the application of ultrasound [36]. However, investigation of Gd-DTPA in the water bath may be warranted to determine whether there is any negative impact on the chelate structure, ultrasound wave propagation, or radiofrequency wave conduction.
Computational considerations
Computation times for the current implementation of the algorithm were on the order of tens of seconds per time frame, which is not compatible with real-time clinical use. Real-time clinical use will require more powerful computing, parallelization, and algorithm innovations to reduce compute time by approximately a factor of 100, so that each time frame's temperature map is fully computed before data acquisition for the next frame is completed.
For example, a finely parallelized GPU implementation could dramatically accelerate and even obviate the use of non-uniform fast Fourier transforms for non-Cartesian reconstructions [37]. However, the method could immediately be used for pre-clinical applications, as well as in-between clinical sonications to obtain the best possible temperature maps retrospectively for treatment verification and guidance.
Other possible embodiments
Although the method presented here was demonstrated with the k-space hybrid dynamic image model, it should be compatible with other accelerated temperature mapping methods [20]. The segmented approach may also be useful to suppress temperature artifacts due to intra-scan water bath motion in fully-sampled acquisitions. Finally, it may find applications outside the brain in scenarios where the sonicated target region does not move, but there is other organ motion distant from the target (such as bowel motion in uterine fibroid treatments).
Conclusions
While the water bath enables transcranial applications of MRgFUS by providing acoustic coupling and cooling, it presents unique challenges in the reconstruction of temperature maps, particularly from undersampled MRI data. Applying separate reconstructions to the image in the brain and water bath results in lower temperature error when undersampling k-space using single and multiple receive coils. The spatially-segmented reconstruction method enables temperature estimation with low artifacts from undersampled data during brain MRgFUS treatments, and can be combined with parallel imaging methods when multiple receive coils are available.
Availability of data and materials
The datasets supporting the conclusions of this article and MATLAB code for the described algorithm are available in the spatiallySegmentedMRIThermometry repository, DOI:10.5281/zenodo.214694, https://github.com/wgrissom/spatiallySegmentedMRIThermometry/. | 5,493.6 | 2017-05-30T00:00:00.000 | [
"Engineering",
"Medicine"
] |
Germination response of palm seeds on a two-way thermogradient plate
ABSTRACT Palm trees are propagated almost exclusively by seeds and each species germinates under a certain temperature range. In this sense, the two-way thermogradient plate may be used to determine temperature limits for germination and seed response to temperature. The objective was to define the alternating temperature regime promoting higher and faster seed germination of Carpentaria acuminata and Phoenix canariensis palms using a two-way thermogradient plate. This equipment allowed 64 combinations of alternating and constant temperatures, ranging from 6.97 to 36.42 ºC for C. acuminata, and 7.96 to 35.94 ºC for P. canariensis. Seeds were sown in Petri dishes (25 x 9 cm) containing 1% water agar. Linear regressions were estimated to determine cardinal temperatures. After 50 days, non-germinated seeds were transferred from the two-way thermogradient plate to a germination chamber at 30 °C. The temperature regime promoting highest seed germination percentage of C. acuminata was 30.45/33.00 °C (day/night), with minimum, optimum, and maximum temperatures of 9.13, 28.53, and 36.33 °C, respectively. For seed germination of P. canariensis, the most appropriate temperature regime was 29.77/17.93 °C (day/night), with minimum, optimum, and maximum temperatures of 9.53, 28.03, and 35.43 °C, respectively.
INTRODUCTION
Palms belong to the Family Arecaceae that comprises more than 3,500 species from more than 240 genera, spread throughout the world, mainly in the tropical regions of Asia, Indonesia, the Pacific Islands, and the Americas (Lorenzi et al., 2004).According to Uhl & Dransfield (1987), palms are distributed in virtually all tropical and subtropical regions, not occurring in desert or semi-desert areas, except when there is water near the surface, forming oases, with only few species occurring in temperate zones.
The carpentaria palm [Carpentaria acuminata (H. Wendl.& Drude) Becc.] originates from banks of rivers and streams in flooded forests of Northern Australia, while the Canary Island date palm (Phoenix canariensis Hort.ex Chabaud) comes from coastal areas of open vegetation on stony and dry terrains of the Canary Islands (Lorenzi et al., 2010).Although exotic, both species are widely used in Brazilian landscaping (Batista et al., 2016).
The propagation of palm trees is done, almost exclusive-Gisele Sales Batista et al. ly, by seeds.However, germination is generally considered slow, uneven, and often low, with great variation along that process which is influenced by several factors, such as seed maturation degree, pericarp presence or absence, period between harvest and sowing, environmental temperature, and substrate (Broschat, 1994;Pivetta et al., 2007;Viana et al., 2016).Among all, temperature is a critical environmental factor that regulates seed dormancy overcome and induces germination (Baskin & Baskin, 2014;El-Keblawy, 2017).
Its effect at germination time affects the rate of water uptake by seeds and may alter, among others, germination percentage, speed, and uniformity (Bewley & Black, 1996;Carvalho & Nakagawa, 2000;Castro & Hilhorst, 2004).
Therefore, there is a characteristic thermal range for each species, so values below or above minimum and maximum cardinal temperatures may turn seed germination impossible (Carvalho & Nakagawa, 2000).Moreover, within such range, the temperature also acts on the necessary time to reach maximum germination (Bewley & Black, 1985).
Seeds of certain species show better germinative behavior when submitted to temperature alternation corresponding to natural fluctuations found in the environment, with lower night and higher day temperatures; however, there are also species which seed germination is favored when submitted to constant temperatures (Copeland & McDonald, 1995;Salomão et al., 1995;Lima et al., 1997).Palm seeds usually germinate under a certain temperature range that may be defined by their place of origin, so the ultimate establishment of this amplitude is fundamental to define the possible geographic distribution of each species.Luz et al. (2008), for instance, indicate the temperature range of 25 to 30 °C for seed germination of Dypsis decaryi, while Wen (2019) recommends 20 to 30 °C for Archontophoenix alexandrae germination.Furthermore, Lorenzi et al. (2004) and Broschat (1994) suggest the ideal temperatures of 24 to 28 °C and 30 to 35 °C, respectively, for seed germination of several palm species.
In this sense, the two-way thermogradient plate, which is a bi-directional incubator, may generate data to determine the temperature limits for seed germination of many species; furthermore, it may be used for the development of germination temperature threshold models to assess the sensitivity of germination response to temperature (Manger, 1999).
The objective of this work was to define the alternating temperature regime promoting higher and faster seed ger-mination of Carpentaria acuminata and Phoenix canariensis palms using a two-way thermogradient plate.
MATERIAL AND METHODS
The work was carried out at the Seed Conservation Department of the Millennium Seed Bank, Wakehurst, belonging to the Royal Botanic Gardens, Kew, and located in Ardingly, Sussex, UK.Seeds were taken from pulped fruits of carpentaria palm and Canary Island date palm obtained from a commercial company located in London, UK.Water content of the seed lots and embryos were 18.13 ± 1.7% and 19.12 ± 2.1%, respectively, for carpentaria palm, and 10.32 ± 1.3% and 18.24 ± 2.9%, respectively, for Canary Island date palm, determined by the oven method at 103 °C for 17 hours (ISTA, 2017).
The germination test was conducted according to the Millennium Seed Bank methodology.Therefore, seeds were washed in a 2% sodium hypochlorite solution for 10 minutes in gentle manual shaking, then washed three times in distilled water, and sown in Petri dishes (25 x 9 cm) containing 1% water agar.There were 20 seeds per Petri dish for each temperature on the two-way thermogradient plate, and 64 temperature combinations for each species.
The Petri dishes were placed on the two-way thermogradient plate in an 8 x 8 factorial scheme.The two-way thermogradient plate was set with two temperature gradients that could vary from 4 to 40 °C.The equipment then ranged from 6.97 to 36.42 ºC for Carpentaria palm, and Seed germination was evaluated based on the emission of the germinative bud, every two days, for 50 days.Germination proportion was then plotted to produce a contour map of the 'landscape' germination to highlight optimal conditions for germination.Mean germination percentage was also calculated for the constant temperatures, i.e., from the lower left corner of the plate (always cold) to the upper right corner of the plate (always hot).Germination rate was also evaluated, expressed as 1/T50, which is the time, in days, to reach 50% of the final germination percentage.
Linear regressions were estimated using the germination rate values to determine base and maximum temperatures (Tb and Tm, respectively) (Covell et al., 1986).The optimal temperature (To) was estimated by the intersection between these two regression lines (Hardegree, 2006).
In addition, at the end of the experiment, seeds that did not germinate on the two-way thermogradient plate were transferred to a germination chamber type BOD, at 30 °C with a 12-hour photoperiod, to verify whether the extreme temperatures (either low or high) caused any harm in the seeds.
Germination percentage data were transformed into arcsine and submitted to the variance analysis, and means were compared by the Tukey test (P ≤ 0.05).The Sigma Plot 9.0 (Sigma Plot, 2004) software was used to plot the combination of the germination data with temperature regimes.
Gisele Sales Batista et al.
RESULTS
Both species germinated under a great temperature range on the two-way thermogradient plate.However, seeds under low temperature regimes did not germinate, i.e., those placed near the lower left corner (always cold), with temperatures close to 7.78 °C for Carpentaria palm (Figure 2) and 9.42 °C for Canary Island date palm (Figure 3).
The main germination range of Carpentaria palm at the alternating temperatures was concentrated between 24 and 34 °C during the day and 10 and 36 °C overnight.At constant temperatures, it occurred mainly between 20 and 36 °C (Figure 2a).Seeds germinated faster under warmer temperatures, especially when it was always warm (on the upper right corner) (Figure 2b).At constant temperatures, it occurred mostly between 18 and 35 °C.There was 100% germination at the alternating temperatures varying from 24 to 32 °C during the day and at 22 °C overnight (Figure 3a).Seeds also germinated faster under warmer temperatures, especially at the alternating temperatures of 24 to 32 °C during the day and 27 to 32 °C at night (Figure 3b).There was a significant difference among the temperature amplitudes for germination percentage of Carpentaria palm seeds (Figure 4).The highest value (41.75 ± 2.63%) was observed at the alternating temperature of 30.45 °C during the day and 33 °C at night, which did not differ from the alternating temperatures of 31.20/9.83, 30.45/17.64, 30.39/25.43, 15.02/24.35, and 15.15/32.02
Seeds also germinated faster at the alternating temperature of 30.45 °C during the day and 33 °C at night, which did not differ from the alternating temperatures of 30.45/17.64,30.39/25.43, and 15.15/32.02°C (day/night) (Figure 4b).
Similarly, a significant difference was observed among temperature ranges for both germination percentage and germination rate of Canary Island date palm seeds (Figure 5).The highest germination percentage (73.07± 6.23%) was observed at the alternating temperature of 29.77 °C during the day and 17.93 °C at night, which did not differ from the alternating temperatures of 29.55/10.04, 29.93/25.88, 30.17/33.74, 14.77/29.37, and 14.62/33.20 °C (day/night) (Figure 5a).No germination was observed at the alternating temperatures of 15.32/10.47 and 15.11/18.02°C (day/night).The highest germination rate occurred at the alternating temperature of 29.93 °C during the day and 25.88 °C at night, which did not differ from the alternating temperatures of 29.55/10.04, 29.77/17.93, 30.17/33.74, 14.77/29.37, and 14.62/33.20 °C (day/night) (Figure 5b).ture for seed germination of Carpentaria palm is 28.53 °C, with minimum temperature of 9.13 °C and maximum of 36.33 °C.For Canary Island date palm, the optimum temperature for seed germination is 28.03 °C, with minimum of 9.53 °C and maximum of 35.43 °C.
Seeds of Carpentaria palm that did not germinate under cold temperatures [alternating temperatures of 14.49/9.28and 14.65/16.92°C (day/night) -lower right corner of the two-way thermogradient plate] did achieve germination when transferred to the constant temperature of 30 °C, considered close to the ideal, which is 28.53 °C; there was then 10 to 40% germination (Figure 6).Similar effect was observed for Canary Island date palm seeds, which did not germinate under colder temperatures [alternating temperatures of 15.32/10.47 and 15.11/18.02°C (day/night) -lower right corner of the two-way thermogradient plate]; when transferred to the constant temperature of 30 °C, which is close to the ideal (28.03 °C), presented 80% germination (Figure 7).Gisele Sales Batista et al.
Cold temperatures were not lethal to seeds of both
Carpentaria palm and Canary Island date palm, even when exposed to low temperatures for 50 days, as they did germinate when submitted to the warmer, possibly ideal, temperature.
DISCUSSION
Seeds of Carpentaria palm and Canary Island date palm germinated at different temperature regimes following their, respectively, tropical and subtropical origin that determines the ideal temperatures for germination.Seeds of both species did not germinate under cold temperatures (lower left corner of the two-way thermogradient plate) but did germinate under hot temperatures (upper right corner of the two-way thermogradient plate) or when seeds were placed under 30 °C after such cold period.However, Orozco-Segovia et al. (2003) report that some subtropical species do not require high temperatures for germination and, also, some even needs a period of cool temperature to reach highest germination.Nevertheless, He et al. (2021) mention that, for Canary Island date palm, seed germination happens when dormancy is released by a cold stratification in the soil over winter.Sandoval (2011), when studying temperature regimes from 5.8 to 36.3 °C for seed germination of Browningia candelaris on the two-way thermogradient plate, also observed that there was either no germination under extreme temperatures or it was very low.Similarly, Asomaning et al. (2010) reported that seeds of Khaya anthotheca, when submitted to temperature ranges from 5 to 40 °C on the two-way thermogradient plate, did not germinate under extreme temperatures.
Seeds of both species presented an optimum temperature for germination of approximately 28 °C, either constant or alternating, although constant high temperatures may not be always appropriate for palm seed germination (Orozco-Segovia et al., 2003).Nevertheless, it has been proven by several authors that temperatures close to this are favorable for seed germination of many palm species, such as 25 °C for Syagrus coronata (Porto et al., 2018) and Copernica prunifera (Reis et al., 2010), 30 °C for Oenocarpus minor (Silva et al., 2006) and Elaeis guineensis (Norsazwan et al., 2016), or 25-30 °C for Dypsis decaryi (Luz et al., 2008).Furthermore, close temperatures may even be applied before germination to overcome dormancy, as reported by Ferreira & Gentil (2017) for Phytelephas macrocarpa seeds maintained under 25 °C for nine months without losing viability.Therefore, high temperatures may be necessary to overcome seed dormancy of many species (Orozco-Segovia et al., 2003).Lorenzi et al. (2004), for example, mention that temperatures between 24 and 28 °C are considered favorable for seed germination of several palm species, while Broschat (1994) observed that many palm seeds germinate better at 30 to 35 °C.Constant temperatures of about 30 °C also promoted seed germination of eight tropical palm species of Phoenix and Syagrus genera (Pritchard et al., 2004).Similarly, seeds of four species of Ravenea germinated rapidly at 30 °C (Rakotondranony et al., 2004).
As seeds of palm species are considered recalcitrant, Roberts (1973) and Sarasan et al. (2002) report that seeds shall not be exposed to low temperatures because of viability losses would occur from only few weeks to few months.This is due to high humidity present in the seeds, so cell freezing may occur and, consequently, cell disintegration, hindering germination.However, this effect may also be related to the characteristic seed dormancy of each palm species but, as reported by Orozco-Segovia et al. (2003), mechanisms of both seed germination and dormancy are yet poorly understood for many palms.For both Carpentaria palm and Canary Island date palm, cold temperatures were not lethal.
The low water content found in seeds of both Carpentaria palm and Canary Island date palm (18.13 ± 1.7% and 10.32 ± 1.3%, respectively) probably prevented seed degradation.However, for many palm species, such low water content is considered critical, but these seeds have been reported to be tolerant to desiccation, that is, the low seed water content does not affect germination (Batista et al., 2016).
CONCLUSIONS
The temperature regime promoting highest seed germination percentage of Carpentaria palm (Carpentaria acuminata) was 30.45/33.00 °C (day/night), with minimum, optimum, and maximum temperatures of 9.13, 28.53, and 36.33 °C, respectively.For practical purposes, the alternating temperature of 31/33 °C (day/night) may be recommended.
For seed germination of Canary Island date palm (Phoenix canariensis), the most appropriate temperature regime was 29.77/17.93°C (day/night), with minimum, optimum, and maximum temperatures of 9. 53, 28.03, and 35.43 °C, respectively.For practical purposes, the alternating from 7.96 to 35.94 ºC for Canary Island date palm, with a 12-hour photoperiod (50-100 W m -2 light photon flow) in one direction during the day and over the same temperature range in the perpendicular direction at night.Alternating temperatures (day/night) promoted a constant temperature line from the lower left (always cold) to the upper right corner (always hot) (Figure 1).The equipment then allowed for seed germination testing under a wide range of constant and alternating temperature regimes over time, resulting in 64 temperature combinations (regimes), with eight constant temperatures and 76 alternating temperatures; each combination represented a temperature regime (alternating and/or constant)with no replication.However, for statistical analysis, the germination results of both palm species were grouped into some temperature combinations to obtain the replicates.Eight groups of eight replicates were used for the analysis, considering temperature extreme and central conditions, with up to 5 °C difference among replications.
Figure 1 :
Figure 1: Temperature distribution on the two-way thermogradient plate used to evaluate seed germination response of Carpentaria palm (Carpentaria acuminata) (a) and Canary Island date palm (Phoenix canariensis) (b) along 50 days.
Figure 2 :
Figure 2: Germination percentage (%) (a) and germination rate (1/T 50 ) (days -1 ) (b) of Carpentaria palm (Carpentaria acuminata) seeds on the two-way thermogradient plate with a 12-hour photoperiod along 50 days [numbers and colors indicate germination percentage (a) and germination rate (b); curves show limits among results obtained in the contour maps].
Figure 3 :
Figure 3: Germination percentage (%) (a) and germination rate (1/T 50 ) (days -1 ) (b) of Canary Island date palm (Phoenix canariensis) seeds on the two-way thermogradient plate with a 12-hour photoperiod along 50 days [numbers and colors indicate germination percentage (a) and germination rate (b); curves show limits among results obtained in the contour maps].
Figure 4 :
Figure 4: Germination percentage (a) and germination rate (1/T 50 ) (days -1 ) (b) of Carpentaria palm (Carpentaria acuminata) seeds on the two-way thermogradient plate with a 12-hour photoperiod along 50 days -each cell represents the mean of eight alternating and constant temperatures (Different letters indicate statistical difference among temperatures by the Tukey test at 1% significance).
Figure 5 :
Figure 5: Germination percentage (a) and germination rate (1/T 50 ) (days -1 ) (b) of Canary Island date palm (Phoenix canariensis) seeds on the two-way thermogradient plate with a 12-hour photoperiod along 50 days -each cell represents the mean of eight alternating and constant temperatures (Different letters indicate statistical difference among temperatures by the Tukey test at 1% significance).
Figure 6 :
Figure 6: Germination percentage (%) of Carpentaria palm (Carpentaria acuminata) seeds in the BOD incubator at 30 °C with a 12-hour photoperiod [numbers and colors indicate germination percentage; curves show limits among results obtained in the contour maps].
Figure 7 :
Figure 7: Germination percentage (%) of Canary Island date palm (Phoenix canariensis) seeds in the BOD incubator at 30 °C with a 12-hour photoperiod [numbers and colors indicate germination percentage; curves show limits among results obtained in the contour maps]. | 4,095 | 2023-03-01T00:00:00.000 | [
"Engineering"
] |
Microscopic model for magneto-electric coupling through lattice distortions
We propose a microscopic magneto-electric model in which the coupling between spins and electric dipoles is mediated by lattice distortions. The magnetic sector is described by a spin S=1/2 Heisenberg model coupled directly to the lattice via a standard spin-Peierls term and indirectly to the electric dipole variables via the distortion of the surrounding electronic clouds. Electric dipoles are described by Ising variables for simplicity. We show that the effective magneto-electric coupling which arises due to the interconnecting lattice deformations is quite efficient in one-dimensional arrays. More precisely, we show using bosonization and extensive DMRG numerical simulations that increasing the magnetic field above the spin Peierls gap, a massive polarization switch-off occurs due to the proliferation of soliton pairs. We also analyze the effect of an external electric field $E$ when the magnetic system is in a gapped (plateau) phase and show that the magnetization can be electrically switched between clearly distinct values. More general quasi-one-dimensional models and two-dimensional systems are also discussed.
We propose a microscopic magneto-electric model in which the coupling between spins and electric dipoles is mediated by lattice distortions. The magnetic sector is described by a spin S=1/2 Heisenberg model coupled directly to the lattice via a standard spin-Peierls term and indirectly to the electric dipole variables via the distortion of the surrounding electronic clouds. Electric dipoles are described by Ising variables for simplicity. We show that the effective magneto-electric coupling which arises due the interconnecting lattice deformations is quite efficient in one-dimensional arrays. More precisely, we show using bosonization and extensive DMRG numerical simulations that increasing the magnetic field above the spin Peierls gap, a massive polarization switch-off occurs due to the proliferation of soliton pairs. We also analyze the effect of an external electric field E when the magnetic system is in a gapped (plateau) phase and show that the magnetization can be electrically switched between clearly distinct values. More general quasi-one-dimensional models and two-dimensional systems are also discussed. Multiferroic materials exhibit a magnetoelectric (ME) coupling between their electrical and magnetic moments, a promising feature for device designs controlling magnetization with electric fields, or conversely electrical polarization with magnetic fields. They have been the subject of intense research in the last decade, a century later than the pioneering insight of P. Curie [1] and fifty years after the first theoretical prediction and experimental realization in Cr 2 O 3 [2,3]. Current revival may be traced back to the discovery of simultaneous polarization and magnetization in bismuth ferrite BiFeO 3 [4] and gigantic magnetoelectric effects in rare earth perovskite manganites Te(Dy)MnO 3 [5]. Since then a series of exciting new materials and new microscopic descriptions have been developed (see the reviews [6][7][8][9][10][11][12][13] and references therein). Still, technologically useful multiferroic materials are very rare and constitute an active area of research.
Multiferroics are usually divided into two main groups, named type I and II, depending on whether ferroelectricity and magnetism have different or the same origin (see e.g. [10,12] and references therein). Within the second group, in which ferroelectricity occurs in a magnetically ordered state, further distinction can be made if the magnetic order is collinear [10] or non-collinear [14,15].
In the present Letter, we focus on quasi-one dimensional materials with collinear magnetic orders and propose an effective microscopic model in which the ME coupling is mediated by lattice distortions. Our main motivation arises from many different experiments where the coupling between magnetic moments, elastic distortions and electric dipoles have been observed, in particular, those in [24,25] where multiferroicity has been linked to magnetoelastic deformations in collinear spin models, which in turn produce a net electric polarization.
In this context, we aim to provide a natural microscopic connection between the electro-elastic and magneto-elastic effects and the resulting ME coupling. To this end we propose a model describing magnetic ions with spin S=1/2, dipolar degrees of freedom and deformations along a preferred axis, which allows for a description in terms of almost independent chains of octahedra, as is the case for e.g. [16,19,25], or any other structural units. We find, among other effects, that this model allows for a switch-off the electric polarization by applying a magnetic field, as well as for a magnetization jump induced by varying an electric field. These functionalities are key features that could lead to multiferroics based technologies [26].
We consider a chain of spin 1/2 magnetic ions [27] with the coupling to the lattice taken for simplicity as an adiabatic spin-Peierls term. We also assume that the ions whose motion produce the electric dipoles move in a deep enough double-well potential (the so-called orderdisorder limit [28]) so that the orientation of electric dipoles is described by local Ising variables σ i . Under lon-arXiv:1908.00312v1 [cond-mat.str-el] 1 Aug 2019 gitudinal distortions, we assume that dipoles remain located middle-way between magnetic ions. This granted, the coupling between elastic deformations and electric dipoles recognize two contributions: one stems from the natural 1/r 3 dependence of the dipole-dipole interaction and the other, central in our proposal, arises from a pantograph mechanism [29]. As changes in longitudinal bond lengths are related to the heights of the basic structural units, distortions change the width of the double well potentials which in turn modify the dipolar strengths. A slight generalization could include the so-called bondbending effects, where the magnetic superexchange is better described in terms of bond angles [19]; the conclusions of the present work would remain unchanged.
Assuming a preferred direction for the magnetoelastic distortions, a minimal geometry for the pantograph mechanism is depicted in Fig. 1(b-c), where for definiteness we set the dipoles to be perpendicular to the chain direction; octahedra in three-dimensional materials (a) undergo a similar process. In the following we analyze this simple geometry, considering the Hamiltonian where H SP is the usual spin-Peierls Hamiltonian for S=1/2 spins S i and bond length distortions δ i , with antiferromagnetic exchange J m and elastic stiffness K, and H D is the (long range) electric dipolar energy.
For transverse uniaxial dipoles H D can be written as where the distance r ij = r ij ({δ k }) between dipoles at links j > i depends on distortions. The electric dipolar moments also depend on distortions by the pantograph mechanism, leading in a linear approximation to External magnetic and electric fields along the z axis couple to the spins and dipoles, respectively, by In a general geometry, both α and β should be understood as phenomenological microscopic parameters, that could be fitted by experiments or by first principles computations. The transversality condition on dipole orientations could be relaxed, either because of classical tilting or the inclusion of quantum fluctuations; in these cases our model requires further elaboration, to be reported in a forthcoming work.
In the case that screening makes negligible dipolar interactions beyond first neighbors, the Hamiltonian in Eq.
(1) simplifies to (6) where J e = λ 2 (p 0 ) 2 is the undistorted effective electric exchange coupling. Integrating out deformations would lead to a quartic expression coupling directly the magnetic and elastic degrees of freedom, similar to that proposed in [27] to describe organic molecular solids. The microscopic derivation of this ME coupling will also be the subject of a forthcoming paper. We recall that the pantograph effect in Eq. (4) and the dependence of dipole-dipole electrostatic couplings on distance are at the root of the electroelastic coupling mechanism. The electroelastic part of the Hamiltonian (setting α = 0) is easily analyzed on classical grounds. Distinct dipolar configurations are favoured according to the electric field and the different couplings considered, leading to a rich phase diagram. We show in Fig. 1(d) the electroelastic phases in the E − J e plane for β = 0.2; K sets the energy scale. The lattice distortions can be analytically computed as a superposition of period two and/or period four harmonic distortions. The dimerized phase (Dim) has vanishing polarization at E = 0, slightly raising until a critical field E c1 where it jumps to nearly half of saturation in a quadrumarized phase (Quad). Distortions have contributions from both harmonics along this phase and the polarization also raises slightly, until a jump to saturation at a critical field E c2 .
On the other hand, the magnetoelastic part of the Hamiltonian (setting J e = 0) has been extensively studied mainly since the discovery of CuGeO 3 [30] and the spin-Peierls effect is well established: the system is unstable to a lattice deformation pattern commensurate with magnetic correlations and eventually dimerizes at zero magnetic fields. This mechanism happens to be effective also in frustrated chains, to give rise to spin gaps (magnetization plateaux) at non-zero magnetization M [31]. Magnetic excitations with S z = 1 (magnons) on top of a given plateau split into a number of solitons which is fixed by the plateau magnetization ratio. These solitons repel each other and form hence a periodic array [32]. An efficient analysis can be made in the bosonization framework (see [31] for details). In this language [33] the continuum expression for the spin energy density S i · S i+1 → ρ(x) reads where φ is the bosonic field, k F = π 2 (1 − M ), M is the magnetization (relative to saturation), a, b are Mdependent non-universal constants and the ellipses indicate higher harmonics. The magnetoelastic coupling will then be effective when distortion modulations are commensurate with spin energy density oscillations.
Our approach to the full Hamiltonian in Eqs. (5, 6) is based on a self consistent adiabatic procedure to minimize the energy for a given (classical) dipolar and (quantum) spin configuration, setting distortions as (with a subtraction of their average in order to fulfill a fixed length constraint). We have performed an iterative numerical analysis based on the Density Matrix Renormalization Group (DMRG) to solve the magnetic and electric sectors in the adiabatic equations (8), along the lines stated in [34] and implemented in a similar context in [31]. We have used periodic boundary conditions, keeping m = 300 states during up to more than 100 sweeps in the worst cases, getting truncation errors lower than O(10 −12 ). The present model is capable of displaying the ME interplay. In particular, when spin-Peierls dimerization occurs at zero magnetic field and the magnetic subsystem is in a gapped phase with M = 0, one has 2k F = π and the more relevant modulation term which is commensurate with the spin energy density oscillations reads δ(x) = δ D cos(πx + qπ) , q = 0, 1.
For E = 0 the electric subsystem is in the antiferroelectric Ising regime and exhibits a spontaneous polarization P z total ≡ 1 Notice that the polarization is extensive and spontaneous, with δ D = 0 due to the spin-Peierls effect. Moreover, the polarization has two possible orientations depending on the breaking of the translational symmetry of the magnetoelastic chain into Z 2 as indicated in Fig. 1(c). This in turn induces a spontaneous breaking of inversion symmetry along the z axis. By in- creasing the magnetic field above the spin gap (h > h c1 ) there occurs an incommensurate transition with the excitation of localized singlets into triplets, which decay into pairs of solitons. The double degeneracy of dipolar antiferroelectric configurations has a dramatic effect on the polarization: as solitons, for E = 0, form a regular array [32] interpolating between q = 0, 1, P z total vanishes identically.
Thus the magnetic transition causes a complete switchoff of electrical polarization, P z total (h > h c1 ) = 0. This effect could be observed in inelastic neutron scattering experiments.
The numerical results shown in Fig. 2 illustrate the polarization switch-off mechanism: the left panel shows the presence of a magnetization plateau at M = 0 and a critical magnetic field h c1 to overcome it; the right panels show the spin and distortion configurations, as well as the dipolar background and the net polarization. For M = 0 the alternating distortions are in phase (say q = 0) along the chain, while for S z = 1, 2 well defined equidistant solitons produce regions with q = 0, 1 and a vanishing net polarization; the analytical expression for the first soliton pair, δ i = ±δ D tanh( i−i1 ξ ) tanh( i−i1+N/2 ξ ), is indicated with dashed lines in the right middle panel.
The presence of a finite electric field E < E c1 penalizes the region with dipoles and distortions having the same sign (see Eq. (5)), gluing the soliton-antisoliton pairs and producing damping in the polarization switch-off effect (see Fig. 4, upper right panel).
Higher electric fields E c1 < E < E c2 induce dipole flips, driving the electric subsystem to a ↑↑↑↓ configuration. Being the distortions a superposition of period two and four harmonics, the presence of magnetization plateaux at M = 0 and M = 1/2 is anticipated. We have checked numerically that this picture remains qualitatively the same when the dipolar subsystem is coupled with magnetism (α = 0), with a smooth renormalization of the phase boundaries in Fig. 1(d). Representative magnetization curves exhibiting plateaux, computed numerically from DMRG, are shown in Fig. 3, for values of E = 0.2, 0.45 and α = 0.2, 1.0. One observes that the plateau at M = 0 is always present, while a second plateau opens at M = 1/2 when E drives the dipolar system into the quadrumerized phase. The plateaux widths are enhanced by higher magnetoelastic coupling α. when E drives the dipolar system into a quadrumerized phase a second plateau opens at M = 1/2. h ± , the lower and upper boundaries of the M = 1/2-plateau, are marked for later discussion (see Fig. 6).
Details on the quantum states at the M = 0 plateau and their magnetic excitations are shown in Fig. 4. We show distortion and magnetization profiles for low electric fields, at S z = 0 (a) and first magnetized excitation (b). In the latter the continuous lines indicate the soliton profiles for E = 0, to be compared with the finite field profiles (dashed lines) that show the gluing of solitons. This gluing effect is more pronounced for electric fields in the quadrumerizad phase (c), as seen in panel (d) where distortions are fitted with The plateau at M = 1/2 has particular features not present in the spin-Peierls one at M = 0. On the one hand, the magnetic wave function is compatible with an ordered direct product of singlets and spin up sites, as depicted in Fig. 5. Magnetic excitations are simply given by magnons, that is singlet-triplet transitions that do not decay into solitons (see Fig. 5, right panels). On the other hand the quantum state is topologically non trivial, as signaled by the even degeneracy of the entanglement spectrum [35]. The present pantograph model also describes the effects of an electric field on the system magnetization. Let us analyze the scenario in which both dimerized and quadrumerized phases appear as a function of E, e.g. by choosing J e = 0.5, β = 0.2 (see Fig. 1(d)). For E c1 < E < E c2 the dipolar sector is quadrumerized and so is the lattice, which forces the magnetic sector to open a plateau at M = 1/2, as clearly seen from the numerical results in Fig. 3. Choosing a background magnetic field h − at the lower boundary of this plateau, the magnetization will jump from some value M − < 1/2 to M = 1/2 as the electric field crosses E c1 from below; conversely, choosing h + at the upper boundary the magnetization will jump from some value M + > 1/2 to M = 1/2. This ME response is schematically depicted in Fig. 6. Such control of magnetization by an electric field is one of the goals of multiferroic technology developments [26].
FIG. 6. Magnetoelectric response to the electric field in the quadrumerized scenario (schematic). Under appropriate applied magnetic fields h ± , as the electric field produces a polarization jump (see Fig. 1(d) at Je = 0.5) the magnetization switches from M ± to M = 1/2, the value at the plateau (see Fig. 3).
Several quasi-one-dimensional materials showing multiferroicity have been studied in the last years [11,16,24,25]. In most of these systems, a similar mechanism to the one proposed here seems to be relevant to describe the origin of the magnetoelectricity, though the spin ordering in some of them is of the type ↑↑↓↓ at zero magnetic field, spins may have a strong Ising anisotropy or take alternating different values along the relevant chain directions, etc. In order to describe these observed phenomena, one needs to consider further neighbors couplings between the neighboring spins and allow for higher spin SU (2) representations or even consider Ising spins.
In the cases in which the magnetic moments can be treated as Ising variables, such as Ca 3 CoMnO 6 , the ANNNI model has been proposed to describe the physics [16]. Even in such cases, the description of the process of magnetic depolarization must include excitations and/or quantum fluctuations. In this respect, our model is expected to provide the correct description of the transition and could be tested against experiments.
The extended J 1 − J 2 model studied in [36] shows an M = 1/2 plateau with period four symmetry breaking and dissociation of solitons as one increases or decreases the magnetization out of the plateau. In experiments done in R 2 V 2 O 7 (R=Ni,Co) a similar situation has been observed, together with a sharp change in P on both sides of the 1/2 plateau [17]. In spite of these differences, the behavior of the magnetization and electric polarization in a magnetic field for spin gapped phases even at nonzero field (plateaux phases) seems to be ubiquitous in all of the materials listed above.
The present mechanism is readily generalized to higher dimensions by considering the relevant structural units as octahedra in perovskites, double tetrahedra in hexagonal manganites, etc. These units containing the magnetic atoms are arranged, say, in the corners of a square/cubic lattice and a kind of spin-Peierls mechanism can occur. Linking again the deformation of the lattice along a given preferential direction with the height of the basic unit (see Fig. 1(a)) the magnetoelectric coupling arises in the same way. Even in the case that tunneling between double-well potential minima was not negligible, and electric dipoles were better described by a transverse Ising model, we expect our main conclusions to remain valid. Also higher spin magnetic ions, either classical or quantum, could be considered.
Though the relation between striction and multiferroicity in quasi-one-dimensional systems has been discussed in several works [10,19,20], in most of the cases dipolar moments are not included as dynamical variables. In the present Letter, we fill this gap by proposing a more general mechanism that includes electro-elastic couplings via the distortion dependence of both local dipolar strengths and their interactions. The full Hamiltonian couples spins and electric moments via lattice deformations through the proposed pantograph-like effect.
We hope that the present pantograph mechanism will shed light on the understanding of the microscopic origin of ME coupling in type II multiferroics. | 4,506.6 | 2019-08-01T00:00:00.000 | [
"Physics"
] |
Primal Domain Decomposition Method with Direct and Iterative Solver for Circuit-Field-Torque Coupled Parallel Finite Element Method to Electric Machine Modelling
. The analysis and design of electromechanical devices involve the solution of large sparse linear systems, and require therefore high performance algorithms. In this paper, the primal Domain Decomposition Method (DDM) with parallel forward-backward and with parallel Preconditioned Conjugate Gradient (PCG) solvers are introduced in two-dimensional parallel time-stepping finite element formulation to analyze rotating machine considering the electromagnetic field, external circuit and rotor movement. The proposed parallel direct and the iterative solver with two preconditioners are analyzed concerning its computational efficiency and number of iterations of the solver with different preconditioners. Simulation results of a rotating machine is also presented.
Introduction
The numerical field calculation of electromechanical devices is a very complex task, because a lot of different physical aspects should be considered for the appropriate modelling.The performances of electrical equipments are not defined only by their electromagnetic field, because the electromagnetic field has strong interaction between the following quantities: electromagnetic field distribution, mechanical oscillation equa-tion, external circuits, etc.The electric machines are the most obvious examples.
The Finite Element Method (FEM) [1], [2] is a wellknown technique for the solution of a wide range of problems in science and engineering.However, a few years back, the simulation of complex structures considering multiple aspects in a same set of equations was restrictive, due to the unavailability of enough computer capabilities for data processing.However, nowadays, thanks to the improvements in the computer architecture, the analysis of complex electromagnetic systems is more affordable.But, the analysis of complex systems, e.g.rotating electrical machine analysis considering movement and voltage drive source require a computing effort to solve large sparse linear systems.These large linear systems arise from the discretization with the help of finite element method.The solution of these large equation systems are very resource-intensive and time consuming, wherein the resources and time of the calculation plays an important role for designers and researchers.Therefore, the solution of a complex system should be parallelised in order to speed-up the numerical computations with less computer requirement.
In this paper, we propose to solve a two-dimensional parallel time-stepping finite element problem using primal Domain Decomposition Method [3], [4], [5], [6], [7].The used primal DDM is also called the static condensation method, the method of sub-structuring or the Schur complement method [3].The direct solver is the parallel forward-backward method with parallel factorisation [4], [5].The iterative solver is a parallel Krylov method, the parallel preconditioned conjugate gradient method [3], [6], which is currently one of the most popular method for systems with real symmetric positive definite matrices.Two preconditioners, Jacobi and Neumann-Neumann preconditioner [7] used in the solver algorithm to improve the convergence behaviour.We present the numerical behaviour of the parallel direct solver and the parallel PCG solver with preconditioners for the modelling of electrical machine with direct coupled field-circuit formulations [2].
The paper is organized as follows.The next section briefly describes the used equations and methods to give an introduction to the formulation of the parallel time-stepping finite element method coupled with circuit and mechanical oscillation equations.The third section describes the Schur complement method and how this method, and its direct and iterative solver algorithms can be used to formulate and solve a coupled problem.Section 4. collect numerical results to illustrate the potential of the method, an induction machine with different mesh size are then presented.Finally, some extensions of the method are discussed.
Field-Circuit Coupling Finite Element Formulation
The electrical machine is modelled in two-dimensional space, using the FEM to discretize the domain, which is based on the weak formulation of the partial differential equations, which can be obtained by Maxwell's equations and the weighted residual method [1].The magnetic vector potential formulation has been applied, and the temporal derivatives are discretized by the backward Euler's scheme [2].The field and external circuit equations are combined together using the direct coupling method [2], [6].Equation (1) shows the matrix system of the field equations [2], [6]: where A is the vector of magnetic vector potential, I is the vector of currents in the windings, U is the vector of voltages at the terminal of the winding, S is the matrix related to permeability, N is the matrix related to electric conductivity, P is the matrix associated with constant coil current, Q is the matrix associated with flux linkage, R is the matrix of d.c.resistance of windings, L is the matrix of the end-windings inductances.
In order to simulate the rotation of the rotor in the two-dimensional case, we used one of the most common method, the so called sliding surface technique with first order nodal interpolation method [8].The interpolation method is necessary, when the fixed (stator) and mobile (rotor) part of the mesh are non-conforming because of variation of angular speed.The new angu-lar speed and rotor displacement are evaluated by the mechanical oscillation equation [6]: where J r is the rotor inertia moment, D r is the friction damping coefficient, T e is the electromagnetic torque, T L is the load torque acting on the mechanical axis, ω r is the rotor speed, and α r is the rotor angular position.At each time step, the electromagnetic torque is calculated via the Maxwell's stress tensor [2], [6] by the following relationship: where L is the axial length of the domain, and r is the position vector linking the rotation axis to the element dΓ, and Γ is a surface, which is placed in the middle of the air gap, B is the magnetic flux density, µ 0 is the permeability of vacuum, and n is the normal unit vector to the surface.
Primal Domain Decomposition Method
When domain decomposition method is used, the problem domain Ω is to divide into several sub-domains in which the unknown magnetic vector potentials and currents can be calculated simultaneously, i.e. in a parallel way.The general form of a linear algebraic problem arising from the discretization of a parabolic type problems defined on the domain Ω can be written as Ka = b [3], [5], in more detail: where K ∈ R (n×n) is the mass matrix, b ∈ R (n×1) on the right hand side of the equations, and a ∈ R (n×1) contains the unknowns.Here n is a number of unknowns.
After the problem is partitioned into a set of N S disconnected sub-domains, as it can be seen in Fig. 1 and the linear sparse system, Ka = b has been split into N S particular blocks [3], [4], [5], [6].For each sub-domains, the nodes are partitioned into interior nodes designated by the subscript i, and interface boundary nodes designated by the subscript Γ.If the interior nodes are numbered first and the interface boundary nodes are numbered last, then the sumdomain equation system can be written in the following matrix form [3], [6].
where j = 1 • • • N S , K jj is the positive definite submass matrix of the j th sub-domain, b j is the vector of the right hand side defined inside the sub-domain.
The sub-matrix K jΓ = K T Γj contains the coefficients of j th sub-domain, which connect to the interface boundary unknowns of that region.The superscript T denotes the transpose.K ΓΓ , and b Γ expresses the coupling of the interface unknowns.It should be noted, that it is much easier in the parallel computation, if the sliding surface is used as an interface boundary in the air gap, as it can seen in Fig. 1.Each sub-domain will be allocated to an independent processor core, because the sub-matrix K jj with the K jΓ , K Γj and the right-hand side b j are independent, i.e. they can be assembled in parallel on distributed memory.Only the matrix K ΓΓ , and the vector b Γ are not independent.The matrix K ΓΓ and the vector b Γ are assembled after interprocess data transfer, because they are the assembly of K ΓΓ and b Γ .K ΓΓ is called the local Schur complement, and K ΓΓ is the mass matrix of the reduced system, the Schur complement of the problem.
The assembly and the solution of the sub-matrices can be performed parallel by independent processors.However, the solution requires exchange of interface values, a Γ between the processes in charge of the various sub-domains.In many practical applications, the forward-backward method with LU factorisation, or the preconditioned conjugate gradient method is used because of its simplicity and efficiency.
The practical implementation of the parallel forward-backward substitution with parallel factorisation can be found in [4] and [5].
The parallel implementation of the preconditioned conjugate gradient method can be found in [3] and [6].
In this case, two preconditioners have been used, the Jacobi preconditioner [3], [6] and the Neumann-Neumann preconditioner [7].The Jacobi preconditioner is one of the simplest forms of preconditioning, in which the preconditioner is chosen to be the diagonal of the matrix.The Neumann-Neumann preconditioner is defined by approximating the inverse of the sum of local Schur complement matrices by the weighted sum of the inverses [7].
To illustrate how the above mentioned domain decomposition method with parallel solvers are implemented into the field-circuit coupled finite element method.
Application
In this section, to demonstrate the operation of the presented methods, a 4-pole 3-phase 3 kW induction motor with un-skewed rotor slot fed by sinusoidal voltage is tested.The test problem and parameters, and the GMSH model is from the free GetDP models it can find in [9], [10].The studied domain consists of one pole of the machine, i.e. a 45 • domain, as you can see in Fig. 2.
Anti-periodic boundary conditions are used to represent the whole problem.In this simulation, 20 periods have been calculated, and a period of the 50 Hz voltage excitation has been divided into 300 time steps.
Numerical experiments have been performed on platform composed of four CPUs Intel Xeon L5420.Each CPU is a Dual-core processor running at 2.5 GHz.It supplies 8 × 4 GB RAM memory.The benchmark presented in this paper consists in performing 15 times the same operation in order to overcome the problem caused by the finite precision of the clock.The implemented program has been developed under the MAT-LAB [11] computing environment in C language and in own scripting language of the MATLAB [11].
We compare the implemented method for different mesh size.Table 1 contains the data about the partitioned finite element mesh in various global element size factors.In the Tab. 1, the number of degree of freedom of the unpartitioned problem, GDoF, the number of degree of freedom of one sub-domain, DoF, the number of interface unknown, CDoF are summarized.In order to use the same stop criterion for the methods, ε = 10 −8 .The speedup has been calculated by the following formula, where T ime 1 is the running time in the case of the smallest element size factor, and T ime n is the running time of the different size factor [6].The efficiency has been calculated by the following formula: where Speedup sf is the speedup in the case of the different element size factor, and n is the applied processor cores [6].
The performance results of the parallel program are reported in Fig. 3, Fig. 4, Fig. 5, Fig. 6, Fig. 7 and Fig. 8 for all element size factor.The speedups (Fig. 3, Fig. 4 and Fig. 5) are computed using the wall clock time of smallest problem (2900 GDoF) as the reference point.The results show the speedup as high as 6.6 with direct solver, 9.8 and 10.8 with iterative solver for the Jacobi preconditioner, and the Neumann-Neumann preconditioner, respectively.In both iterative case, the speedup is continuously increase until the 0.004 element size factor, because the time of the interprocess communication is relatively smaller, than the time of the parallel PCG.This is also true for the direct solver until 0.006 element size factor.However, this is not true for the largest test cases.When the sub-problems are too big, and the operations of both parallel solvers are very time consuming.Further the memory requirement of the program is also very high in this case.This conclusions are also supported by the figures of efficiency, as it can be show in Fig. 6 in the case of direct solver, Fig. 7 in the case of Jacobi preconditioner, and in Fig. 8 The interprocess communication hardly depend on the number of interface unknowns (CDoF in Tab. 1) and the number of applied processor cores.However, the number of iteration shows the robustness of the presented algorithm, because the curves are continuously increase, so the solver more or less independent from the number of degree of freedom and the number of interface unknown.13 show the simulation results of the induction machine.These figures show the first ten periods of the simulations.Figure 11 shows the transient speed waveform.Figure 12 shows the transient torque waveform of the machine.Figure 13 shows the transient current waveforms of the stator windings.
Conclusion
In this paper, a two-dimensional parallel finite element modelling of an induction machine have been presented taking rotor movement and field-circuit equation of the windings into account.To study the operation of the implemented method, different global finite element size factor have been considered.Results of numerical experiments on all mesh sizes compared.
The parallel direct and the parallel PCG solver with the preconditioners are work properly, as the presented results show.Furthermore, the results obtained for the simulation of the induction machine have also been presented.
The numerical experiments show the work of the implemented program is hardly depend on the size of the problem.If the problem size is too large, the efficiency of the computation is decreased, so the running performance of the implemented program is depend on the size of the problem.However, it can be concluded based on the presented results, the parallel direct solver is more efficient for the smaller problems, whereas the iterative solver is more useful for larger problem.
It should be noted, that only a two-dimensional benchmark has been used for the numerical tests.The tests with more complex three-dimensional problems will be the subject of a forthcoming work.
Fig. 1 :
Fig. 1: Domain decomposition of the finite element mesh of the induction motor.
Fig. 2 :
Fig. 2: The assembled results, the equipotential lines of magnetic vector potential and the magnetic flux density vectors.
Figure 9 and
Figure9and Fig.10show the running performance of the preconditioned conjugate gradient solver, the number of iteration versus global element size
Figure 11 ,
Figure 11, Fig.12and Fig.13show the simulation results of the induction machine.These figures show the first ten periods of the simulations.Figure11shows the transient speed waveform.Figure12shows the transient torque waveform of the machine.Figure13shows the transient current waveforms of the stator windings.
Fig. 13 :
Fig. 13: The time variation of the three phase currents.
Tab. 1: Data of the different used finite element meshes. | 3,445.4 | 2015-12-22T00:00:00.000 | [
"Computer Science"
] |
A short-term high-fat diet alters rat testicular activity and blood-testis barrier integrity through the SIRT1/NRF2/MAPKs signaling pathways
Background Overweight/obesity are metabolic disorder resulting from behavioral, environmental, and heritable causes. WHO estimates that 50% of adults and 30% of children and adolescents are overweight or obese, and, in parallel, an ongoing decline in sperm quality and male fertility has been described. Numerous studies demonstrated the intimate association between overweight/obesity and reproductive dysfunction due to a highly intricate network of causes not yet completely understood. This study expands the knowledge on the impact of a short-term high-fat diet (st-HFD) on rat testicular activity, specifically on steroidogenesis and spermatogenesis, focusing on the involved molecular mechanisms related to mitochondrial dynamics, blood-testis barrier (BTB) integrity, and SIRT1/NRF2/MAPKs pathways. Methods Ten adult Male Wistar rats were divided into two groups of five and treated with a standard diet or an HFD for five weeks. At the end of the treatment, rats were anesthetized and sacrificed by decapitation. Blood was collected for serum sex hormone assay; one testis was stored at -80ÅãC for western blot analysis, and the other, was fixed for histological and immunofluorescence analysis. Results Five weeks of HFD results in reduced steroidogenesis, increased apoptosis of spermatogenic cells, and altered spermatogenesis, as highlighted by reduced protein levels ofmeiotic and post-meiotic markers. Further, we evidenced the compromission of the BTB integrity, as revealed by the downregulation of structural proteins (N-Cadherin, ZO-1, occludin, connexin 43, and VANGL2) other than the phosphorylation of regulative kinases (Src and FAK). At the molecular level, the impairment of mitochondrial dynamics (fission, fusion, andbiogenesis), and the dysregulation of the SIRT1/NRF2/MAPKs signaling pathways, were evidenced. Interestingly, no change was observed in the levels of pro-inflammatory markers (TNFα, NF-kB, and IL-6). Conclusions The combined data led us to confirm that overweight is a less severe state than obesity. Furthermore, understanding the molecular mechanisms behind the association between metabolic disorders and male fertility could improve the possibility of identifying novel targets to prevent and treat fertility disorders related to overweight/obesity.
Introduction
An intimate connection between balanced nutrition and the preservation of a good state of human health exists, in fact a salubrious diet is associated with a reduction in morbidity and premature mortality (1)(2)(3).Many studies reported that, especially in industrialized countries, a considerable percentage of noncommunicable diseases (obesity, diabetes, cardiovascular disorders, and even some types of cancer) are correlated, directly or indirectly, to the consumption of unhealthy food, particularly those with the high trans-fatty acids and low essential nutrients content (vitamins, minerals, and proteins) (4)(5)(6).It has been estimated that obesity and overweight, syndromes characterized by the accumulation of excessive fatty tissue in the body, affect more than 1.9 billion adults worldwide, rising from epidemic to pandemic states (7).Such high prevalence, accompanied by severe social and economic consequences, makes obesity/overweight one of the major global health issues (8).It is important to note that being overweight may be considered a preclinical condition less severe than obesity, since the excessive accumulation of body fat increases, in turn, the risk of chronic diseases (9).The most used parameter to define obesity is the body mass index (BMI), calculated as a person's weight (in kilograms) divided by the square of his/her height (in meters) (10).Conversely, more accurate but less used indexes, such as waist circumference and weight gain, may provide more reliable and individualized parameters to define the consequence of excessive body fat accumulation on the development of chronic disease (11).Obesity rates have significant impacts on personal and public health; however, overweight status is often trivialized as a mere body image issue (12,13).
Besides the well-known comorbidities associated with obesity, including dyslipidemia, type 2 diabetes, and hypertension, a growing body of evidence is now focusing on its correlation with human infertility, as evidenced by the numerous papers published on this topic in recent years and, in particular, on the positive correlation between growing BMI and sub-infertility (14)(15)(16).Alteration of the hormonal milieu is one of the most evident effects of obesity.In overweight or obese men, excess body fat accumulation can increase the production of serum sex hormonebinding globulin.This glycoprotein, produced by the liver, binds to testosterone (T) and inhibits its biological action; this, along with increased aromatase (ARO) activity, leads to a decreased T/estradiol (E 2 ) ratio; estrogen increases and, inhibiting Leydig and Sertoli cell function, further impairs T production and the process of spermatogenesis (17)(18)(19)(20).
Moreover, obesity has also been defined as a "systemic oxidative stress state", in which an imbalance between reactive oxygen species (ROS) production and antioxidant capacity occurs, leading to oxidative stress.This, ultimately, damages cellular components deleterious for male germ cells (GC), and particularly for spermatozoa (SPZ), as their plasma membrane contains high levels of polyunsaturated fatty acids, and their DNA, once damaged, cannot be repaired due to lack of the cytoplasmic enzymatic systems involved in DNA repair (17,(21)(22)(23).Several studies reported that, compared to normal-weight men, obese ones have a higher chance of oligozoospermia, asthenozoospermia, and an increased rate of fragmented DNA in sperm (24-28).Furthermore, in a meta-analysis, Campbell et al. (29) described that male obesity negatively impacts the success of assisted reproductive technology (ART).Interestingly, while changes in sex hormone levels may contribute to obesity-induced male subinfertility, data from ART indicate that they may not be the only cause; in fact, obesity in men is associated with decreased pregnancy rates and increased pregnancy loss in couples subjected to ART, but, following intracytoplasmic sperm injection, the fertilization rate is considerably improved, indicating that obesity may alter sperm maturation, capacitation, and their ability to bind and fertilize the egg with still unknown mechanisms (29)(30)(31).In this regard, one of the most common tools to study obesity and its related comorbidities, including infertility, is the use of animal models, especially mice and rats, fed with a high-fat diet (HFD).The duration of the HFD is crucial; in a recent review, de Moura e Dias et al. (32) summarized the time-dependent effects of HFD in provoking obesity, assessing that at least 3 weeks of HFD are sufficient to obtain satisfactory results.However, to strengthen the phenotypic and metabolic characteristics of obesity, a longer intervention period (from 10 to 12 weeks) is necessary.Coherently, most of the studies focused on the impact of obesity on testicular activity, used a long-term HFD (10-14 or longer weeks of treatment) (33)(34)(35)(36), while just a few papers used a different approach, with a short-term HFD (st-HFD), that is correlated to an overweight condition (37)(38)(39).
This may be interesting to obtain parameters to be used to monitor the progression of infertility related to being overweight, even at the early stages before it progresses to obesity, which is considered a real "pathological state".In previous studies, we demonstrated that a 5-weeks st-HFD induced an increase in body weight and serum cholesterol and triglyceride levels, as well as alterations in testis and epididymis, i.e., induced oxidative stress, increased autophagy, apoptosis, and mitochondrial damage (40-42).Here, using the same rats fed with a st-HFD, we evaluated additional parameters of testicular activity, such as steroidogenesis and spermatogenesis, with special attention to the involved mechanisms related to mitochondrial dynamics, and blood-testis barrier (BTB) integrity.Undoubtedly, these key regulators are essential in the spermatogenic process, which guarantees the formation of high-quality gametes (43,44); on the other hand, testicular cells mitochondria and BTB are two of the main targets highly sensitive to the non-physiological conditions, and particularly in a prooxidant milieu, induced either by environmental (such as the exposure to pollutants) (45-49), and pathological (like diabetes and obesity) (50)(51)(52) factors.Finally, because many reports demonstrated the association of SIRT1/ NRF2/MAPKs pathways with testicular function altered by obesity (33,(53)(54)(55), we verified whether the abovementioned pathways may also be involved in the molecular mechanisms underlying the diet-induced testicular dysfunction obtained via a st-HFD.
Animals and tissue collection
Male Wistar rats (250-300 g, aged eight weeks) were kept in one per cage in a temperature-controlled room at 28°C (thermoneutrality for rats) under a 12-h light/12-h dark cycle.Before the beginning of the study, water, and a commercial mash (Charles River Laboratories, Calco, Italy) were available ad libitum.At the start of the study (day 0), and after seven days of acclimatization to thermoneutrality, the rats were divided into two groups of five and treated as follows: • The first group of rats (n = 5, C) received a standard diet (total metabolizable percentage of energy: 60.At the end of the treatment, rats were anesthetized with intraperitoneal injection of chloral hydrate (40mg/100g body weight), sacrificed for decapitation.The trunk blood was collected and the serum was separated and stored at -20°C for later sex hormone determination.The testes were dissected out, one testis was rapidly immersed in liquid nitrogen and stored at -80°C for western blot (WB) analysis, and the other was fixed in Bouin' solution for histological analysis.This study is reported in accordance with ARRIVE guidelines.Animal care and experiments were conducted in accord with the guidelines of the Ethics Committee of the University of Campania "Luigi Vanvitelli" and the Italian Minister of Health (Permit Number: 704/2016-PR of the 15/07/2016; Project Number: 83700.1 of the 03/05/2015).Every effort was made to minimize animal pain and suffering.
Determination of serum T and E 2 levels
Sex steroid levels were determined in serum from control and st-HFD rats using T (#DKO002; DiaMetra, Milan, Italy) and E 2 (#DKO003; DiaMetra, Milan, Italy) enzyme immunoassay kits.The sensitivities were 32 pg/mL for T and 15 pg/mL for E 2 .
Protein extraction and WB analysis
Total testicular proteins were extracted from control (n = 5) and st-HFD (n = 5) rats as described in Venditti et al. (56).Forty micrograms of total protein extracts were separated into SDS-PAGE (9 or 15% polyacrylamide) and treated as described in Venditti et al. (57).The membranes were incubated overnight at 4°C with primary antibodies, listed in Table S1.The concentration of proteins was quantified using ImageJ software (version 1.53 t; National Institutes of Health, Bethesda, USA).Each WB was performed in triplicate.
Histology and immunofliorescence (IF) analysis
For hematoxylin/eosin staining and immunolocalization analysis, 5 µm testis sections were dewaxed, rehydrated, and processed as previously described (58, 59).For details on the used antibodies, see Table S1.The cells' nuclei were marked with Vectashield + DAPI (Vector Laboratories, Peterborought, UK) and then observed under an optical microscope (Leica DM 5000 B + CTR 5000; Leica Microsystems, Wetzlar, Germany) with UV lamp, images were analyzed and saved with IM 1000 software (version 4.7.0;Leica Microsystems, Wetzlar, Germany).Photographs were taken using the Leica DFC320 R2 digital camera.Densitometric analysis of IF signals and Proliferating Cell Nuclear Antigen (PCNA)/Synaptonemal complex protein 3 (SYCP3) positive cells were performed with Fiji plugin (version 3.9.0/1.53 t) of ImageJ Software counting 30 seminiferous tubules/animal for a total of 150 tubules per group.Each IF was performed in triplicate.
TUNEL assay
The apoptotic cells were identified in paraffin sections through the DeadEnd ™ Fluorometric TUNEL System (#G3250; Promega Corp., Madison, WI, USA) following the manufacturer's protocol, with little modifications.Briefly, before the incubation with TdT enzyme and nucleotide mix, sections were blocked with 5% BSA and normal goat serum diluted 1:5 in PBS and then treated with PNA lectin, to mark the acrosome.Finally, the nuclei of the cells were counterstained with Vectashield + DAPI.The sections were observed with the same microscope described in Section 2.4.To determine the % of TUNEL-positive cells, 30 seminiferous tubules/ animal for a total of 150 tubules per group, were counted using the Fiji plugin (version 3.9.0/1.53 t) of ImageJ Software.TUNEL assay was performed in triplicate.
Statistical analysis
The values were compared by a Student's t-test for betweengroup comparisons using Prism 8.0, GraphPad Software (San Diego, CA, United States).Values for p < 0.05 were considered statistically significant.All data were expressed as the mean ± standard error mean (SEM).
Effect of st-HFD on testicular steroidogenesis
Serum T levels in st-HFD rats were significantly reduced by about 28% compared to the controls (p < 0.01); by contrast no differences in E 2 levels between the two groups were evidenced (Figure 1A).
To better evaluate the effect of st-HFD on steroidogenesis, the protein levels of steroidogenic acute regulatory protein (StAR), and 3b-Hydroxysteroid dehydrogenase (3b-HSD), two enzymes involved in T biosynthesis, were analyzed (Figure 1B).WB analysis confirmed that st-HFD altered testicular steroidogenesis, as a decrease in StAR (p < 0.05; Figures 1B, C) and 3b-HSD (p < 0.01; Figures 1B, D) protein levels, as compared to the control, was observed.In addition, the protein level of ARO, the enzyme converting T into E 2 , was also evaluated, however, results showed no difference between the two groups (Figures 1B, E) The effects of st-HFD on steroidogenesis were further confirmed by an IF staining of StAR and 3b-HSD, which is shown in Figure 1F.The signals specifically localized into the interstitial Leydig cells (LC; asterisks; Figure 1F insets); however, fluorescence intensity analysis showed a weaker signal in st-HFD animals (p < 0.01; Figures 1G, H) as compared to the control.
Steroidogenesis analysis of controls and st-HFD fed rat testis.
Effect of st-HFD on apoptosis
Figure 2 shows the effect of st-HFD on the apoptotic rate of germ and somatic cells.WB analysis revealed an increase in Bax/ Bcl-2 ratio (p < 0.01; Figures 2A, B), p53 (p < 0.05; Figures 2A, C), and Caspase-3 (p < 0.001; Figures 2A, D) protein levels in the st-HFD group as compared to the control.
In support of these data, a TUNEL assay was performed (Figure 2E).Data showed the presence of dispersed apoptotic cells in the control group, especially spermatogonia (SPG; arrows and insets; Figure 2E).st-HFD induced an increase of 165% in the number of TUNEL-positive cells (p < 0.001; Figures 2E, F), particularly of SPG, as well as scattered apoptotic LC in the interstitial compartment, as related to the control.
Effect of st-HFD on spermatogenesis
Testis from control exhibited well-organized germinal and interstitial compartment, showing GC in all differentiation stages and with mature SPZ filling tubular lumina (rhombus) as well as LC and regular blood vessels in the interstitium (asterisk; Figure 3A).The histological organization of the testes from st-HFD rats was not dissimilar from that of controls; however, it appeared clear the reduced diameter of the tubules.Indeed, the analysis of three morphometric parameters further supported this observation since the diameter of the tubules (p < 0.001) and the thickness of epithelium (p < 0.05) were lower in st-HFD group than in the control, while no differences in the % of tubular lumens occupied by SPZ were detected (Table 1).In addition, although there were no changes in the frequency of the stages characterizing the rat seminiferous epithelium (data not shown), alterations in the different phases of the acrosome biogenesis, highlighted by the PNA lectin staining, were seen (Figure 3B).
Concomitantly, labeling of PCNA and SYCP3 was performed (Figure 4F).Data showed a PCNA (green panel) specific localization in the SPG (arrows) and spermatocytes (SPC; arrowheads) in the testis of both groups; however, in st-HFD an increase approximately of 51% in PCNA positive cells (p < 0.05; Figure 4G) was observed.As for SYCP3, it localized in the SPC nucleus (arrowheads; Figure 4E), and the % of SYCP3 positive cells decreased by 53% in st-HFD group as compared to the control (p < 0.01; Figure 4H).
Effect of st-HFD on biogenesis and mitochondria dynamics
To evaluate the effects of st-HFD on mitochondrial biogenesis, peroxisome proliferator-activated receptor-gamma coactivator (PGC-1a), nuclear respiratory factor 1 (NRF1), and mitochondrial transcription factor A (TFAM) were employed as markers.We found a significant decrease in the expression levels of PGC-1a (p < 0.01; Figures 5A, B), NRF1 (p < 0.01; Figures 5A, C), and TFAM (p < 0.05; Figures 5A, D) in the testis of st-HFD rats as compared to controls.
Mitofusin (MFN2) and Optic atrophy 1 (OPA1) were employed as markers of mitochondrial fusion; Dynamin-Related Protein 1 (DRP1) was used as a marker of the fission process.Testes from st-HFD rats exhibited a slight, significant decrease in MFN2 (p < 0.05; Figures 5A, E), OPA1 (p < 0.05; Figures 5A, F), and DRP1 (p < 0.05; Figure 5A, G) protein levels as compared to control animals.IF staining was performed for TFAM (Figure 5H), MFN2, and DRP1 (Figure 5J).In the control testis, TFAM localized in the cytoplasm of SPG (arrows), SPC (arrowhead), and in the residual cytoplasm of elongating spermatids (SPT; dotted arrows).Additionally, a clear signal in the interstitial LC was also observed (insets).In the st-HFD-treated group, TFAM localized in the same cell types abovementioned (Figure 5H), but a weaker immunofluorescent signal was observed (p < 0.05; Figure 5I).Similarly, DRP1 also localized in the cytoplasm of SPG (arrow), SPC (arrowheads), in eltongating SPT (dotted arrows), as well as in LC (insets); interestingly, MFN2 signal appeared dotted-shaped and diffused in all the cell types composing the seminiferous epithelium.The analysis of MFN2 (Figure 5K) and DRP1 (Figure 5L) fluorescent signals showed a comparable pattern, statistically significant, as observed for the protein level.
Effect of st-HFD on BTB integrity markers
st-HFD produced substantial alterations in the BTB at both structural and regulatory proteins, compared to control groups (Figures 6-8).Indeed, st-HFD resulted in a significant reduction in the protein levels of N-Cadherin (N-CAD; p < 0.01; Figures 6A, B), occludin (OCN; p < 0.001; Figures 5A, C), zonula occludens-1 (ZO-1; p < 0.01; Figures 6A, D), connexin 43 (CX43; p < 0.01; Figures 6A, E), and Van Gogh-Like 2 (VANGL2; p < 0.05; Figure 6A, F), as well as in the phosphorylation status of p-Src (p < 0.001; Figures 6A, G), p-FAK-Y397 (p < 0.01; Figures 6A, H), and p-FAK-Y407 (p < 0.05; Figures 6A, I) as compared to control.For a more detailed characterization of the effects exerted by st-HFD on N-CAD, OCN, ZO-1 (Figure 7) CX43, and VANGL2 (Figure 8) localization, an IF analysis was carried out.N-CAD, one of the components of cell adhesion complexes (adhesion junctions) in BTB (60), localized both in the basal compartment, at Sertoli cells (SC) interface (striped arrows; Figure 7A), and in their cytoplasmic protrusions of the luminal compartment, associated with the heads of elongating SPT (dotted arrows; Figure 6A).Interestingly, in the testis of st-HFD-treated rats, while N-CAD immunosignal was still present in the basal compartment, in the luminal one it was quite weak, and less intense that of the control group (p < 0.001; Figures 7A, B).
OCN (Figure 7C) and ZO-1 (Figure 7E) are integral membrane and adaptor proteins, respectively, that link integral membrane tight junctions (TJ) components to the actin cytoskeleton (61).They specifically localized in the SC cytoplasm (striped arrows; Figures 7C, E; insets) in the two groups; however, the signal intensity decreased in the st-HFD-treated rats (p < 0.05; Figures 7D, F) as compared to the control.
CX43 is the principal testicular gap-junction protein, localized between adjacent SC and at the SC-GC interface (62).IF data confirmed this localization pattern; in control, CX43 was detected in the above-mentioned cell types, particularly in SPG (arrows; Figure 8A), SPC (arrowheads; Figure 8A; insets), SC (striped arrows; Figure 8A), and their cytoplasmic protrusions surrounding SPT (dotted arrows; Figure 8A).st-HFD produced a marked decrease of signal intensity in SC and GC, as compared to the control (p < 0.05; Figure 8B).
Finally, VANGL2 is a member of the Planar Cell Polarity family, factors that regulate the spatial and temporal expression of actin-regulatory proteins and the polymerization of microtubules at the apical ectoplasmic specialization (ES) and SC-SC and SC-SPT interface levels (63,64).In the control testis, VANGL2 localized in SPC (arrowheads; Figure 8C), in the SC cytoplasm (striped arrows; Figure 8C; insets), and their protrusions surrounding the SPT/SPZ heads (dotted arrows; Figure 8C).In the st-HFD-treated group, although VANGL2 localized in the above-mentioned cell types (Figure 8C), a weaker immunofluorescent signal was observed (p < 0.01; Figure 8D).
To confirm these data, we performed double immunolabeling on SIRT1 and NRF2 in the two groups.In the control testis, SIRT1 possessed a nuclear localization, especially in SPG (arrows; Figure 9J), SPC (arrowheads; Figure 9J), and SPT (dotted arrow; Figure 9J and insets).On the contrary, although it was present in the same cells, NRF2 sub-localization was cytoplasmic (Figure 9J).In the testis of st-HFD rats, the intensity of both signals was weaker (p < 0.01; Figures 9K, L), particularly in the SPG nucleus for SIRT1 (arrows; Figure 9J) and in SPC cytoplasm for NRF2 (arrowheads; Figure 9J).
Effect of st-HFD on inflammation
To assess whether a st-HFD induced testicular inflammation, several markers, namely NF-kB (Figures 10A, B), b-catenin (b-CAT; Figures 10A, C), TNFa (Figures 10A, D), IL-6 (Figures 10A, E), and IL-1RA (Figures 10A, F) were used.Interestingly, there were no differences between st-HFD and control for any of the selected markers.
Discussion
Proper male and female reproductive activity are crucial for the health and survival of the species.This is accomplished by the production and differentiation of good quality gametes that, as for the male counterpart, are based on SPZ with the ability to cross the female genital tract, perform an accurate acrosome reaction, and contribute with an undamaged DNA for fertilization.Such events depend on an extremely intricate and specialized progression, which involves the proliferation (both mitotic and meiotic) of SPG into round SPT and their differentiation into SPZ, with also the contribution, for several aspects, of the somatic Sertoli and Leydig cells.Conversely, the decrease in sperm quality is a worldwide phenomenon, originating from a plethora of factors: genetic, environmental, and behavioral.Among the latter, dietary habits, with the spread of the so-called "Western diet" (characterized by being hypercaloric and nutritionally poor) is one of the most responsible, as a clear, multifunctional association between overweight/obesity and male sub-infertility has been extensively demonstrated (70)(71)(72).Indeed, many papers showed a positive correlation, in human and experimental rodent models fed with a long-term HFD, with increasing BMI and the worsening of several aspects related to fertility, as hormonal status (especially T level), sperm count, and motility, as well as the increased rate of oxidative stress and inflammation, increasing the risk of oligozoospermia and azoospermia (73,74).This work, with the use of a st-HFD rat model instead of the most usual mice/rats HFD-fed for a prolonged period, takes a different view, aimed to investigate the impact of overweight on testicular activity, since this condition represents the initial stage of the obesogenic process and may assess the status of affected people and direct them to a more correct diet (or other intervention strategies) in an attempt to mitigate its effects.
st-HFD alters testicular steroidogenesis and spermatogenesis
As expected, we found that the steroidogenesis was compromised in the testis of st-HFD rats.Herein, serum T levels were decreased significantly in the HFD group by about 28%, and our data agree with those by Migliaccio et al. (38), which evidenced a reduction in serum T levels and testicular androgen receptors in rats fed with a st-HFD (for 6 weeks).Nevertheless, the reduction in T levels was evidently less pronounced than that observed in rats fed with HFD for 12 (about 400%) (75), or 20 (about 180%) (35) weeks.On the contrary, we found no difference in ARO protein levels as compared to the control.Of note, a previous paper showed that 16 weeks of HFD induced an increase in serum E 2 levels and testicular ARO expression (76) and, considering that this enzyme converts T into E 2 , and that decreased T/E 2 ratio has been related to impaired spermatogenesis (56, 77-79), we highlighted that the disturbance of the hormonal milieu, induced by st-HFD, may be not so severe as that produced by a long term HFD.In addition, reduced T levels, together with the imbalance of oxidative status (42) are among the main causes of the negative impact induced by st-HFD on rat testis.Further, oxidative stress may also be one of the causes inducing LC apoptosis, exacerbating the reduced T bioavailability and, consequently, increasing the number of apoptotic GC.However, the apoptotic rate of testicular cells observed here was less pronounced as compared to that observed in the testis of HFD administered for a longer time (33-36, 80, 81), just confirming that an overweight-like condition provokes less detrimental effect as compared to that of obesity on testicular activity.
Our results showed that st-HFD impacts spermatogenic progression.While the histological organization was similar to controls, a reduced tubular diameter and epithelium thickness were observed.In addition, for the first time, we found significantly lower expression levels of SYCP3, an essential structural component of the synaptonemal complex, and PRM2, a protein associated with histone replacement in haploid cells during spermiogenesis (82).Vice versa, higher levels of PCNA, a nuclear antigen of cell proliferation, and p-H3, a histone protein crucial for chromatin condensation during mitosis/meiosis (83), were detected, together with a higher % of PCNA-positive SPG and I SPC.This last point is of interest, since our data are contrasting with that reported in other papers, in which a reduced number of PCNApositive cells were observed in the seminiferous tubules of rats HFD-fed for 8 (36), 12 (84, 85), 18 (34), and 20 (35) weeks.Therefore, a st-HFD appeared to have a major negative effect on meiotic and post-meiotic events, rather than the previous ones.This data was partially supported by the fact that no differences in the frequency of stages characterizing rat seminiferous cycle were observed.This is in contrast with the paper by Komnions and colleagues (86), whose data demonstrated that a long-term HFD altered this value in mice; however, there were slight alterations in the phases of acrosome biogenesis.Further studies are required to clarify the underlying molecular aspects and the impact of a st-HFD on sperm parameters and physiology since proper acrosome formation is fundamental for successful fertilization (87).
st-HFD alters testicular mitochondrial dynamics via SIRT1 pathway
It is known that self-renewing and proliferating SPG use predominantly glycolysis, while in SPC and SPT, energy is prevalently produced through mitochondrial respiration, for this, fully functional mitochondria are required to complete a successful meiosis (43).Therefore, the altered progression of meiosis in st-HFD testis, as demonstrated by lower SYCP3 and PRM2 levels, could be the result of mitochondria damage, while the increased expression of PCNA and p-H3 in SPG and I SPC may be a compensatory response to the impaired maturation of GC.Bearing in mind the interesting data obtained by Migliaccio et al. (88), reporting that a st-HFD modifies mitochondrial fusion/ fission processes in rat liver, we assessed whether the altered steroidogenesis/spermatogenesis in our animal model could also be induced by a consequence in mitochondrial dynamic changes.In particular, we analyzed several proteins involved in three pivotal mitochondrial processes: fusion (that promotes the maintenance of a homogeneous mitochondrial population that can tolerate higher levels of mitochondrial DNA mutations), fission (the division of a mitochondrion into two smaller mitochondria), and biogenesis (89).Our hypothesis on the involvement of mitochondrial damage in impaired spermatogenesis/steroidogenesis is confirmed by a decrease in MFN2 and OPA1 (fusion markers), DRP1 (fission marker), PGC-1a, NRF1, and TFAM (biogenesis marker) protein levels.
In this complex scenario, it should also be considered the multifaceted role played by SIRT1, a NAD + -dependent deacetylase, for several reasons (90).First, it has a well-recognized role in spermatogenesis, in particular to produce sex hormones by the hypothalamus-pituitary-testis axis (91) and for meiotic and post-meiotic progression (92).Second, SIRT1 is a ROS "sensor", regulating, in oxidative stress conditions, the expression of several redox-related factors, such as FOXOs and NF-kB (90).Third, SIRT1 regulates mitochondrial function and energetic metabolism activating PGC-1a through deacetylation and mediating the induction of several components of the ROS detoxifying system (93).Fourth, testicular SIRT1 downregulation has previously been associated with the insurgence of an oxidative stress status (94) and in HFD-fed mice (53,95).In view of these considerations, supporting earlier reports, we hypothesize that the effect of a st-HFD on impaired spermatogenesis may be also due to the downregulation of SIRT1 expression/activity and, consequently, of the downstream pathways, including those regulating mitochondrial dynamics.
st-HFD alters BTB integrity via NRF2/ MAPKs pathways
BTB integrity is sensitive to stressful conditions, such as survival factor depletion and oxidative stress, as reported in several papers (96, 97).BTB is a distinctive structure of the testis, dividing the seminiferous epithelium into two compartments: the basal, where SPG and preleptotene SPC reside, and the apical one, which contains all the other cell types.It is composed of several cell junctions, located between adjacent SC, and particular cytoskeletonbased structures (the ES and the tubulobulbar complex), which connect SC to SPT.The BTB is an extremely dynamic structure, which, at stages IX-XI of the rat seminiferous epithelial cycle, is "disrupted" and then "reassembled" to permit the transit of preleptotene/leptotene SPC.This action is mediated by the interplay of various mechanisms that generally regulate fluctuation in the expression, localization, activation, and interactions of structural, scaffolding, and signaling proteins (61).Indeed, all the BTB components work harmoniously through continuous cycles of phosphorylation/de-phosphorylation, endocytosis of membrane proteins, and their recycling to guarantee the accurate moving of GC, and to preserve the immune-privileged microenvironment.
Herein, we confirmed that in the testis of st-HFD-fed rats, the protein levels of ZO-1, OCN, and CX43 were reduced (34).However, to our knowledge, this is the first report showing that a st-HFD affects testicular levels of N-CAD and VANGL2, proteins found at basal and apical ES, respectively, as well as the activation of Src and FAK.
In particular, FAK is a central kinase regulator of BTB dynamics, since its phosphorylation, by Src, at tyrosines 397 and 407, allows it to interact with many other components, including OCN, ZO-1, and Src itself.Once activated, FAK regulates the transit of GC through the seminiferous epithelium, especially maintaining the integrity of the apical ES and SPT adhesion during spermiogenesis until spermiation (98).Thus, as previously observed by other authors in HFD-fed mice for 10 (99), and 16 (100) weeks, we found that also a st-HFD can produce perturbations in BTB components, highlighting that its stability is fundamental for a correct spermatogenesis.However, as a limitation of this study, these are indirect data, and an in vivo BTB integrity assay would offer direct evidence, solidifying the claim.
st-HFD alters testicular activity via NRF2/MAPKs pathways
Emerging evidence demonstrated that the disturbance of BTB integrity may be due to ROS overproduction, by the downregulation of NRF2 (101) and activation of the MAPKs pathways (102, 103).Worth remembering, in physiological condition, NRF2 levels are maintained low via the repressive action of the protein KEAP1 while, in an oxidative stress environment, NRF2 is released by KEAP1, allowing its translocation into the nucleus, and activating the expression of antioxidant enzymes, including HO-1 and SOD.As for the MAPKs pathways, the increased activity of p38, JNK, and ERK 1/2 leads to OCN ubiquitination and degradation, as well as endocytosis of junction proteins, including N-CAD and CX43 (104)(105)(106).
In addition, it has also been reported that p38/JNK work together to activate the mitochondrial apoptotic pathway, via the stimulated expression of pro-apoptotic genes, such as cytochrome c and Caspase-3 (107, 108).Finally, apart from its well-known contribution to cell proliferation, numerous studies revealed that ERK 1/2 is also involved in apoptosis ROS-triggered (109)(110)(111)(112). Consistently, our results showed that also a st-HFD induced the inhibition of the NRF2 pathway, as well as the phosphorylation, and thus the activation, of testicular p38, JNK, and ERK 1/2.These results were positively associated with the oxidative stress status and the enhanced apoptosis, while they were negatively correlated with the levels of structural proteins composing the BTB.The combined data suggest that BTB damage and apoptosis may be mediated by the inhibition of NRF2 and the activation of p38, JNK, and ERK 1/2 MAPK pathways, in st-HFD-fed rat testis, as already demonstrated in testicular tissues of type-1 diabetic or obese rodents (99, 113-116).
st-HFD does not induce testicular inflammation
Finally, for a broader picture of the effect of st-HFD on rat testis, the last analyzed parameter was the protein level of the proinflammatory markers NF-kB, b-CAT, TNFa, IL-6, and IL-1RA.However, no differences between st-HFD-treated rats and controls were found, and this point is particularly interesting, since one of the principal manifestations that are evidenced in obesity is the systemic inflammation, that produces altered testicular activity and sperm quality in men (114) and in rodents HFD-fed for a prolonged period (34,84,(117)(118)(119).Thus, although a st-HFD can lead to dysfunction in testicular physiology, the lack of inflammation may be the sign of a less severe influence of overweight on fertility, suggesting that in overweight men there are still possibilities of intervention strategies (restricted diet, exercise, drugs, and others) that may effectively ameliorate testicular activity.
Conclusions
This study is one of the few to highlight the effects of a st-HFD on rat testicular activity.We demonstrated that disturbance in the hormonal milieu and the increased oxidative stress enhanced LC and GC apoptosis, reduced meiotic progression, and altered the integrity of BTB.These effects may be related to altered mitochondrial dynamics, and also to dysregulation of the SIRT1/ NRF2/MAPKs pathways.However, we highlighted the absence of a claimed inflammation status, as well as the less % of TUNELpositive cells, the increased % of PCNA-positive cells and no changes in the ARO protein level, as compared to literature papers in which a longer HFD was employed.The combined data led us to confirm that an overweight condition provoked less intense effects than obesity; however, as a limitation of this study, we lack a direct comparison with a long-term HFD, leading us to not completely exclude that these differences could be related to factors other than diet duration.In any case, this report encourages further studies not only to confirm this aspect but also on the development of different strategies to be used in preventing/ mitigating the still not-so-severe effects of overweight on male fertility.The author(s) declared that they were an editorial board member of Frontiers, at the time of submission.This had no impact on the peer review process and the final decision.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers.Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
10
FIGURE 10 Inflammation markers analysis of control and st-HFD fed rat testis.(A) WB analysis of testicular NF-kB, b-Catenin, TNFa, IL-6, and IL-1RA.(B-F) Histograms showing NF-kB, b-Catenin, TNFa, IL-6, and IL-1RA relative protein levels.All the values are expressed as means ± SEM from 5 animals in each group.
TABLE 1
Effect of st-HFD on testicular morphometric parameters.
Evaluation of testicular morphometric parameters of control and st-HFD fed rat testis.All the values are expressed as means ± SEM from 5 animals in each group.*p < 0.05; **p < 0.01. | 7,989.8 | 2023-10-27T00:00:00.000 | [
"Biology",
"Environmental Science",
"Medicine"
] |
Helical inchworming: a novel translocation mechanism for a ring ATPase
Ring ATPases perform a variety of tasks in the cell. Their function involves complex communication and coordination among the often identical subunits. Translocases in this group are of particular interest as they involve both chemical and mechanical actions in their operation. We study the DNA packaging motor of bacteriophage φ29, and using single-molecule optical tweezers and single-particle cryo-electron microscopy, have discovered a novel translocation mechanism for a molecular motor.
Multimeric ring ATPases encompass a wide family of proteins whose members carry out a plethora of different tasks in the cell (Snider et al. 2008). Their study poses interesting questions about how the individual subunits of these systems communicate and coordinate with each other throughout their mechanochemical cycle. Moreover, because of their common phylogeny, these questions and their answers are likely to be applicable across the group as a whole (Erzberger and Berger 2006). One such enzyme is the DNA packaging motor of bacteriophage φ29, a ring ATPase that displays impressive coordination among its subunits and which we have been studying using advanced single molecule biophysical methods such as optical tweezers. Moreover, the parallel use of advanced single-particle cryo-electron microscopy has permitted the collection of high-quality, substrate-engaged maps of the φ29 ATPase without the imposition of symmetry. We have combined the information gleaned from these two approaches to formulate a novel translocation mechanism for this molecular motor.
The DNA packaging motor of bacteriophage φ29 is a homopentameric ring ATPase that uses the energy of ATP hydrolysis to package its genome during viral assembly. Using optical tweezers, we have studied its translocation and have come to understand its mechanochemical cycle comprising a "dwell-burst" mechanism in which all subunits exchange ATP during a "dwell" or idle phase and a "burst" phase during which the ATPs of all five subunits are sequentially hydrolyzed to translocate 10 bp of DNA in four steps ). We showed previously that the 10 bp burst is made of four steps of 2.5 bp each, with the remaining subunit and that one of the five subunits does not perform a mechanical task but rather a regulatory one (Chistol et al. 2012). Since 10 bp is very close to the periodicity of dsDNA (10.4 bp), we wondered whether the size of the burst is determined by the periodicity of the substrate being internalized in the capsid. How might the motor adapt to a polymer of differing helical periodicity? To this end, we tested the ability of the motor to package dsRNA and DNA:RNA hybrids which, surprisingly, were tolerated by the motor. Our results show that the amount of substrate translocated during the burst on these alternative substrates is * Carlos Bustamante<EMAIL_ADDRESS>reduced to match the substrates' shorter helical periodicities (Castillo et al. 2021). Significantly, we find that the motor reduces its burst size with dsRNA and DNA:RNA hybrids by conserving the step size it uses for dsDNA during the first three steps and by shortening the fourth, rather than evenly reducing the size of all four. In a parallel development, our collaborators solved the structure of the substrate-bound, ATP-full form of the motor using cryo-EM and found that the motor adopts a lock-washer structure that follows one strand of the DNA (the tracking strand) and spans one period (Woodson et al. 2021). What is then the mechanism of translocation of the bacteriophage φ29 packaging motor? Combining these structural results with the biophysical ones, we note that, if the lock-washer shape were to be maintained throughout the translocation phase or burst, the motor would lose grip of its substrate after just one of its four steps, as the helix of the motor would become unaligned with the helix of the substrate. This situation would not permit the motor to generate the 60 pN of force against which it can translocate in our single molecule assays (Chemla et al. 2005). Hence, we propose that the motor cycles between the lock-washer architecture and a planar one, which allows constant contact with the DNA, in a translocation mechanism that we term the helical inchworm model (Fig. 1a). Here we first describe the operation of the motor when packaging its normal substrate, dsDNA. At the beginning of the dwell phase, the motor is saturated with ADP and has adopted a planar configuration. In this state, only the regulatory or special subunit contacts a DNA phosphate, a contact that has both load-bearing and regulatory functions. As each subunit successively exchanges its ADP for ATP during the dwell, the ring opens in a step-wise manner until it is fully bound to ATP, attaining a lock-washer configuration. During this "opening" process, the subunits contact successive phosphates in the tracking strand of the dsDNA, conforming its lock-washer structure to the periodicity of the substrate. At this point, the motor has the strongest grip on the substrate as its subunits make electrostatic contacts with phosphates of the double helix. To start the burst, the special subunit Fig. 1 Mechanisms for lock-washer ring ATPase translocases. Shown above are cartoons of translocation mechanisms for a pentameric DNA packaging motor. The substrate is shown as a spiral of blue spheres, representing the phosphates of one strand of the DNA. The motor is represented by an assembly of larger spheres, colored by nucleotide state (green/yellow/black for ATP/ADP/apo, respectively) and the subunit connectivity is depicted by cylinders. The capsid is shown in gray above the motors, and the direction of packaging is towards the capsid. a In the helical inchworm mechanism, the motor first exchanges ADP for ATP while the ring opens to span one pitch of the DNA (D1-D6). Then, hydrolysis in the special subunit (marked with an S) causes the hydrolysis cascade (B1-B2), translocating 2.5 bp of DNA in four steps (B2-B6). A phosphate of the DNA is colored purple and dotted lines 2.5 bp apart are drawn as a guide. b In the hand-over-hand mechanism, we start from a fully ATP-bound motor (T). Hydrolysis of the uppermost subunit translocates 2 bp of DNA (D). Subsequent nucleotide exchange of this subunit causes it to relocate to the bottom of the ring (A, T). Now the cycle restarts, as the current state can be related to the original one via a rotation hydrolyzes its ATP first, signaling the mechanical subunits to do the same in a sequential and ordinal fashion. In this process, phosphate release causes translocation via ring closing. At the end of this process, the motor is again in a planar state and the dwell can begin again. This model is consistent with the previous observations that identify the DNA phosphates as the moiety upon which the motor grips the substrate ), and the higher grip of the motor when ATP bound as seen in other viral translocases (Ordyan et al. 2018).
On the other substrates, the motor has more difficulty to conform its lock-washer structure to their shorter helical periods. Moreover, this shorter distance means that, during ring opening, the last subunit grabs its phosphate in the substrate before it can completely open. This smaller cocking of the last subunit eventually results in a fourth translocation step during the burst that is smaller than those of the other three other subunits (which are similar to those made by the motor on dsDNA, as is experimentally observed). This model provides a mechanism by which the motor can "measure" and adapt its burst to the size of its substrate's periodicity.
The presiding model for most of the other ring ATPase translocases that have been also shown to adopt lock-washer structures is the "hand over hand" mechanism. It differs from the helical inchworm model in that, instead of cycling between planar and helical architectures, the motor instead maintains its lock-washer architecture and subunits "jump" the gap in the ring as it translocates its substrate (Fig. 1b). These hand-over-hand motors have been proposed for ring ATPases that operate on disordered substrates, such as polypeptides or ssDNA on which they impose their helical geometry during translocation (Gao et al. 2019;de la Peña et al. 2018). The φ29 DNA packaging motor, however, must package a substrate possessing a pre-existing helical structure and it must be capable to adapt to the shortened helical periods of DNA:RNA hybrid and dsRNA. We liken the difference to that of a person climbing a rope versus a ladder. In the former case, one must deform the rope in order to better grip it with one's hands and legs, while in the latter the grip points are predetermined by the rungs of the ladder. In the hand-over-hand mechanism, the helical structure of the motor imposes helicity onto the substrate in order to maintain grip on it, while in the helical inchworm mechanism, the motor instead alters its operation to fit the pre-existing structure of the substrate, as a climber would do with a ladder possessing closer or more separated rungs. The rope/ladder metaphor also explains the higher forces (60 pN) that the φ29 translocase can exert compared to hand-overhand protein translocases (15 pN) (Maillard et al. 2011) in terms of the relative grip strength of a climber that uses a ladder instead of a rope.
Declarations
Ethics approval This article does not contain experimentation with human or animal participants.
Conflict of interest The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/. | 2,353.4 | 2021-11-24T00:00:00.000 | [
"Biology",
"Chemistry",
"Physics"
] |
Four New Dicaffeoylquinic Acid Derivatives from Glasswort (Salicornia herbacea L.) and Their Antioxidative Activity
Four new dicaffeoylquinic acid derivatives and two known 3-caffeoylquinic acid derivatives were isolated from methanol extracts using the aerial parts of Salicornia herbacea. The four new dicaffeoylquinic acid derivatives were established as 3-caffeoyl-5-dihydrocaffeoylquinic acid, 3-caffeoyl-5-dihydrocaffeoylquinic acid methyl ester, 3-caffeoyl-4-dihydrocaffeoylquinic acid methyl ester, and 3,5-di-dihydrocaffeoylquinic acid methyl ester. Their chemical structures were determined by nuclear magnetic resonance and electrospray ionization-mass spectroscopy (LC-ESI-MS). In addition, the presence of dicaffeoylquinic acid derivatives in this plant was reconfirmed by LC-ESI-MS/MS analysis. The isolated compounds strongly scavenged 1,1-diphenyl-2-picrylhydrazyl radicals and inhibited cholesteryl ester hydroperoxide formation during rat blood plasma oxidation induced by copper ions. These results indicate that the caffeoylquinic acid derivatives may partially contribute to the antioxidative effect of S. herbacea.
Introduction
Excessive reactive oxygen species cause oxidative damage in the body and are often the result of various human diseases including atherosclerosis, cancer, and aging [1]. Many studies have indicated that intake of foods such as vegetables and fruits containing plenty of antioxidants can reduce the risk of various diseases such as cardiovascular disease and cancer [2]. Therefore, the clinical importance of antioxidant-rich foods has received considerable attention [3,4].
Halophytes, which are grown in a saline environment, are regarded as a potentially useful medicinal and food source [5,6]. Halophytes are constantly exposed to salt-triggered oxidative stress. Consequently, these plants synthesize and accumulate various secondary metabolites including antioxidative phenolics and flavonoids with multiple biochemical defensive capabilities to maintain ion homeostasis and protect their cell functions in response to saline stress [7]. The compounds act as a The chemical shifts of C-3 and C-5 overlapped. at δ 6.67 (H-5"), 6.66 (H-2"), and 6.54 (H-6") and two methylene protons at δ 2.79 (H-7") and 2.60 (H-8") were observed (Table 1). These results suggested that 3 consists of dihydrocaffeic acid, caffeic acid, and quinic acid. From the 13 C-NMR spectrum, the results were further supported by the presence of 25 carbon signals for dicaffeoylquinic acid including three carboxylic carbon signals at δ 175.9 (C-7), 174.2 (C-9"), and 169.0 (C-9 ) ( Table 2). The proton signals of quinic acid including two methylenes at δ 2.11-2.27 (H-2, 6) and three oxygenated methines at δ 5.29 (H-5), 3.88 (H-4), and 5.37 (H-3) were detected in the 1 H-NMR spectrum. The H-5 at δ 5.29 (1H, m) was downfield shifted by 1.16 ppm when compared to the 1 H-NMR spectrum of 1, suggesting that two of the three oxygenated methine groups in quinic acid were conjugated with dihydrocaffeic acid and caffeic acid. The quinic acid moiety was assigned based on their multiplicity and coupling patterns in the 1 H NMR spectrum and the proton-proton correlations in the correlation spectroscopy ( 1 H-1 H COSY) spectrum. In the Overhauser effect (NOE) experiments for 3, the signals for H-2 at δ 2.27 and 2.14 and H-4 at δ 3.88 were enhanced by irradiation of H-3 at δ 5.37. The signal for H-6 at δ 2.07 was also enhanced by irradiation of H-5 at δ 5.29. In addition, the signals for H-3 at δ 5.37, H-2 at δ 2.27, H-5 at δ 5.29, and H-6 at δ 2.11 were enhanced by irradiation of H-4 at δ 3.88. These data indicated that H-2 at δ 2.27, H-4 at δ 3.88, H-5 at δ 5.29, and H-6 at δ 2.07 were axial positions while H-2 at δ 2.14, H-3 at δ 5.37, and H-6 at 2.11 were equatorial positions. The connectivity of 3 was further confirmed by heteronuclear single quantum correlation (HSQC) and heteronuclear multiple-bond correlation (HMBC) experiments. The HMBC correlations (arrows) of δ 5.37 (H-3) to δ 169.0 (C-9 ) and δ 5.29 (H-5) and δ 174.2 (C-9") indicated that caffeic acid and dihydrocaffeic acid was esterified respectively with the C-3 and C-5 of quinic acid ( Figure 1). Consequently, the structure of 3 was unambiguously determined to be 3-caffeoyl-5-dihydrocaffeoylquinic acid, which is a new compound ( Figure 1).
The molecular formula of 4 was determined to be C 26 H 28 O 12 (MW 532) by negative HRESI-MS data (m/z 531.1500 [M − H] -). The 1 H-NMR spectrum of 4 was closely related to that of 3, except for a methoxyl group (δ 3.72) ( Table 1). These results were also supported by the presence of 26 carbon signals assignable to dicaffeoylquinic acid coupled with a methoxyl group including three carboxylic carbons at δ 175.9 (C-7), 174.0 (C-9"), and 169.0 (C-9 ) and a methoxyl carbon at δ 53.1 (-OCH 3 ) detected in the 13 C-NMR spectrum ( Table 2). Based on the spectroscopic data from MS and 1 H-NMR, 4 was proposed to be 3-caffeoyl-5-dihydrocaffeoylquinic acid methyl ester. The quinic acid moiety was assigned by 1 H-1 H COSY and NOE experiments, although the axial/equatorial features of H-6 at δ 2.11 could not be distinguished because the proton signals of H-6ax and H-6eq were overlapped. To the best of our knowledge, this compound has not been previously reported in nature. Therefore, the actual structure of 4 was determined by HSQC, 1 H-1 H COSY, and HMBC experiments. From the results of 2D-NMR spectra, 4 was determined to have the same structure as 3 except for the presence of a methoxyl group. In particular, a cross peak (arrow) between δ 3.72 (-OCH 3 ) and δ 175.9 (C-7) was detected in the HMBC spectrum (Figure 1), indicating that the methyl group was esterified with the C-7 of quinic acid. Therefore, compound 4 was determined to be 3-caffeoyl-5-dihydrocaffeoylquinic acid methyl ester ( Figure 1).
The molecular formula of 5 was determined to be C 26 H 28 O 12 (MW 532) by negative HRESI-MS data (m/z 531.1507 [M − H] -). The 1 H-and 13 C NMR spectra of 5 were closely related to those of 4, except for different chemical shifts for the quinic acid moiety (Tables 1 and 2). In particular, the H-4 at δ 5.02 (1H, dd, J = 7.5, 3.5 Hz) was shifted downfield by 1.5 ppm and the H-5 at δ 4.26 (1H, m) was shifted upfield by 0.99 ppm when compared to the 1 H NMR spectrum of 4, suggesting that dihydrocaffeic acid is attached to the C-4 of 3-caffeoylquinic acid methyl ester. The contiguous protonated carbons (C-2-C-6) of the quinic acid moiety were assigned based on their proton-proton correlations detected in the 1 H-1 H COSY spectrum. In particular, a cross peak (arrow) between δ 5.02 (H-4) and δ 174.2 (C-9") was observed in the HMBC spectrum (Figure 1), indicating that the dihydrocaffeic acid was esterified with the C-4 of quinic acid. Therefore, compound 5 was determined to be 3-caffeoyl-4-dihydrocaffeoylquinic acid methyl ester, which is also a new compound ( Figure 1). a The chemical shifts of C-3 and C-5 overlapped.
The molecular formula of 6 was determined to be C 26 H 30 O 12 (MW 534) by negative HRESI-MS data (m/z 533.1655 [M − H] -). The 1 H-and 13 C-NMR spectra of 6 were closely related to those of 4. However, the olefinic carbon proton signals for the caffeic acid in 4 were not observed. When the 1 H-and 13 C-NMR spectra of 6 were compared to those of 4, a partial structure assignable to dihydrocaffeoylquinic acid was confirmed, similar to 4, and the presence of a dihydrocaffeic acid other than caffeic acid, which was a partial structure of 4, was suggested. That is, the dihydrocaffeic acid was assigned by the presence of proton signals including tri-substituted aromatic ring protons at δ 6.67 (H-5 ), 6.66 (H-2 ), and 6.55 (H-6 ) and 2 methylene protons at δ 2.79 (H-7 ) and 2.60 (H-8 ) in the 1 H-NMR spectrum ( Table 1). The 1 H-NMR data was also supported by the 13 C-NMR spectrum ( Table 2). From the MS and 1D-NMR spectra, 6 was proposed to be di-dihydrocaffeoylquinic acid methyl ester. In particular, correlations (arrow) from δ 5.18 (H-3, 5) to δ 174.6 (C-9 ) and 173.9 (C-9") were observed in the HMBC spectrum ( Figure 1), indicating that two dihydrocaffeic acids are esterified, respectively, with the C-3 and C-5 of quinic acid methyl ester. Therefore, compound 6 was determined to be 3,5-di-dihydrocaffeoylquinic acid methyl ester, a new compound ( Figure 1).
Qualification and Quantitation of 3-6 in the Aerial Parts of S. herbacae
In this study, dicaffeoylquinic acid derivatives 3-6 were isolated from the aerial parts of S. herbacae. Of these, compounds 4-6 were in methyl-esterified forms, suggesting that these compounds are esterified with MeOH under acidic conditions during extraction and purification. Therefore, to confirm the presence of dicaffeoylquinic acid derivatives 3-6 including the methyl-esterified derivatives 4-6 as native compounds in S. herbacae, the EtOAc fraction obtained after the ethanol (EtOH) extraction of the aerial parts of S. herbacae was analyzed by selective multiple reaction monitoring (MRM) detection and high performance liquid chromatography/electrospray ionization tandem mass spectrometer (HPLC-ESI/MS). The dicaffeoylquinic acid derivatives isolated in this study were used as external standards. Compounds 3-6 were detected at t R 10.5, 12.3, 12.5, and 11.8 min on the MRM chromatogram ( Figure 2). These data were in agreement with the retention times of the compounds (3)(4)(5)(6). The compounds (3)(4)(5)(6) were also quantitated by selective MRM detection and MS/MS. The external calibration curve for each compound was linear (R 2 > 0.99) and the recovery rate ranged from 97.0% to 105.5%. Among the dicaffeoylquinic acid derivatives, 3 (75.6 ± 2.3 mg/100 g fresh wt.) was most abundantly found in the aerial part of S. herbacae. The other compounds, 4 (69.3 ± 1.4 µg/100 g fresh wt.), 5 (71.9 ± 1.9 µg/100 g fresh wt.), and 6 (171.9 ± 1.5 µg/100 g fresh wt.), were present in smaller amounts in the aerial part of this plant compared to 3. These results confirm that dicaffeoylquinic acids (3)(4)(5)(6) are unambiguously present in S. herbacae.
DPPH Radical-Scavenging Activity of the Isolated Compounds
The radical-scavenging activities of the isolated compounds (final concentration, 10 μM) were evaluated using the DPPH radical. As shown in Figure 3, dicaffeoylquinic acid derivatives 3-6, which contain two catechol groups in the partial structure, showed significantly higher DPPH radical-scavenging activity than the monocaffeoylquinic acid derivatives 1 and 2 as well as caffeic acid, which contains a catechol group. The radical-scavenging activities of dicaffeoylquinic acid derivatives 3-6 did not significantly differ (p < 0.05), regardless of structural differences like the presence or absence of the olefinic double bond or the binding position of the caffeic and dihydrocaffeic acids to quinic acid. It was reported previously that the catechol structure in phenolic compounds is an important factor for the radical-scavenging effect. These results indicate that the catechol group may be the main contributor to the radical-scavenging activities of the isolated compounds. This pattern agrees with similar results for the DPPH radical-scavenging activities of dicaffeoylquinic acid derivatives isolated from this plant reported in our previous study [19].
DPPH Radical-Scavenging Activity of the Isolated Compounds
The radical-scavenging activities of the isolated compounds (final concentration, 10 µM) were evaluated using the DPPH radical. As shown in Figure 3, dicaffeoylquinic acid derivatives 3-6, which contain two catechol groups in the partial structure, showed significantly higher DPPH radical-scavenging activity than the monocaffeoylquinic acid derivatives 1 and 2 as well as caffeic acid, which contains a catechol group. The radical-scavenging activities of dicaffeoylquinic acid derivatives 3-6 did not significantly differ (p < 0.05), regardless of structural differences like the presence or absence of the olefinic double bond or the binding position of the caffeic and dihydrocaffeic acids to quinic acid. It was reported previously that the catechol structure in phenolic compounds is an important factor for the radical-scavenging effect. These results indicate that the catechol group may be the main contributor to the radical-scavenging activities of the isolated compounds. This pattern agrees with similar results for the DPPH radical-scavenging activities of dicaffeoylquinic acid derivatives isolated from this plant reported in our previous study [19].
Inhibitory Effect of the Isolated Compounds on Copper Ion-Induced Rat Plasma Oxidation
Cholesteryl ester hydroperoxide (CE-OOH) produced by oxidation in healthy human plasma is present at a concentration of ca. 3 nM [21]. CE-OOH accumulates in atherosclerotic plaques with the progression of lesion development [22,23]. For this reason, the compound has been used as an index of lipid peroxidation to evaluate the inhibitory effect of antioxidants on lipids oxidized in blood plasma. Therefore, the antioxidant activity of the isolated compounds 1-6 (final concentration, 10 μM) in the copper ion induced-blood plasma oxidation system was examined by measuring the CE-OOH content. As shown in Figure 4, caffeoylquinic acid derivatives 1-6 considerably inhibited CE-OOH formation when compared to the control (no external addition of antioxidant). In particular, the dicaffeoylquinic acid derivatives 3-6 showed a relatively higher ability at inhibiting CE-OOH formation than monocaffeoylquinic acid derivatives 1 and 2 and caffeic acid. This pattern is in good agreement with the results from measuring DPPH radical scavenging.
Inhibitory Effect of the Isolated Compounds on Copper Ion-Induced Rat Plasma Oxidation
Cholesteryl ester hydroperoxide (CE-OOH) produced by oxidation in healthy human plasma is present at a concentration of ca. 3 nM [21]. CE-OOH accumulates in atherosclerotic plaques with the progression of lesion development [22,23]. For this reason, the compound has been used as an index of lipid peroxidation to evaluate the inhibitory effect of antioxidants on lipids oxidized in blood plasma. Therefore, the antioxidant activity of the isolated compounds 1-6 (final concentration, 10 µM) in the copper ion induced-blood plasma oxidation system was examined by measuring the CE-OOH content. As shown in Figure 4, caffeoylquinic acid derivatives 1-6 considerably inhibited CE-OOH formation when compared to the control (no external addition of antioxidant). In particular, the dicaffeoylquinic acid derivatives 3-6 showed a relatively higher ability at inhibiting CE-OOH formation than monocaffeoylquinic acid derivatives 1 and 2 and caffeic acid. This pattern is in good agreement with the results from measuring DPPH radical scavenging.
In this study, four new dicaffeoylquinic acid derivatives isolated from the EtOAc layer of S. herbacea were determined to be 3-caffeoyl-5-dihydroquinic acid, 3-caffeoyl-5-dihydrocaffeoylquinic acid methyl ester, 3-caffeoyl-4-dihydrocaffeoylquinic acid methyl ester, and 3,5-dihydrocaffeoylquinic acid methyl ester (Figure 1). In addition, two known compounds, 3-caffeoylquinic acid and 3-caffeoylquinic acid methyl ester, were isolated and identified. To the best of our knowledge, these compounds were identified here for the first time in this plant.
Caffeoylquinic acid derivatives including dicaffeoylquinic acid analogues have been reported to show various biological effects, including antioxidant [19,24], anticancer [25], and anti-inflammatory [26] activities. In this study, the results of the antioxidative evaluation indicated that the caffeoylquinic acid derivatives 1-6 significantly scavenged DPPH radicals and inhibited CE-OOH formation during rat blood plasma oxidation induced by copper ions. It is well known that the catechol group has high free radical-scavenging and metal-chelating effects [27,28]. In addition, five other dicaffeoylquinic acid derivatives from the same plant and their high antioxidative activities were reported in our previous study [19]. In this study, we found that DPPH radical-scavenging activities and metal-chelating effects of caffeoylquinic acid derivatives were proportionally correlated with the number of catechol group. Our present and previous observations indicate that the caffeoylquinic acid derivatives 1-6 containing a catechol group may act as excellent radical scavengers and metal-chelating agents. In addition, various (di)caffeoylquinic acid derivatives may be abundant in glasswort [29]. These results indicate that the high antioxidative activity of S. herbacea may be influenced by various caffeoylquinic acid derivatives.
Inhibitory Effect of the Isolated Compounds on Copper Ion-Induced Rat Plasma Oxidation
Cholesteryl ester hydroperoxide (CE-OOH) produced by oxidation in healthy human plasma is present at a concentration of ca. 3 nM [21]. CE-OOH accumulates in atherosclerotic plaques with the progression of lesion development [22,23]. For this reason, the compound has been used as an index of lipid peroxidation to evaluate the inhibitory effect of antioxidants on lipids oxidized in blood plasma. Therefore, the antioxidant activity of the isolated compounds 1-6 (final concentration, 10 μM) in the copper ion induced-blood plasma oxidation system was examined by measuring the CE-OOH content. As shown in Figure 4, caffeoylquinic acid derivatives 1-6 considerably inhibited CE-OOH formation when compared to the control (no external addition of antioxidant). In particular, the dicaffeoylquinic acid derivatives 3-6 showed a relatively higher ability at inhibiting CE-OOH formation than monocaffeoylquinic acid derivatives 1 and 2 and caffeic acid. This pattern is in good agreement with the results from measuring DPPH radical scavenging.
General Experimental Procedures
NMR was recorded on a unity INOVA 500 spectrometer (Varian, Walnut Creek, CA, USA). Mass spectra were acquired on a hybrid SYNAPT G2 (Waters, Cambridge, UK), which was equipped with an electrospray ionization source. Data acquisition took place over the mass range of m/z 50 to m/z 1200 for MS mode. The sample was introduced into the ESI source at a constant flow rate of 20 µL/min using an external syringe pump (Harvard 11Plus). Thin-layer chromatography (TLC) was carried out using silica gel TLC plates (silica gel 60 F254, 0.25 mm thickness, Darmstadt, Germany) and the fractions were visualized by UV and 1% cerium (IV) sulfate ethanol solution spray. Silica gel column (2.5 cm × 50 cm, Kieselgel 60, 70-230 mesh, Merck, Kenilworth, NJ, USA) and Sephadex LH-20 column (3.5 cm × 55 cm, 25-100 mesh, GE Healthcare Bio-Sciences AB, Uppsala, Sweden) were used for column chromatography. Fractions were purified by HPLC equipped with a Shim-pack Prep-ODS (H) Kit (5 µm, 20 mm × 250 mm; Shimadzu, Kyoto, Japan). The flow rate was 9.9 mL/min, and eluents were monitored at 254 nm.
Materials and Chemicals
Aerial parts of glasswort were collected in June from Younggwang County, located on the southwestern coast of Korea [19]. A voucher sample has been deposited in the warm-temperate forest arboretum located in Bogil Island, Chonnam National University. Solvents used for analyses were of HPLC grade and were purchased from Fisher Scientific Korea. Methanol-d 4 (CD 3 OD) was obtained from Merck. Trifluoroacetic acid, DPPH, caffeic acid, and chlorogenic acid were purchased from Sigma-Aldrich Chemical Co. (St. Louis, MO, USA). All other chemicals and reagents used in this study were of analytical grade.
Extraction and Partition
The MeOH extraction from glasswort and its partition has been reported in our previous study [19]. Briefly, glasswort (8 kg) was homogenized with MeOH (13 L, 2 times). After extraction at room temperature for 24 h, the mixture was filtered through No. 2 filter paper (Whatman International) and concentrated by vacuum evaporation at 38 • C. The MeOH extracts (417.3 g) were suspended in H 2 O (3 L) and partitioned with n-hexane (3 L, three times), chloroform (CHCl 3 , 3 L, three times), EtOAc (3 L, three times), and water-saturated n-butanol (3 L, three times). Each fraction was evaporated in vacuo at 38 • C.
HPLC ESI MS/MS Analysis of Four Dicaffeoylquinic Acid Derivatives Identified in S. herbacea
Fresh aerial components of S. herbacea (10 g) were homogenized in EtOH (150 mL). The mixture was filtered under vacuum through No. 2 filter paper (Whatman). The residue was homogenized in 80% EtOH (150 mL) and filtered through No. 2 filter paper. The EtOH and 80% EtOH solutions were combined and concentrated under a vacuum at 38 • C. The extracts were suspended in distilled water (100 mL), the pH was adjusted to 2.6 with 1.0 M HCl solution, and partitioning was performed with n-hexane and EtOAc (100 mL, three times). The EtOAc fraction was evaporated under a vacuum at 38 • C and dissolved in 100% MeOH (10 mL). The EtOAc fraction was analyzed using a LC-ESI/MS (Shimadzu). The isolated compounds (3)(4)(5)(6) were separated under the chosen HPLC conditions [column, MG III (C18, 3 µm, 3.0 mm × 100 mm) (Shiseido, Tokyo, Japan); column temperature, 35 • C; flow rate, 0.3 mL/min]. The sample was eluted using a gradient system of H 2 O (solvent A) to acetonitrile (solvent B) (both containing 0.1% formic acid), starting with 10% B for 1 min, increasing to 23% B for 2 min, holding at 23% A for 5 min, increasing to 33% A for 7.5 min, holding at 33% A for 18 min, increasing to 90% A for 18.5 min, and holding at 90% A for 23 min. The contents of 3-6 in S. herbacea were quantitatively analyzed by LC-ESI-MS/MS. Sample and standard solutions were prepared just before analysis. The calibration curves (n = 6) were constructed using compounds 3-6 (0.1-50 ng) isolated from this plant. Accuracy and reproducibility were evaluated using the standard spike method. External standards of 3-6 were added to aliquots of the aerial parts of S. herbacea at three concentrations to determine the precision. The quantification and quantitation of 3-6 in the aerial parts of S. herbacea were performed in triplicate.
Assay of DPPH Radical-Scavenging
The assay for purification of the antioxidative compounds was conducted by TLC using the method described by Takao et al. [30], with slight modifications. Briefly, all fractions obtained in the purification process were spotted on a silica gel TLC and developed using a mixture of n-BuOH/acetic acid/H 2 O = 4:1:1 (v/v/v). The developed TLC was sprayed with 200 µM DPPH free radical EtOH solution and the decolorized spots were considered to be antioxidative compounds.
The free radical-scavenging activities of the isolated compounds, with caffeic acid as a positive control, were also evaluated by ODS-HPLC analysis as in previous research but with slight modifications [31]. Briefly, an ethanol solution (50 µL) of each compound (final concentration, 10 µM) was mixed with DPPH radical ethanol solution (150 µL; final concentration, 100 µM). After standing for 20 min in the dark, the mixture was transferred to a HPLC system connected to a TSK-gel Octyl-80Ts column (5 µm, 4.6 mm × 25 cm; Tosoh, Tokyo, Japan). The elution was carried out with an isocratic system of acetonitrile/H 2 O = 60:40 (v/v). The flow rate was 1.0 mL/min and the remaining DPPH radical was monitored at 517 nm. The DPPH radical-scavenging activities of each sample were determined as the percentage decrease compared to the peak area of the DPPH radical in a blank sample.
Determination of the Inhibitory Effect of the Isolated Compounds against Copper Ion-Induced Rat Plasma Oxidation
The antioxidative activities of the isolated compounds were evaluated by measuring their inhibitory effects against CE-OOH formation in copper ion-induced oxidation of diluted rat blood plasma [19]. Sprague-Dawley rats (male, 6 weeks age, 180-200 g) (Samtako Bio Korea) were kept at 23 • C under a 12 h dark/light cycle and fasted for 12-15 h prior to blood collection. After anesthesia with diethyl ether, the abdomen wall was opened, and blood was collected from the abdominal aorta into heparinized tubes. Rat plasma was isolated by centrifugation (1500× g) at 4 • C for 20 min and used immediately for experiments or stored at -40 • C for no longer than 1 week. All experimental procedures were approved by the Institutional Animal Care and Use Committee of Chonnam National University (no. CNU IACUC-YB-R-2013-4). The plasma was diluted four-fold in PBS buffer (pH 7.4) and the diluted plasma was mixed with an EtOH solution (final 1%) of the isolated compounds (final concentration, 10 µM). The mixture was oxidized with the addition of 0.1 mL of CuSO 4 PBS solution (final concentration, 100 µM). After incubation at 37 • C for 7 h with continuous shaking, an aliquot (100 µL) was mixed with 3 mL of MeOH containing 2.5 mM 2,6-di-tert-butyl-4-methylphenol and partitioned with n-hexane (3 mL, 2 times). The upper layer (n-hexane) was concentrated under a vacuum and then dissolved with 100 µL of MeOH/CHCl 3 (95:5, v/v). The dissolved solution was transferred to a HPLC system (Shimadzu) equipped with a TSK-gel Octyl-80Ts column (Tosoh). Elution was performed with an isocratic system of MeOH/H 2 O (97:3, v/v). The flow rate was 1.0 mL/min, and the CE-OOH produced was monitored at 235 nm. The concentration of CE-OOH was calculated from a standard curve for cholesteryl linoleate hydroperoxide. Detailed procedures for preparation of the cholesteryl linoleate hydroperoxide standard have been published elsewhere [32].
Statistical Analysis
The data for the antioxidative activity of the isolated compounds were expressed as mean ± SD using the Statistical Package for Social Sciences 19.0 package programs (IBM, Armonk, NY, USA). Statistical differences were measured by one-way ANOVA followed by Duncan's multiple comparison test. Significant difference was set at p < 0.05.
Conclusions
Four new dicaffeoylquinic acid derivatives and two known compounds were isolated from the aerial parts of Salicornia herbacea. In addition, the presence of dicaffeoylquinic acid derivatives in this plant was reconfirmed by LC-ESI-MS/MS analysis. The caffeoylquinic acid derivatives 1-6 scavenged DPPH radicals and inhibited the CE-OOH formation in copper ions-induced rat blood plasma oxidation. S. herbacea can be viewed as a promising health-promoting vegetable and medical source because of the high antioxidative activity of the various caffeoylquinic acid derivatives found in the plant. | 5,777.8 | 2016-08-01T00:00:00.000 | [
"Chemistry",
"Agricultural And Food Sciences"
] |
Vector-like quark interpretation for the CKM unitarity violation, excess in Higgs signal strength, and bottom quark forward-backward asymmetry
Due to a recent more precise evaluation of Vud and Vus, the unitarity condition of the first row in the Cabibbo-Kobayashi-Maskawa (CKM) matrix: |Vud|2 +|Vus|2 +|Vub|2 = 0.99798 ± 0.00038 now stands at a deviation more than 4σ from unity. Furthermore, a mild excess in the overall Higgs signal strength appears at about 2σ above the standard model (SM) prediction, as well as the long-lasting discrepancy in the forward-backward asymmetry AFBb\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ {\mathcal{A}}_{\mathrm{FB}}^b $$\end{document} in Z→bb¯\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ Z\to b\overline{b} $$\end{document} at LEP. Motivated from the above three anomalies we investigate an extension of the SM with vector-like quarks (VLQs) associated with the down-quark sector, with the goal of alleviating the tension among these datasets. We perform global fits of the model under the constraints coming from the unitarity condition of the first row of the CKM matrix, the Z -pole observables AFBb\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ {\mathcal{A}}_{\mathrm{FB}}^b $$\end{document}, Rb and Γhad, Electro-Weak precision observables ∆S and ∆T , B-meson observables Bd0−B¯d0\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ {B}_d^0-{\overline{B}}_d^0 $$\end{document} mixing, B+→ π+ℓ+ℓ− and B0→ μ+μ− and direct searches for VLQs at the Large Hadron Collider (LHC). Our results suggest that adding VLQs to the SM provides better agreement than the SM.
Introduction
The Standard Model (SM) particle content includes three families of fermions under the identical representation of the gauge symmetries SU(3) c × SU(2) L × U(1) Y . Each fermion family includes a quark sector (up-type and down-type quarks) and a lepton sector (charged leptons and a neutrino). The well-known quark mixing in crossing between the families is an indispensable ingredient in flavor physics. One can rotate the interaction eigenbasis to the mass eigenbasis in the quark sector through a unitary transformation, and it generates nonzero flavor mixings across the families in the charged-current interactions with the W boson. The quark mixing for the three generations in the SM can be generally parameterized by the 3×3 Cabibbo-Kobayashi-Maskawa (CKM) matrix V SM CKM [1,2]. Since V SM CKM is composed of two unitary matrices, unitarity of the CKM matrix shall be maintained. The existence of additional quarks beyond the three SM families shall extend the CKM matrix to a larger dimension. In such a case, the unitarity of original 3 by 3 submatrix will no longer hold.
JHEP05(2020)117
The recent updated measurements and analyses of V ud and V us are briefly outlined as follows. The most precise determination of |V ud | is extracted from the superallowed 0 + −0 + nuclear β decay measurements [3,4] |V ud | 2 = 0.97147 (20) 1 where ∆ V R accounts for short-distance radiative correction. Recently, according to the dispersion relation study with experimental data of neutrino-proton scattering, the inner radiative correction with reduced hadronic uncertainties ∆ V R = 0.02467 (22) was reported in ref. [5]. It significantly modified the value of |V ud | = 0.97370 (14) [4]. On the other hand, one can use various kaon decay channels to independently extract the values of |V us | and |V us /V ud |. Based on the analysis of semileptonic Kl3 decays [8] and the comparison between the kaon and pion inclusive radiative decay rates K → µν(γ) and π → µν(γ) [9], the values of |V us | = 0.22333(60) and |V us /V ud | = 0.23130(50) are obtained in ref. [4]. As a result, the matrix-element squared of the first row of V SM CKM |V ud | 2 + |V us | 2 + |V ub | 2 = 0.99798 ± 0.00038 , (1.2) which deviates from the unitarity by more than 4σ [4,5]. 1 If this deviation is further confirmed, it may invoke additional quarks to extend the CKM matrix. 2 After the final piece of the SM, Higgs boson, has been discovered in 2012 [13,14], the precise measurements of its properties become more and more important. The SM can fully predict the signal strengths of this 125 GeV scalar boson so that deviations from the SM predictions can help us to trace the footprint of new physics beyond the SM. Recently, the average on the Higgs-signal strengths from both ATLAS and CMS Collaborations indicated an excess at the level of 1.5σ. 3 If one looks more closely into each individual signal strength channel, one would find that mild 1σ excesses appear in the majority of channels. After taking into account of all available data from the Higgs measurements, the average of the 125 GeV Higgs signal strengths was obtained [17] µ Higgs = 1.10 ± 0.05 . (1.3) One simple extension of the SM with an SU(2) doublet of vector-like quarks (VLQs) with hypercharge −5/6 can be introduced to account for the excess by reducing the bottom 1 Reduction in the extracted value of V ud is due to the reduction of uncertainty in ∆ V R , which is made possible by a dispersion-relation based formulation of the γ −W box contribution to the neutron and nuclear beta decays [6]. However, the value is to be taken cautiously before jumping to a conclusion, because one has to include properly the quasielastic contribution from one-nucleon knock-out as well as advanced correction from two-nucleon knock-out. On the other hand, a recent proposal to study ∆ V R on lattice can be found in ref. [7]. 2 Another explanation for this deviation involves new physics in the neutrino sector with lepton-flavor
JHEP05(2020)117
Yukawa coupling at about 6% from its SM value [17]. Since the h → bb mode takes up around 58% of the 125 GeV Higgs total decay width, the above extension can reduce the total Higgs width and universally raise the signal strengths by about 10% to fit the data. Finally, the measurement of the forward-backward asymmetry A b F B of the bottom quark at the Z 0 pole has exhibited a long-lasting −2.4σ deviation from the SM prediction [9]. Again, this anomaly can be reconciled by introducing an SU(2) doublet VLQs with hypercharge −5/6. The mixing between the isospin T 3 = 1/2 component of VLQs and the right-handed SM bottom quark with mixing angle sin θ R 0.2 can enhance the right-handed bottom quark coupling with Z boson. Meanwhile, the left-handed bottom quark coupling remains intact [17]. However, the mixing between VLQs and the SM bottom quark is under severe restrictions from other Z 0 -pole observables, for example, the Z hadronic decay width Γ had and the ratio of Z partial width into bb relative to the total hadronic width, R b , are both consistent with SM predictions. Earlier attempts in this direction can be found in refs. [18,19].
All the above three discrepancies can be explained with additional heavy quarks, which mix with the SM bottom quark. In order to guarantee the anomaly-free condition, one economical way is to introduce VLQs. The review of various types of VLQs can be found in ref. [20]. In this study, we need to modify both left-handed and right-handed downquark sectors in order to alleviate the above three anomalies. In general, both left-handed and right-handed mixing angles are generated and related to each other for each type of VLQs though one may be suppressed relative to another. It means that we need at least two types of VLQs to simultaneously explain these anomalies. We show that the minimal model requires coexistence of both doublet and singlet VLQs, B L,R and b L,R . This paper is organized as follows. In section 2, we first write down the general model and study the interactions between VLQs and SM particles, especially the modifications of couplings to W , Z, and h bosons. Then we boil down to the requirements of the minimal model. The various constraints from relevant experimental observables are discussed in section 3. In section 4, we perform the chi-square fitting and show numerical results, in particular we discuss the allowed parameter space that can explain all three anomalies. We summarize in section 5.
Standard model with extra vector-like quarks
In this work, a doublet and singlet of vector-like quarks (VLQs) are introduced: with hypercharges (Y /2) B L,R = −5/6 and (Y /2) b L,R = −1/3, respectively, under the SM U(1) Y symmetry. The upper component of the doublet and the singlets have the same quantum numbers as the SM down-type quarks, and thus they are allowed to mix with the SM down-type quarks if nontrivial Yukawa interactions exist among them. It was pointed out that the Yukawa interaction between B L and b R will induce a mixing between the JHEP05(2020)117 right-handed b R and b R , and so reduce the bottom Yukawa coupling. At the same time, it will increase the coupling of the Z boson to the right-handed b quark [17]. The reduction in the bottom Yukawa coupling gives rise to a decrease in the Higgs total decay width, and thus can help alleviate the overall Higgs signal-strength excess, while the increase in the Z coupling to the right-handed b quark can bring the prediction of the forward-backward asymmetry A b F B down to the experimental value. On the other hand, the mixing between b L and b L is suppressed due to the absence of Yukawa interaction between B R and b L , and so the modification of CKM matrix is negligible. However, the Higgs-induced Yukawa interaction between b L,R and the SM down quarks will give a larger left-handed mixing than the right-handed one. Thus, the non-negligible left-handed mixing can further modify the original 3 × 3 CKM matrix and the extra VLQs can extend the CKM matrix to 5 × 5 to restore the unitarity.
Yukawa couplings and fermion masses
The generalized interactions between VLQs, SM quarks, and the Higgs doublet are expressed as where U, D represent the SM up-and down-quarks with i, j = 1, 2, 3 as the flavor indices, and superscript 0 indicates flavor eigenstates, for which the SM Yukawa matrix y u,d have been diagonalized. Note the implicit sum over the repeated indices in the above equation. The dual of Higgs field H ≡ iτ 2 H * carries Y /2 = −1/2, where τ 2 is the Pauli matrix. After the electroweak symmetry breaking (EWSB), H = (0, v/ √ 2) T , the mass matrix of the down-type quarks becomes Since both MM † and M † M are symmetric matrices, they can be diagonalized as
JHEP05(2020)117
where the mass eigenstates are related to the flavor eigenstates via the unitary matrices V R,L . Similarly, for the up-type quarks the mass eigenstates are related to the flavor eigenstates by Since the VLQs do not mix with up-type quarks, the up-type quark mass matrix remains the same as in SM. Due to the discrepancies between the mass matrix and Higgs interaction matrix, the Higgs couplings of down-type quarks will be modified from the SM Yukawa couplings, The coupling for b L b R h can be extracted out from the matrix element (Y) 33 , for example. Since we only introduce the vector-like quarks that can mix with the bottom quarks, the Higgs couplings to the up-type quarks will stay the same as the SM ones.
Modifications to the W couplings with SM quarks
The charged-current interactions via the W boson with the SM quarks and vector-like quarks are where P L,R = 1∓γ 5 2 . We define the 5 × 5 CKM matrix as
JHEP05(2020)117
Since the VLQs do not modify the up-quark sector, we simply extend the 3 × 3 matrix W L in eq. (2.9) to a 5 × 5 matrix. The exact parameterization of V 5×5 CKM will be shown in appendix A.
We further parameterize the charged current interactions in the following simple form [21], where q includes all SM quarks and VLQs. A L ij and A R ij are summarized as follows where α = 1 to 3, β = 1 to 5, and (U 1 , where Q f (T 3f ) is the electric charge (third component of isospin) of quarks, the gauge coupling g Z = g 2 / cos θ w , x w = sin 2 θ w is the sine-square of the Weinberg angle θ w . Again, the Z boson couplings to the SM up-type quarks are exactly the same as in the SM and are not modified by VLQs. We further parameterize the Z boson couplings with SM down-type quarks and VLQs in the following simple form [21], where X L ij and X R ij are summarized below, JHEP05(2020)117
Minimal models
In this subsection, we would like to narrow down to the most relevant couplings to the experimental anomalies. First, we consider non-zero couplings g B 3 , g b 1 , while M 1,2 are at TeV scale. According to ref. [17], the tensions of Higgs signal strength and A b FB can be alleviated by the g B 3 coupling from the doublet VLQ. Then the CKM unitarity violation mainly due to the |V ud | is relevant to g b 1 from the singlet VLQ. Other parameters in eq. (2.2) are set to zero. It simplifies the down-type quark mass matrix and V L,R as Here we have taken the liberty that the first two generations of the SM down-type quark masses are set at zero. If the couplings g .
(2.17)
According to eq. (2.7), the coupling for (h/v)b L b R is given by .
(2.18)
This gives rise to a reduction factor in the Higgs Yukawa coupling by C hbb ≡ c R 34 / 1 + (∆ 2 /M 2 1 ), and thus the enhancement of Higgs signal strengths. The modification of the CKM matrix is indicated by eq. (2.9). The first row of first three elements of V 5×5 CKM violates unitarity as However, the unitarity for the first row of V 5×5 CKM can be restored with the other two elements If s L 15 ∼ s L 34 , we anticipate the contribution from V ub will be dominant.
JHEP05(2020)117
Finally, from eq. (2.13) the Zbb couplings are modified as Since s R 34 enhances (g b ) R , it alleviates the tension between A b F B observation and SM prediction.
Second, we include one more non-zero coupling g b 3 . Then the mass matrix and unitary transformations matrices are (2.23) Here we diagonalize MM † via a 4-step block diagonalization procedure. We have used rotation matrices with the order of R(θ 15 ), R(θ 35 ), R(θ 34 ), and R(θ 45 ) to block diagonalize MM † in each step and finally V L and V R can be approximated by eq. (2.22). The mass of the bottom quark which is the same as eq. (2.18). The first three elements in the first row of V 5×5 CKM violate unitarity as
JHEP05(2020)117
Similarly, the unitarity in the fist row of V 5×5 CKM can be restore by the other two elements Once again, the contribution from V ub is the dominant one. Then the Zdd, Zbb, Zdb couplings are given by 4 The FCNC is generated from (g db ) L and shall be constrained by More details are shown in the following sections.
Z boson measurements
Once the d, s, b couplings to the Z boson are modified, we find that the following observables are modified: JHEP05(2020)117 1. Total hadronic width. At tree level, the change to the decay width into dd, ss, or bb is given by With this modification, the total hadronic width is changed to The R b is the fraction of hadronic width into bb, which is given by There is a large tension in the forward-backward asymmetry of b quark production at the Z resonance between the experimental measurement and the SM prediction, The couplings of fermions to the Z boson are basically given by T 3 − Qx w in the SM. For the electron it is simply It was pointed out in ref. [17] that the interaction term g B 3 B 0 L H b 0 R from the doublet vector-like quark B L,R is able to reconcile this tension.
For the second minimal model, where g B 3 , g b 1,2 are non-zero couplings, the modifications of (g b ) L and (g b ) R can be found from eq. (2.27). If we further assume s L 15 , s L Both s R 34 and s L 35 can reduce the the forward-backward asymmetry A b FB of the quark at Z-pole. They are good to fit the measured A b FB at a lower value from the SM prediction.
JHEP05(2020)117
On the other hand, s L 35 reduces R b but s R 34 increases R b . We can use both to maintain R b at the SM value. This is achieved in the leading order by Unfortunately, we will see from the Fit-2b in section 4 that the B-meson observables are too restrictive to fulfill this relation. Subsequently, mixing angles are chosen to fit the anomaly in A b FB .
125 GeV Higgs precision measurements
The data for the Higgs signal strengths for the combined 7 + 8 TeV data from ATLAS and CMS [22] and all the most updated 13 TeV data were summarized in ref. [23]. The overall average signal strength is µ Higgs = 1.10 ± 0.05 [23], which is moderately above the SM prediction. Using a total of 64 data points, the goodness of the SM description for the Higgs data stands at χ 2 /d.o.f. = 53.81/64, which gives a goodness of fit 0.814. A reduction in the total Higgs decay width can provide a better description of the Higgs data with χ 2 /d.o.f. = 51.44/63, corresponding to a goodness of fit 0.851 [23]. The p-value of the hypothesis of the single-parameter fit (∆Γ tot ) equals 0.12 when the SM is the null hypothesis. Although it is not significantly enough to say they are different, it may still give a hint that the single-parameter fit is indeed better than the SM. In this work, the reduction in the Higgs total width is achieved by a slight reduction in the RH bottom Yukawa coupling which can be found from the matrix element (Y) 33 in eq. (2.7) and predominately from the doublet vector-like bottom quark interaction term g B 3 B 0 L H b 0 R . 6
Electro-Weak Precision Observables(EWPOs)
The Electro-Weak Precision Observables (EWPOs) can be another important indirect constraint for the mixings and masses of the VLQs. The EWPOs can be represented by a set of oblique parameters S, T and U . We apply the data from Particles Data Group (PDG) 2018 review [9] with a fixed U = 0, and the best fits of S and T parameters are ∆S = 0.02 ± 0.07, ∆T = 0.06 ± 0.06. (3.6) where ∆S and ∆T are defined as We consider the 3σ allowed regions of ∆S and ∆T parameters in our fitting. 6 Once vector-like bottom quarks are heavier than 1 TeV, their contributions to gg → h and h → γγ are tiny. We will ignore these effects in our fitting.
JHEP05(2020)117
The general form of S parameter can be represented as [21,24,25] , M q i are the quark masses, and A L,R ij , X L,R ij are defined in eqs. (2.11) and (2.14) respectively. On the other hand, the functions inside S are The contributions from t and b quarks in the SM for the S parameter can be represented as Similarly, the general form of T parameter can be represented as [21,24,26] where the functions inside T are θ + (y 1 , y 2 ) = y 1 + y 2 − 2y 1 y 2 y 1 − y 2 log y 1 y 2 (3.12) The contributions from t and b quarks in the SM for the T parameter can be represented as in s-channel. The overall expression including the SM t-W box diagram and Z boson FCNC is [27] x where U 2 std−db is from the SM contribution of top-W box diagram, and −U db ≡ V * L35 V L15 from the Z boson FCNC induced by the singlet VLQ. On the other hand, the FCNC contribution from the doublet VLQ, V * L34 V L14 , is much smaller than that from the singlet VLQ, because the pattern of the mass matrix which suppresses the left-handed mixing angle for doublet VLQ with down and bottom quarks [17]. The prefactor was obtained by substituting the numerical values: the √ B B f B = 225 ± 9 MeV [9] from lattice calculation; the QCD correction η B = 0.55 [28]; the B d lifetime τ B d = 1.520(4) ps = 2.31 × 10 12 GeV −1 and mass m B d = 5.27963(15) GeV [9]; and Fermi constant G F . The expression for SM contribution is given by [29] where y t ≡ m 2 t /m 2 W and the loop function [29] f 2 (y) ≡ 1 − 3 4 Taking the most updated experimental values of |V tb | = 1.019 ± 0.025 and |V td | = (8.1 ± 0.5) × 10 −3 [9], the SM reproduces the central value of the current experimental measurement [9] x d | exp = 0.770 ± 0.004 . (3.17) However, the theoretical uncertainty is much larger than the experimental one. For conservative limit we require the new physics contribution to be less than the SM contribution, which implies that is much weaker than the constraints from B + → π + + − and B 0 → µ + µ − in the next two subsections. In addition, due to large theoretical uncertainties we do not use this data in our global analysis. On the other hand, the mixings between the second generation quarks and new VLQs are irrelevant in this study. In order to avoid the stringent constraints from the mixing of D 0 -D 0 , K 0 -K 0 , and B 0 s -B 0 s mesons, we suppress all the interaction terms between the second generation quarks and new VLQs for simplicity. 7 The more general study can be found in ref. [32].
(3.24)
In the following chi-square fitting, we combine both the experimental error and 30% theoretical uncertainty from the SM [30] to give conservative constraints.
The
operator also contributes to the B 0 → µ + µ − through the expression [31] Br where f B = 225 MeV. In our framework, the (g db ) R = 0 from eq. (2.27) guarantees no mixing among the right-handed d and b quarks and thus C 10 defined in ref. [31] is zero.
Direct searches for the vector-like bottom quarks
The vector-like bottom quarks can be pair produced by QCD processes or singly produced via a t-channel Z boson exchange at hadron colliders. Assuming that the new vectorlike bottom quarks can only decay to SM particles, there are three possible decay modes: The searches for pair production of vectorlike bottom quarks only depend on their masses, decay patterns, and branching ratios. According to ref. [36], the ATLAS Collaboration has published their combined searches for pair production of vector-like bottom quarks with the above three decay modes. The SU(2) singlet vector-like bottom quark b is excluded for masses below 1.22 TeV, and the SU(2) doublet vector-like bottom quark B = (b −1/3 , p −4/3 ) T is excluded for masses below 1.14 TeV. Other recent searches for pair production of vector-like bottom quarks from CMS Collaboration can be found in refs. [37,38], and those constraints are similar to ref. [36].
On the other hand, the searches for single production of vector-like bottom quarks depend not only on their masses, but also on their mixing with SM down-type quarks. Recently, the ATLAS Collaboration has published their searches for single production of vector-like bottom quark with decays into a Higgs boson and a b quark, followed by H → γγ in ref. [39]. Again, this constraint is roughly the same as the above ones. Similarly, the searches for pair production and single production of vector-like quark p with electric charge −4/3 can be found in refs. [40,41]. A lower mass limit about 1.30 TeV at 95% confidence level is set on the p . In order to escape the constraints from these direct searches at the LHC, we can increase m b , m p , and m b to be above the lower bounds of the mass constraints. Therefore, we safely set their masses at 1.5 TeV in the analysis.
Fitting
Five data sets are considered in our analysis. Totally, we used 75 data points: 64 from 125 GeV Higgs signal strengths; four from CKM; three from A b FB , R EXP b , Γ had each; two from ∆S, ∆T ; and two from Br(B + → π + + − ) and Br(B 0 → µ + µ − ). They are summarized in table 1.
The SM CKM matrix is parameterized using the Wolfenstein parameters [9] quoted from the global fit [9]. The SM values of |V SM us |, |V SM us /V SM ud |, |V SM ud |, and |V ub | are listed in table 1, and the uncertainties from global fit in SM are included in our chisquare analysis. In fact, the SM does not fit well to the above datasets, as it gives a total χ 2 (SM)/d.o.f. = 88.946/75, which is translated into a goodness of fit only 0.130. Note that during the parameter scan, the unitarity condition of i=d,s,b,b ,b" |V ui | 2 = 1 is always held from our analytical parameterization. The unitary violation only happens on i=d,s,b |V ui | 2 .
According to the minimal model of additional VLQs with various options on the parameters in subsection 2.4, we perform several fittings to investigate if these models can provide better explanations for the data. Without loss of generality we fix the VLQs mass at 1.5 TeV, which is above the current VLQs mass lower bounds from ATLAS and CMS searches [36,39,[41][42][43][44].
For Fit-1, keeping g b 3 = 0 can guarantees the flavor-changing coupling (g db ) L from eq. (2.27) to be zero. Therefore the constraints from B 0 d -B = 0.789. Comparing with the SM fit Fit-1 has a p-value of 2.5 × 10 −6 against the SM null hypothesis. It is shown in both table 2 and figure 1 that the best-fit points prefer a non-zero value of g B 3 = ±1.177 and g b 1 = ±0.335 at a level more than 2.5σ and 4σ from zero, respectively. Furthermore, the bottom-quark Yukawa coupling deviates from the SM prediction by more than 2σ, and the best-fit points give C hbb = 0.98, which is about 2% smaller than the SM value. It helps to enhance the overall Higgs signal strengths. In fact, the Higgs signal-strength dataset prefers bottom Yukawa coupling 6% smaller than the SM value [17]. Since the R EXP b was quite precisely measured and consistent with the SM prediction, the deviation of the bottom-Yukawa coupling cannot exceed more than a couple of percent. From the (V L15 , V R34 ) panel of figure 1, since V L15 s L 15 ∝ g b 1 and V R34 s R 34 ∝ g B 3 , it does not show correlation between g B 3 and g b 1 . In the (V R34 , ∆S) and (V R34 , ∆T ) panels, they show that the best-fit regions are consistent with the oblique parameters from electroweak precision measurements.
In Fit-2, both couplings g b 1 and g b 3 can vary from zero. In this case, according to eq. (2.27), flavor-changing coupling (g db ) L is induced and therefore is constrained B + → π + + − and B 0 → µ + µ − (B 0 d -B 0 d mixing is not included in any of the fits.) In figure 2 for Fit-2a, which has not included these flavor-changing constraints in the global fit, it allows both couplings g b 1 and g b 3 to significantly deviate from zero. Indeed, we see that the best-fit points prefer g B 3 = ±1.651 and g b 3 = ±0.614, and (s R 34 ) 2 5(s L 35 ) 2 are correlated in (V L35 , V R34 ) panel. This is in accordance with our discussion at end of subsection 3.2, where the VLQs contributions to R b cancel among themselves, meanwhile A b F B anomaly is explained by (g b ) L . Since the VLQs contributions to R b are canceled, the bottom-Yukawa coupling now is allowed to deviate from the SM by more than 6%, and the best-fit points give C hbb = 0.96, which deviates form the SM prediction by more than 3σ. Hence, Fit-2a can further lower the minimal chi-square than Fit-1, and gives mixing, B + → π + + − and B 0 → µ + µ − , which will restrict simultaneously large non-zero values of g b 1 and g b 3 . In order to study the effects from those B physics constraints, we further include both B + → π + + − and B 0 → µ + µ − in the Fit-2b.
In figure 3 for Fit-2b, we can understand how the constraints from B + → π + + − and B 0 → µ + µ − affect the allowed parameter region. In the (g b 3 , ∆χ 2 ) panel, the coupling g b 3 is restricted to be small within 3σ, more precisely, it requires |g b 3 | ≤ 0.076. Since g b 3 is restricted close to zero, the best-fit points and the corresponding C hbb of Fit-2b overlap with Fit-1. In the same panel, we can observe there are two local minima at g b 3 ±0.6 at 4σ, which is correlated to g b 1 0 in (g b 1 , ∆χ 2 ) panel. From the (U db , ∆χ 2 ) panel, we know that the flavor constraints from B + → π + + − is more stringent than B 0 d -B 0 d mixing due to more precise theoretical uncertainty in the former. Around the minimum, we can identify the two-tine fork shape structure, and it is due to the interference between JHEP05(2020)117 VLQs and SM contributions for B + → π + + − from eq. (3.20). Finally, comparing with B + → π + + − , the B 0 → µ + µ − gives similar but weaker constraint on (g db ) L . We can also find in table 2 that both the values of Br(B + → π + + − ) and Br(B 0 → µ + µ − ) in Fit-2b are largely reduced by three orders of magnitude compared with Fit-2a. On the other hand, we observe that the value of Br(B + → π + + − ) in Fit-2b is indeed closer to the measurement from LHCb in eq. (3.24) than the SM prediction in eq. (3.23), because the central value in eq. (3.23) is more than 1σ larger than the central value in eq. (3.24). Once both theoretical and experimental uncertainties are reduced in the future with almost the same central value in Br(B + → π + + − ), it will be a smoking-gun signature for adding VLQs to the SM.
For discovery prospects of the doublet+singlet VLQs, there are some signatures which can be searched for at the LHC. The VLQs can be pair produced via QCD processes, such JHEP05(2020)117 Here, we assume the mass degeneracy of b and p from the doublet VLQ to avoid the decay mode b → p W or p → b W . Even though there is slight mass splitting between b and p of order O(10) GeV due to the mixing effect, the decay p → b W or b → p W can only give very soft leptons or jets, which are very difficult for detection at the LHC. The decay branching ratios of VLQs, for example, from the best-fit points for Fit-1 and Fit-2b from table 2 are
JHEP05(2020)117
and for Fit-2a, The above relation BR(b → Zb) BR(b → hb) comes from the equivalence theorem, in which the longitudinal mode of gauge bosons behaves like the Goldstone boson in the limit M b ,b ,p m Z,h . Therefore, one clear signature at the LHC from pair produced where X could be either h or Z. Such charged lepton pair(s) plus jets searches have been performed at the 13 TeV LHC [37,44]. Here we roughly estimate the current sensitivity on the lower mass limit of b . The event rate with at least one charged lepton pair is where = 0.0028 taking into account the branching ratios of b and Z → + − . Then using L = 36.1 fb −1 and requiring N < 2, we obtain By adopting the VLQ pair production cross section, the above upper limit translates into the lower mass limit of M b 1.1 TeV. Other decay modes of VLQs from pair production have been searched for by ATLAS and CMS Collaborations in refs. [36,38,42,43]. The lower mass limits of VLQs are around 1 TeV from these searches. Single VLQ production via the electroweak interaction, which depends on the size of mixing between VLQ and SM quark, was investigated in refs. [41,45]. We emphasize the predicted g B 3 and g b 1 values in table II all give s R 34 g B 3 v/( √ 2M 1 ) 0.14 and s L 15 0.04, that can be measured from the single VLQ productions via Zb(W u) fusion and ready to be tested in the near future. For example, the single p produced from the W b fusion has been studied by the ATLAS [41]. Assuming BR(p → W b) = 100% and varying |s R 34 | between 0.17 and 0.55, the lower mass limit of p can be set from 800 to 1800 GeV.
A distinctive signature of our proposed model from other phenomenological models is the singlet VLQ decay mode b → W − u. On the other hand, most of experimental searches at the LHC were focused on the mixing between VLQs and the third generation quarks. Hence, we stress the searches for the mixing between VLQs and the first generation quarks are also well-motivated in this work. The sizeable or dominant BR(b → W − u) can be a distinguishable feature of our scenario.
Discussion
We have advocated an extension of the SM with vector-like quarks, including a doublet and a singlet, in aim of alleviating a few experimental anomalies. An urgent one is a severe JHEP05(2020)117 unitarity violation in the first row of the CKM matrix standing at a level more than 4σ due to a recent more precise evaluation of V ud and V us . Another one is the long-lasting discrepancy in the forward-backward asymmetry A b FB in Z → bb at LEP. Furthermore, a mild excess in the overall Higgs signal strength appears at about 2σ above the standard model (SM) prediction, In this work, we have performed global fits of the model under the constraints coming from the unitarity condition of the first row of the CKM matrix, the Z-pole observables A b FB , R b and Γ had , Electro-Weak precision observables ∆S and ∆T , B-meson observables B 0 d -B 0 d mixing, B + → π + + − and B 0 → µ + µ − , and direct searches for VLQs at the LHC. We found that the extension with a VLQ doublet and a singlet can improve the fitting to the datasets, especially the improvement to the unitarity condition of the first row of the CKM matrix with two additional entries in the first row.
We offer the following comments before closing.
1. By extending the CKM matrix to 5 × 5 with the extra VLQs, the unitarity condition in the first row is fully restored.
2. Without taking into account the B-meson constraints the best-fit (see Fit-2a) can allow the bottom-Yukawa coupling to decrease by about 6%, which can then adequately explain the 2σ excess in the Higgs signal strength. At the same time, it can also account for the A b FB without upsetting R b due to a nontrivial cancellation between two contributions. However, the resulting branching ratios for B + → π + + − and B 0 → µ + µ − become exceedingly large above the experimental values.
3. However, including the B-meson constraints the allowed parameter space in g b 3 is restricted to be very small due to the presence of the FCNC in Z-b-d.
4.
Last but not least, the extra 5 physical CP phases in V 5×5 CKM matrix can be a trigger for electroweak baryogenesis. In order to generate the strong first-order electroweak phase transition, one needs to add an extra singlet complex scalar [47,48]. On the other hand, adding extra Z boson as in the ref. [31] would be possible to cancel the FCNC contributions from VLQs. Therefore, a gauge U(1) extension of our minimal model with a singlet complex scalar may simultaneously alleviate the constraints from B meson observables and explain the matter-antimatter asymmetry of the Universe. However, this extension is beyond the scope of this work and we would like to study this possibility in the future.
We first parameterize the original 3 × 3 CKM matrix in the usual form with s ij = sinθ ij and c ij = cosθ ij [46]. Then we can further parameterize the full 5 × 5 CKM matrix based on V 3×3 CKM as
JHEP05(2020)117
Notice that there is some freedom to arrange the positions of extra 5 CP phases in those matrices. We assign there is no CP phase in the rotation matrices of θ 34 and θ 35 in this study. On the other hand, since we don't involve the vector-like up-type quarks t , t inside the model, only the measurable 3 × 5 sub-matrix of V 5×5 CKM is corresponding for our study here.
Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited. | 9,119.6 | 2020-05-01T00:00:00.000 | [
"Materials Science"
] |
Wiring surface loss of a superconducting transmon qubit
Quantum processors using superconducting qubits suffer from dielectric loss leading to noise and dissipation. Qubits are usually designed as large capacitor pads connected to a non-linear Josephson junction (or SQUID) by a superconducting thin metal wiring. Here, we report on finite-element simulation and experimental results confirming that more than 50% of surface loss in transmon qubits can originate from Josephson junctions wiring and can limit qubit relaxation time. We experimentally extracted dielectric loss tangents of qubit elements and showed that dominant surface loss of wiring can occur for real qubits designs. Finally, we experimentally demonstrate up to 20% improvement in qubit quality factor by wiring design optimization.
INTRODUCTION
Quantum processors and simulators comprising tens or even hundreds superconducting qubits have recently been demonstrated [1][2][3][4][5] .Quantum gates errors hinder further size and complexity growth of superconducting circuits and quantum algorithms.On the one hand, reducing two-qubit gate errors to less than 0.1% opens a practical way to implement quantum error correction codes 6,7 .Meanwhile, quantum error correction is mathematically difficult and requires enormous qubit resources.On the other hand, with a reduced gate errors a useful quantum advantage can be achieved near term using special task quantum algorithms and error mitigation 1,8 .But superconducting quantum bits have natural internal sources of noise and decoherence limiting quantum gates fidelity.
A large part of qubits loss is due to microscopic tunneling defects, which form parasitic two-level quantum systems (TLS) 9,10 and resonantly absorb electric energy from the qubit mode dissipating it into phonons or quasiparticle bath [11][12][13][14] .It is well-known, that such defects reside in the interfaces and surface native oxides around qubit electrodes: metal-substrate (MS), substrate-air (SA), metal-air (MA) [15][16][17][18][19] .This source of qubit loss could be mitigated by reducing the amounts of lossy dielectrics (minimizing Josephson junction area 20,21 , using better materials and defect-free fabrication techniques [22][23][24] ).Another approach for loss mitigation is increasing qubits footprint [25][26][27] by minimizing an electric field in the interfaces and preventing TLS excitation due to coupling with their dipole moment.
Qubit relaxation caused by dielectric losses could be decomposed into participations from each material and qubit components: where 1 , and are the relaxation time, angular frequency and quality factor of the qubit, tan is the dielectric loss tangent of the i th material or component, and is their participation ratio defined as the fraction of electric field energy stored within inside this material or component.
One can imagine a superconducting transmon qubit 28 as a non-linear LC oscillator, where the Josephson junction or SQUID define a non-linear inductance and the superconducting metal pads define a capacitor.Josephson junction or SQUID loop electrodes have to be electrically connected to the capacitor pads.Such a connection is commonly realized as a thin metal wire, which we call leads in this paper.The design of the frequency tunable two-padded floating transmon qubit which is investigated in this study is shown in Fig. 1a.Usually, in order to improve qubit relaxation time (dilute an electromagnetic field and lower interfaces participation ratio), the gap (G, Fig. 1a) between the capacitor pads is increased.However, it requires an increased length of the Josephson junction connecting wires.Moreover, in case of a fluxtunable qubit the wiring length becomes even longer to form a SQUID loop and move it closer to the gap edge and flux-control line.Figure 1b demonstrates qubit pads and qubit wiring (leads and SQUID loop) participation ratios versus qubit gap width.One can notice, as the gap width increases the capacitor pads participation ratio decreases, but the leads with SQUID participation ratios increase.When the gap width is more than 110 µm, then the participation ratio of the leads with SQUID become dominant and further gap widening is impractical.Thus, a relaxation time of a properly designed qubit is limited by the leads and SQUID loss, if their loss tangent is comparable to the capacitor pads one.To further increase the qubit relaxation time, we optimize the leads width as illustrated in Fig. 1c.Fig. 1 Flux-tunable floating transmon qubit participation ratios.a Sketch of transmon qubit with a standard wiring and SQUID loop (black) close to control line.Capacitor pad dimensions and gap width (G) are proportional to make qubit symmetrical in both directions.b Normalized participation ratio of the pads (blue dots) and wiring (orange dots, include leads and SQUID loop) vs. gap width (G).For each gap width capacitor pad width is adjusted to make qubit charge energy equal to 220 MHz.Leads width is 2.5 µm.c Normalized participation ratios vs. leads width.Gap width (G) and pad width (W) are chosen equal to 120 µm and 204 µm, correspondingly.Curves flatten out when the participation ratio of the SQUID loop in wiring becomes dominant.
We calculated the participation ratios using a similar method as in Ref. 25 , but with modifications to be able to analyze asymmetrically located SQUID loop with long leads (see Supplementary for details).When we calculate the participation ratios of qubit components, we consider bulk superconductor and crystalline dielectric to be lossless ( < 10 −6 for silicon 29 (4) In order to mitigate surface dielectric loss in qubits, previous work are primarily focused on capacitor pads design modifications [25][26][27]31 , investigation of Josephson junction contribution with so-called bandages and fabrication improvements [32][33][34][35][36] . Depite the remarkable results achieved in these works, contribution from the Josephson junction wiring has mainly remained ignored with rare exceptions.In Ref. 25 there were extracted participation ratios for junction leads, but only for 3D cavity qubits with a single Josephson junction.Recent study 37 has analytically predicted, that a significant fraction of surface loss comes from the wiring that connects the qubit to the capacitor. In ths work, we experimentally studied the contribution of leads and SQUID to the overall qubit surface dielectric loss.We performed finite-element electromagnetic simulations of the transmon qubits with different leads geometries in order to analyze their contribution to the surface losses.Then, we experimentally measured qubits relaxation times and extracted the dielectric loss tangents of the qubit elements associated with the capacitors, leads and SQUID.We also compared two different methods for leads fabrication: etch and lift-off.Then, we demonstrated good agreement between the measured qubit quality factors and the proposed model, so it could be used to further qubit design optimization reducing surface dielectric losses. Th dielectric loss model and loss tangents extracted in this study can be applied to improve perspective superconducting qubit, e.g.fluxoniums 38 .
RESULTS AND DISCUSSION
To analyze the contribution of capacitor pads, leads and SQUID in the total qubit loss, we fabricated six tunable floating transmon qubits on the same chip, so we can assume the same loss tangents of each interface for all the qubits.The fabricated qubits have the same design, except different wiring geometry, as shown in Figure 2a.They were designed to accentuate participation in the Josephson junction leads.Therefore, we refer to the designs as "long leads", "regular leads" and "wide leads".The calculations of participation ratios were performed considering 3 nm thick dielectric interface layers MA, SA and MS with fixed dielectric constant = 10.Table 1 summarizes the parameters of the fabricated qubits.Qubit capacitor pads electrodes and ground plane were wet-etched in 120 nm Al film.The Josephson junctions were defined using electron beam lithography and shadow-mask evaporation.Then, Al bandages were deposited in order to either short the stray junctions or connect the leads with the capacitor pads.Finally, superconducting airbridges were fabricated to suppress the parasitic microwave modes.See Methods for more fabrication details.
In order to estimate the effect of fabrication process to the leads surface losses, we compared two methods of leads patterning.Three qubits on the chip have the leads fabricated together with the SQUID loop using lift-off process.The leads of the other three qubits were patterned using optical lithography and, then, wet-etched together with the capacitor pads.A scanning electron microscope (SEM) image of the sample with six floating transmon qubits is shown in Fig. 2b.All the qubits are individually coupled to /4 resonators with different frequencies ranging from 6 up to 6.45 GHz for dispersive readout.State-dependent dispersive shifts, qubit-resonator detuning and resonator widths are designed to both distinguish the readout signals and push the Purcell-limited relaxation time as high as possible (> 1 ms), so it does not affect the qubits relaxation time ( 1 ).See Methods for more experimental setup details.
In this work we spectrally resolve 1 of the frequency-tunable floating transmons.We recalculate 1 in quality factors (Q-factor) and determine their mean value and confidence interval for each qubit.The experimental pulse sequence we used to measure 1 at a single frequency is the follows: the qubit is initialized in the |1⟩ state by a microwave drive pulse, flux-tuned to the frequency of interest, where we wait for a varied delay time, and then measure the qubit state.To obtain one 1 curve, we repeat this experiment for 21 equally-spaced delays with 4000 shots each in the range 10 to 400 μs.Using this sequence, we swept the qubit frequency over a range of at least 300 MHz with 1 MHz step.It took us approximately 6 hours to measure the entire 300 MHz spectrum.Qubit's spectra, converted into quality factors, and distributions are presented in Fig. 3.We notice that qubit spectra have Lorentzian-like regions with strong relaxation (red dots in Fig. 3a and in Fig. 3b).We attributed these resonances to the modes of qubit flux control lines or microwave package cavity modes, as their shape maintained after repeated cooldowns.These peaks can be successfully suppressed with IR eccosorb-filters in flux control cables 39 .However, these filters may slightly limit 1 , that is why in this study we don't use IR filtering for the flux control cables.We excluded Lorentzian regions from the analysis, as they are not connected with surface dielectric loss.Fig. 3 Measured qubits quality factors vs. qubit frequency.Orange, blue and green dots correspond to the "long leads", "regular leads" and "wide leads", respectively.Error bars are quality factor fit errors.All the dots with big errors (> 10%) are excluded from the plot.Parasitic modes dots are colored in red and are excluded from the distribution plots.Solid horizontal lines show the median quality factor values. a Qubits with lift-off leads.Median Q-factors are QQ1 = 1.78 × 10 6 , QQ3 = 3.17 × 10 6 , QQ5 = 2.99 × 10 6 .b Qubits with etched leads.Median Q-factors are QQ6 = 1.98 × 10 6 , QQ4 = 2.76 × 10 6 , QQ2 = 3.30 × 10 6 .
We compared the qubit components loss tangents extracted with surface loss extraction (SLE) 19 process of lift-off and wet etch leads (see Table 2).Qubit quality factor, written through Eq. 1, can be represented in matrix form as a function of participation matrix [] (number of columns equal to number of qubit components, including pads, leads, SQUID; number of rows equal to number of designs) and loss tangent vector [tan] (number of columns equal to the number of qubit components): In order to determine the uncertainty of the extracted loss tangents, we perform Monte Carlo simulations.We sampled 10000 Q-factors using mean values and standard deviations taken from the experimental data.Then, taking each sampled Q-factors and calculated participation ratio in the matrix [], we found the least square solution for the loss tangents using Eq. 6.Based on the extracted loss tangents and calculated participation ratios in Eq.1, we determined the predicted Q-factors, shown in the Fig. 4a as dashed lines.Dark blue dots with solid lines (Fig. 4a) show median values of the measured Q-factors versus leads design for lift-off leads (orange dots with solid lines for etched leads).The horizontal and the vertical errors bars correspond to the standard deviation error of the measured and simulated Q respectively.One can notice quality factors improvement as Josephson junction leads participation ratio decreases.This trend shows that leads with a high participation ratio suppress significantly the qubit quality factor.Qubits with "wide" shortened leads have a higher median quality factor (QQ2 = 3.30 × 10 6 , QQ5 = 2.99 × 10 6 ) compared to qubits with narrower and longer "regular" leads (QQ3 = 3.17 × 10 6 , QQ4 = 2.76 × 10 6 ), the worst case is "long" lead (QQ1 = 1.78 × 10 6 , QQ6 = 1.98 × 10 6 ).The only exception in this experiment is the qubit (Q3) with lift-off "regular" leads, which median quality factor become a slightly higher than for qubit (Q5) with "wide" leads.We assume this as a statistical error, as the qubit (Q3) has the worst scatter in 1 , quality factor measurements (probably, due to dynamics of strongly coupled TLS 40,41 ).Moreover, the predicted quality factor curves for lift-off and etched leads have the same shape and correspond well to measured curve for etched leads, which may also indicate a statistical error for qubit (Q3) measurements.We attribute the uncertainty of the extracted loss tangents to several sources.First, the scatter in qubit's quality factor measurement statistics results in wider range of possible linear equations solutions.Second, although we strived to maximize the participation ratio of Josephson junction leads, we were not able to make a design with the participation of the specific component totally prevailing over the other qubit components.The maximum participation ratio of Josephson junction leads relative to the total ratio of all the qubit components in our set of designs was 0.364.As we expected, the loss tangents of capacitive pads and SQUIDs for qubits with lift-off and wet etched leads are almost the same, since they were fabricated on the same chip.The loss tangent of etched leads is smaller than lift-off one.We attribute this to a degraded MS interface due to e-beam resist residuals 42 and a worse MA interface, which was additionally passivated after shadow deposition (a thicker amorphous oxide can occur).We also note that the cross-section of shadow-evaporated structures is quite complex in practice.There are both exposed oxidized areas of bottom electrode and areas of top electrode metal covering the bottom one.This feature distorts the cross-section of lift-off wiring and SQUID, making them different from a rectangle, which introduces additional MA interfaces.As it is quite complicated, we do not take it into account in our model for participation ratios simulation, that could affect the results accuracy.One can see, that loss tangent of the SQUID loops is lower than the leads for both fabrication routes.The SQUID loops have much smaller footprint area, but still non-negligible TLS defect (see our TLS spectrum analysis below).Surface losses mainly arise from the interaction of qubits with a bath of incoherent TLS defects.The relaxation rate depends on the number of defects coupled to the qubit mode and its coupling strength.Here, we show that elements with even a relatively small footprint and, therefore, a small number of coupled TLS defects can significantly affect the quality factor of the qubit and have loss tangents close to those we obtained in this work.We simulate a qubit coupled with i TLS defects with the strength , having relaxation rate Г 1, .We assume Markovian decoherence and calculate the qubit relaxation rate Г 1 27,40 where ∆ is detuning.The model assumes the limit Г 1, > , where Г 1, is the defect relaxation rate.We simulate the distribution of electromagnetic field throughout the qubit using the same method as for calculating the participation ratios, applying the root mean square voltage to the qubit capacitive pads.We then sample random TLS defects from the dataset of Gaussian distributed dipole moments with mean dipole moment = 2.6 and standard deviation = 1.6 in accordance with the recent study 43 .We assign the decoherence rate to the defects, measured in Ref. 14,40,[44][45][46] , and a random detuning in the considered range of 300 MHz.We then place the sampled TLSs in the interface at a random position on the qubit to determine their coupling strength ( = ), and then using Eq.7 calculate the spectrum of the qubit relaxation rate.To determine the amount of TLS, we took the previously obtained TLS density in amorphous alumina 35 ~ 1800 ( 3 • ) −1 and set the effective thicknesses of the MA, MS, SA interfaces to 2, 0.3 and 0.36 nm, respectively.We assumed the MA interface thickness to be approximately equal to the native Al oxide interface thickness.The density of defects in the materials of the MS and SA interfaces is unknown, therefore the thicknesses of the MS and SA interfaces are set in accordance with the proportions of the interfaces loss tangents, experimentally extracted in Ref. 19 (tan = 3.9 × 10 −3 , tan = 7.1 × 10 −4 , tan = 5.9 × 10 −4 ), since the loss tangent scales with the defect density 47 tan = 2 (3) −1 .Note that the model according to Eq. 7 is used for strongly coupled defects, so we use it for the outer perimeter of capacitive pads and the ground plane, leads and SQUID, where the electric field is strong enough.For the inner areas of the pads with a weak electric field, we enter the background relaxation rate level, calculated using the Eq. 1 with known loss tangents 29 .After simulating the relaxation rate spectra of qubits with three lead designs ("long leads", "regular leads", "wide leads"), we apply the SLE process as in the experiment and extract the loss tangents of the qubit elements: tan = (8.4± 4.4) × 10 −4 , tan = (6.8± 4.8) × 10 −4 , tan = (3.2± 2.8) × 10 −4 .
The loss tangents obtained from TLS simulation are similar to the experimentally extracted ones.The proportions between the loss tangents of the elements also correspond to experimental data.A comparison of the proposed model and experimental data indicates that even a small fraction of TLS's located within the wiring interfaces of about 18% (total number of defects in the leads ~ 4300/ and in the SQUID loop ~ 950/) have a significant contribution to total qubit dielectric loss and limit qubit relaxation time.See Supplementary for simulated qubit quality factor spectra.
Summary
We have studied the wiring surface dielectric loss of superconducting transmon qubits and have found out that relaxation time of qubits with a large footprint is limited by dielectric loss in the leads.We considered three different transmon geometries and experimentally extracted the loss tangents of the capacitive pads, leads and SQUIDs using the SLE process.It is demonstrated, that for a commonly used tunable floating transmon qubit design, which we called "regular" in the work, leads and SQUIDs contribute about 50% of the total dielectric loss.We do not include variation of the capacitor pads participation ratios, which could improve the accuracy of extracted loss tangents, meanwhile, we confirmed, that leads can be a limiting factor for qubit relaxation times.
Wiring leads are often fabricated together with Josephson junctions using lift-off process, which introduce additional surface dielectric loss.We experimentally extracted the loss tangents for wet etched ( = (7.9± 3.8) × 10 −4 ) and lift-off ( = (9.2± 4.5) × 10 −4 ) leads.In order to minimize internal qubit surface loss and improve relaxation time, one should fabricate wiring leads together with capacitor pads using etching process.A further leads loss tangent reduction may be achieved by reducing their participation ratio with a substrate trenching 19,29,31,48 .
We simulated qubit spectra by randomly sampling TLS in the qubit interfaces.The simulation shows that only a small fraction of coupled defects is located within the wiring interfaces (~ 18%), however, their coupling strength is much higher due to the stronger electric fields, which results in non-negligible loss of the wiring elements.
Finally, we demonstrated that electric field dilution by increasing the wiring width improves qubit performance up to 20% for the considered qubit designs.Further optimization of leads design by tapering 37 , for example, may increase a relaxation time even more.In addition, one should pay attention to SQUID design, as it has a high participation ratio and loss tangent.
Sample fabrication
The sample is fabricated on 525 thick high-resistivity silicon.Firstly, the substrate is cleaned with a Piranha solution at 80 ℃, followed by dipping in 2% hydrofluoric bath.Then 120 nm aluminum film is deposited using e-beam evaporation in an ultrahigh vacuum deposition system.After that, 600- thick positive photoresist is spin coated.Then ground plane, resonators and qubit capacitors are defined using a laser direct-writing lithography system.Josephson junction wiring of the qubits of the «wet etched» group are also patterned in this step.Patterned features are then wet etched using commercial Al etchant solution.The photoresist is stripped in N-methyl2-pyrrolidone at 80 ℃ for 3 hours and rinsed in IPA (isopropyl alcohol).
Josephson junctions, SQUID loops and wiring (for the «lift-off» group of the qubits) are then defined using Niemer-Dolan method.The process is described in details in Ref. 49,50 .The substrate is spin-coated with a resist bilayer composed of 500 nm MMA (methyl methacrylate) and 300 nm PMMA (poly methyl methacrylate).The development is performed in a bath of MIBK/IPA 1:3 solution followed by rinsing in IPA.Josephson junctions and wiring are patterned using an electron beam lithography system and then electrodes are shadow-evaporated in an ultra-high vacuum deposition system.First evaporated Al junction electrode is 25-nm thick and the second is 45-nm.Then aluminum bandages are defined and evaporated using the same process as for the junctions with an in-situ Ar ion milling.Lift-off is performed in a bath of N-methyl2-pyrrolidone with sonication at 80 ℃ for 3 h and rinsed in a bath of IPA with sonication.
Finally, aluminum free-standing crossovers are fabricated for the suppression of parasitic modes, using a common fabrication process.3 photoresist is spincoated and then the sacrificial layer is patterned using a direct laser writing system.A 300 nm of Al is then evaporated with an in-situ Ar ion milling to remove the native oxide.The second layer of 3 photoresist is used as a protective mask and the excess metal is wet etched.A damaged layer of photoresist is then removed in oxygen plasma and both layers of photoresist are stripped of N-methyl2-pyrrolidone at 80 ℃.
Fig. 2b.Etch and lift-off are methods of leads fabrication.Participation ratios of the elements are multiplied by 10 4 . , and are the experimentally measured qubit "sweet spot" frequency and anharmonicity.
Fig. 2
Fig. 2 Tunable floating transmon qubit designs and fabricated sample.a Designs of three transmon qubits with "long leads", "regular leads" and "wide leads".Dimensions of square SQUID loop and capacitor pads are fixed for all three designs.Gaps are shown in grey and capacitor pads with the surrounding ground plane in white.b False-colored SEM image of the sample featuring six qubits coupled to readout resonators.Qubits are initialized and readout is performed via the single feedline.The qubits with etched Josephson junction leads are colored in orange and with lift-off leads in blue.
Fig. 3
Fig. 3 Measured and predicted qubits quality factors.a Qubits quality factors and normalized participation ratio as a function of leads geometry.Each data point represents median quality factor obtained from measured qubit 1 spectra (solid lines) or sampled quality factors (dashed lines).Dark blue and dark orange points represent quality factors for measured qubits with lift-off and etched leads, respectively.Light blue and light orange points represent predicted quality factors for lift-off and etched leads, respectively.Plotted normalized participation ratio of leads (solid green line) shows that leads have non-negligible effect.b Predicted quality factors compared to measured quality factors of qubits with etched (dark gray) and lift-off (purple) leads.The blue line represents perfect agreement between the measured and predicted quality factors.All the error bars on the plots correspond to 68% confidence interval. | 5,441.8 | 2023-11-28T00:00:00.000 | [
"Physics"
] |
Sulfonolipids as novel metabolite markers of Alistipes and Odoribacter affected by high-fat diets
The gut microbiota generates a huge pool of unknown metabolites, and their identification and characterization is a key challenge in metabolomics. However, there are still gaps on the studies of gut microbiota and their chemical structures. In this investigation, an unusual class of bacterial sulfonolipids (SLs) is detected in mouse cecum, which was originally found in environmental microbes. We have performed a detailed molecular level characterization of this class of lipids by combining high-resolution mass spectrometry and liquid chromatography analysis. Eighteen SLs that differ in their capnoid and fatty acid chain compositions were identified. The SL called “sulfobacin B” was isolated, characterized, and was significantly increased in mice fed with high-fat diets. To reveal bacterial producers of SLs, metagenome analysis was acquired and only two bacterial genera, i.e., Alistipes and Odoribacter, were revealed to be responsible for their production. This knowledge enables explaining a part of the molecular complexity introduced by microbes to the mammalian gastrointestinal tract and can be used as chemotaxonomic evidence in gut microbiota.
High pressure liquid chromatography-based separation and fractionation
Fractionation experiments were performed on an Agilent 1290 Infinity LC system using an Acquity Xbridge™ column (5µm, 4.6 x 250 mm, Waters, Germany). A gradient of water/acetonitrile (A, 5 millimolar ammonium acetate/0.1% acetic acid in water; B, acetonitrile) was used for the fractionation experiments. The gradient used was 65% (B) for 8.40 min, and was increased to 99% (B) within 30 min and then held for 2.40 min. Recondition was done for 5 min with a pre-runtime of 8 min to 65% (B). The flow rate, the column temperature and the injection volume were 1 mL/min, 40°C and 100µL, respectively. Sample manager was cooled to +4°C. The fractions were collected every minute with addition of trimethylsilyl-tetradeuteropropionic acid (TSP), as a reference standard. One dimensional proton ( 1 H)-NMR spectra were acquired on a Bruker 800 MHz spectrometer (Bruker Biospin, Rheinstetten, Germany) operating at 800.35 MHz with a quadruple inverse cryoprobe at 300 K. A standard 1D pulse sequence [recycle delay (RD)-90°-t1-90°-tm-90°-acquire free induction decay (FID)] was acquired, with water suppression irradiation during RD of 2 s, mixing time (tm) set on 100 ms, and a 90° pulse set to 10.13 μs, collecting 800 scans into 64,000 data points with a spectral width of 12 ppm. In addition, a 2D total correlation spectroscopy (TOCSY) analysis was performed, using a 1H-1H phase-sensitive sensitivity-improved 2D pulse sequence with water suppression by gradient tailored excitation (3-9-19) and DIPSI-2. 19,228 × 1,024 data points were collected using 32 scans per increment, an acquisition time of 1 s, and 16 dummy scans. Spectral widths were set to 12 ppm in the F2 and F1 dimensions. Processing of spectrum was performed using TopSpin 3.2 (Bruker BioSpin).
FIDs were multiplied by an exponential decaying function corresponding to a line broadening of 0.3 Hz (F1) and 2.5 Hz (F2) before Fourier transformation, manual phasing, baseline correction and calibration to TSP (δ 0.00) was also performed in TopSpin. Chemical shifts, multiplicity and Jcoupling constants were compared to Kamiyama et al. 1 and predicted spectra using ACD/NMR prediction software (ACD/Labs, Toronto, ON, Canada).
Statistical analysis
SIMCA-P version 9.0 (Umetrics, Umea, Sweden) was used for the principal component analysis
Metagenomics
Metagenomic studies were made only from C57BL/6NTac mouse group fed a safflower enriched high fat diet. In total 10 metagenomes were prepared: from 6 mice cecal samples were used (6 biological replicates), 4 of them could be prepared as duplicates (4 technical replicates). Genomic DNA was extracted from cecal luminal content (30mg) using an extraction kit NucleoSpin96 for Soil according to protocol. DNA was quantified using Quant-iT™ PicoGreen® dsDNA Kit. Sequencing was done by applying a whole-genome sequencing approach on the GS-FLX+ Titanium™ sequencing platform from Roche (Roche Diagnostics GmbH, Mannheim, Germany). DNA libraries were prepared for each metagenome on 1 µg of sample DNA following manufacturer's instructions. After nebulisation DNA fragments were processed by end repair, adapter ligation and size selection. Products were purified and quantified. Quality assessment of libraries on an Agilent Bioanalyzer High Sensitivity DNA Chip (Agilent Technologies, Santa Clara, USA) determined fragment lengths of sequence libraries of around 1400 bp, which were taken for further sequencing. By titration mainly a 6-12 DNA-copies per bead ratio was determined. After emulsion PCR and subsequent bead recovery enrichments of 790 000 DNA-beads were pooled per sample and loaded onto each quarter of a PicoTiter-Plate. Sequencing of long fragments was applied by selecting a 200 cycles sequencing run. Metagenome sequence data are available on Sequence Read Archive (SRA) under BioProject ID PRJNA299870. For quality control, prinseq-lite was used: Three bases were trimmed from the 5' end, bases with a quality score <20 in a window of 3 bases were trimmed from the 3' end, and sequences with a mean quality <20 were discarded 2 . Minimum length of all sequences was restricted to 150 bp, maximum length to 500 bp.
Contaminating mouse DNA sequences were detected by a sequence similarity search using BLASTN (NCBI BLAST 2.2.26+ max. e-value 0.1, DUST filter off) of all sequences against mouse reference genome on NCBI (build GRCm38.p1) 3,4 . An alignment length of ≥80% of the query sequence and evalues ≤10 -4 were used as cutoff criteria; sequences matching these criteria were considered presumptive mouse sequences and removed. To determine taxonomic origin of sequences and associated gene functions, a sequence similarity search was performed using BLASTX (NCBI BLAST 2.2.26 with the -w 15 parameter set, allowing for frameshifts in alignments, max. e-value 10) against the NCBI non-redundant (NR) database (downloaded 07/19/2013) 3 . Output was imported into MEGAN 5 (version 5.7.1), using parameters min. bitscore 50, max. e-value ≤10 -2 . Functional gene annotation was performed in MEGAN using KEGG classification of reads 5 . Based on RefSeq-IDs mapping to KEGG orthology (KO) groups, each read was mapped to a gene with KO identification.
Taxonomical assignment of genes involved in sphingolipid metabolism was determined by MEGAN 6 . also Table S1). A: Typical chromatogram of SL2 (fraction 10) and SL3 (fraction 11) as isolated from OSP pellet, using a combination of UHPLC coupled to ion trap mass spectrometer. Insert structures concern SL2 (B) and SL3 (C) compounds as characterised by NMR spectroscopy (see Figs. S7-9 and Tables S5-6). Tables Table S1. Overview of putative SLs, detected in cecal samples by means of FT-ICR-MS analysis, including experimental and theoretical mass signal values, molecular formulas, mean intensities, and database (ChemSpider) annotation.
Class
Nr Table S2.
Summary of all eighteen SLs with their measured retention time (RT), theoretical mass signal values and molecular formulas of parent and fragment ions and applied collision energies in eV. MS/MS were performed in negative electrospray ionization mode. This table also represents information about major parent-fragment ions that were used for all MS/MS experiments that are highlighted in Figure 3. Figure S7). H NMR data for SL3 (fraction 11) in DMSO-d 6 at 800-MHz (see also Figure S8).
SL RT in min
Position δH in ppm (multiplicities, coupling constants in Hz), measured δH in ppm (multiplicities), predicted in ACD/Labs Table S7.
Arithmetic mean for analyzed SL1-SL9 (normalized peak areas (weight of wet cecal content)) in GF, SPF and Alistipes mice. | 1,684.2 | 2017-09-08T00:00:00.000 | [
"Biology"
] |
The Regulatory Network of Pseudomonas aeruginosa
Background Pseudomonas aeruginosa is an important bacterial model due to its metabolic and pathogenic abilities, which allow it to interact and colonize a wide range of hosts, including plants and animals. In this work we compile and analyze the structure and organization of an experimentally supported regulatory network in this bacterium. Results The regulatory network consists of 690 genes and 1020 regulatory interactions between their products (12% of total genes: 54% sigma and 16% of transcription factors). This complex interplay makes the third largest regulatory network of those reported in bacteria. The entire network is enriched for activating interactions and, peculiarly, self-activation seems to occur more prominent for transcription factors (TFs), which contrasts with other biological networks where self-repression is dominant. The network contains a giant component of 650 genes organized into 11 hierarchies, encompassing important biological processes, such as, biofilms formation, production of exopolysaccharide alginate and several virulence factors, and of the so-called quorum sensing regulons. Conclusions The study of gene regulation in P. aeruginosa is biased towards pathogenesis and virulence processes, all of which are interconnected. The network shows power-law distribution -input degree -, and we identified the top ten global regulators, six two-element cycles, the longest paths have ten steps, six biological modules and the main motifs containing three and four elements. We think this work can provide insights for the design of further studies to cover the many gaps in knowledge of this important bacterial model, and for the design of systems strategies to combat this bacterium.
Background
Pseudomonas aeruginosa is a metabolically versatile Gramnegative bacterium, able to express a wide variety of virulence factors. These allow P. aeruginosa to grow in soil and marine habitats, as well as on plant and animal tissues. It is also a significant source of bacteraemia in burn victims, urinary-tract infections, hospital-acquired pneumonia and predominant cause of morbidity and mortality in cystic fibrosis patients [1]. All these makes P. aeruginosa the most studied bacterial model regarding the control of pathogenic determinants and the third bacterial model more studied with respect to their molecular biology -after Escherichia coli and Bacillus subtilis-. The genome sequence of P. aeruginosa strain PAO1 was reported in 2000 [1], and since then numerous databases and genomic resources have been implemented to study their molecular and pathogenic biology [2][3][4].
It is well know the importance of gene regulation on the organisms' performance as this process defines their metabolic, adaptive and pathogenic capabilities. In this work, we report a collection of known regulatory network interactions connecting transcription factors (TFs), sigma factors (σ), and anti-sigma factors to their target genes in P. aeruginosa. This transcriptional regulatory network (TRN) constitutes the third largest one of any bacteria reported to date. We proceed to analyze the main topological properties of this network and the main functional interactions among their regulatory components. We hope these results will provide insights and guide future studies to increase our knowledge on this important bacterium.
Results and discussion
The transcriptional regulatory network (TRN) of Pseudomonas aeruginosa With the aim of summarizing all the documented action of the regulatory machinery over the genes encoded in the genome of P. aeruginosa, the available published data was searched using a combined strategy: 1) regulatory interactions from dedicated biological databases were extracted [2][3][4] and, 2) searches in the original literature were performed (see Figure 1 for the general strategy). Both sources of information were verified by analyzing the corresponding papers. Methods frequently used to study transcriptional regulation included microarray analyses and their validation, promoter activity through transcriptional fusions, RT-PCR, EMSA assays and DNA-foot printing [Additional file 1 which contains a complete description of the network interactions along with their experimental evidences and references]. As of May 2010 our curated network consisted of 1020 regulatory interactions among 690 genes products, including 76 transcription factors, 14 sigma factors (nine of these defined with extra-cytoplasmic functions -ECF-), 7 antisigma factors, and 593 target genes ( Figure 2, a poster version of this figure is available as additional file 2). Given the 5,570 predicted protein coding genes of P. aeruginosa PAO1 (the strain on which most of the network reconstruction is based) our network represents roughly 12% of these genes. On the other hand, the regulatory machinery predicted in this bacterium comprises around 500 proteins: 26 sigma factors (one σ 54 , eight σ 70 , and 17 of ECF families), the rest corresponding to transcription factors distributed in at least 44 families [5]. This network then, represents roughly 54% of sigma factors and 16% of all the TFs encoded by this bacterium. In the following sections we will report the structural and the functional properties of this network.
Topological description of the TRN in P. aeruginosa Degree distribution In network and graph theory, the degree (k) of a node (gene) is defined as the number of interactions that it has with other nodes. Here we determined the gene's mean degree as the arithmetic average of all the nodedegrees (k) in each network [Additional file 1]. This result implies that each gene in the TRN of P. aeruginosa is connected, on average, with 3 other genes. In directed networks, as in the case of regulatory networks, we can define input (kin) and output degree (kout) as the number of arrows that enter and leave genes respectively, which corresponds to the number of TFs that regulate a certain gene, and the number of genes that a TF is regulating. The degree distribution gives the probability P(k), of finding a node with degree k [6]. This measure quantifies the diversity of gene-degrees in a network and allows determining which theoretical network is more similar to the type of network we are working with (i.e. classical random networks, scale free, small-world, etc. [7,8]). Some authors claim that biological networks present a well-known distribution closer to a power law P(k) = Ak −γ which indicates that a few genes are highly connected (they are called hubs), while most of them have low connectivity [9]. The constant A ensures that the P(k) values are normalized towards 1, and γ is a parameter that provides information about the network structure; for networks with γ>3 a lot of the properties for scale-free networks are not present, which are present for 2<γ<3, where there is a hierarchy in the degree of nodes, from the most to less connected ones. However, for γ= 2, the highest degree node influences a large fraction of all nodes [9]. In the case of the TRN of P. aeruginosa, we find A = 0.8856 and, 2<γ<3 for the input degree distribution ( Figure 3A), but without a good trend of this type for the output and overall degree distributions. Because of this, we show instead their corresponding cumulative distributions P(kout ≤ Kout), P(k ≤K), (Figures 3B and 3C respectively). Overalls the fact that a few genes are highly connected remains valid.
Clustering coefficient
The clustering coefficient C, is a measure that indicates the probability that two genes with a common neighbor in a graph are also interconnected; that is to say, the clustering coefficient quantifies what so much the local neighborhood of a gene is as member of a group of genes. It is common for networks to exhibit a decreasing value of C(k) with respect to the degree k, such that in small groups or modules of genes the elements are well Figure 1 Strategy to compile the network of P. Aeruginosa. General strategy for gathering information about transcriptional regulation used to construct the TRN of P. aeruginosa connected, but as the group increases in size the elements are progressively less connected. The regulatory network of P. aeruginosa shares this general clustering property in ( Figure 3D).
Connectivity
Connectivity in a network refers to the associations between every pair of genes.
Connections can be via a direct link or indirectly through a series of intermediate interactions. Connected components are defined for undirected networks, and give us information about how much are connected the elements in a network and their modular structure. Sometimes it is necessary to consider the network as undirected, since it allows us to capture different types of information to perform a better analysis. In the case of the TRN in P. aeruginosa there are 12 connected components, with one giant component containing 650 genes, while the rest contain at most six genes. Each connected component in the TRN possesses at least one TF or σ. A skeleton of 65 TFs and 13 σ maintain cohesive this giant component (the 12% of its components). We consider that a connected component is composed by n nodes, and calculate the relative frequency P(n) for every possible n, which give us the distribution of the number of nodes in a connected component ( Figure 3E), [see also Additional file 1].
Functional organization of the regulatory network
In order to discern the functional organization of a regulatory network we can study the following aspects of the TRN: i) the regulatory mode and connectivity of each component of the transcriptional machinery and, ii) the manner in which endogenous and exogenous information, relevant for transcriptional regulation, enter and pass through the regulatory machinery until conclude on promoters of target genes. All these computes should be associated with the biological functions of the respective genes. In this sense, some interesting findings in the TRN of P. aeruginosa are discussed below.
Activation is the dominating activity in the TRN
Analysis of the mode of regulation (excluding interactions by σ and anti-σ) showed, that activation is by far the dominating regulatory activity in P. aeruginosa (Table 1). For comparison, the E. coli network also shows a higher tendency for activation instead of repression although in P. aeruginosa this difference is more Figure 2 The TRN of P. Aeruginosa. Different types of genes are represented by nodes with different colors; TF (yellow), σ (red), ECF (orange) anti-σ factors (brown), and non-regulatory genes (blue). Arrows represent the modes of regulation; transcriptional activation (green), transcriptional repression (red), transcriptional dual regulation (blue), and undefined (gray); transcription by σ (orange) and σ control by anti-σ (black). The network was drawn using the Cytoscape software [28]. For a better clarity, a poster version of this figure is available as additional file 2.
pronounced. This dominant mode of regulation is also evident in the sub-network consisting of TFs and σ factors in P. aeruginosa (Figure 4), and the same was also observed in E. coli [10]. Positive regulation in these networks might explain why once a biological process is triggered it can run from the beginning to the end of defined regulatory pathways, perhaps giving place to conditioned memory as has been observed in E. coli and Saccharomyces cerevisae [11,12].
Most of the TFs are positively auto-regulated
The mode in which regulatory genes are auto-regulated is important for network dynamics. It is known that auto-repressions are important controllers to keep homeostatic levels of biological functions, while autoactivation is a condition to reach multiple steady states and differentiation [13]. Operatively, the TFs are important points of genetic control since they are the master switches whose regulatory activity extends over many genes. Consequently, it is not surprising to realize that nature has developed self-regulation on these genes as a quick and effective mode of control over a wide range of physiological processes. In contrast to what happens in E. coli and B. subtilis, auto-regulation is mostly positive for the TFs of P. aeruginosa (Table 2 and Figure 4). Given that negative self-regulation maintains homeostasis, is normal to observe this mode as dominant in regulatory networks as in E. coli [10,14], and B. subtilis The number and sign of interactions and other topological measures for the whole network and the sub-network, which includes only the regulatory machinery ofP. aeruginosa, are shown ( Figure 4). Figure 4 Transcriptional machinery sub-network in P. aeruginosa (TF and sigma factors). Nodes and arrows are the same as in Figure 2 with the exception that it lacks of the non-regulatory genes. The network is presented in a hierarchical structure, in order to better appreciate the hierarchical organization among TFs and σ. and flagella formation in E. coli where self-activation is enriched [10]. This observation is in agreement with the postulation that auto-activation is normally found at the core of differentiation and development processes. Dynamic analysis shows that auto-activation causes a slow and delayed response compared to auto-repression and simple regulation [15]. It is supposed that this delayed response gives time enough for regulation pass through different check points, before the differentiation processes goes ahead. Given the bias of knowledge toward the study of only a minor part of the network in P. aeruginosa (virulence and pathogenic processes), it is premature to conclude whether that distinction should be a property of the whole network, or if, it mainly represents an evolutionary design for the execution of pathogenesis and virulence actions by this bacterium. Using the database of Reciprocal Best Hits (RBH) orthologs genes in bacteria [16], we found orthologs for seven regulatory genes from P. aeruginosa in E. coli, in both organisms with experimental evidence about their mode of self-regulation [Additional file 1]. From these, the three negatively auto-regulated genes in P. aeruginosa conserve this mode of self-regulation in E. coli. For the remaining four positively auto-regulated in P. aeruginosa, one conserved their positive mode of auto-regulation in E. coli and the rest three changes; two are dual and one self-regulate negatively in E. coli. Although this information is scarce, it might be possible that positive auto-regulation in TFs might be effectively selected in P. aeruginosa.
Path lengths
A path in a TRN refers to a chain of regulatory interactions between the genes constituting it. The longest path in the TRN of P. aeruginosa consists of 11 steps. This is almost the same size than in their E. coli counterpart where the longest paths include 14 steps. The size of paths in P. aeruginosa is interesting if we consider that the TRN is far from complete, therefore it is reasonable to expect that longer paths can be found as the network will be further characterized. The longer regulatory path in P. aeruginosa goes along biological processes that include alginate biosynthesis, iron metabolism and pioverdine synthesis, implying that these processes could be physiologically connected (Figures 2 and 5). The most frequent path-size (1102 times) found in the P. aeruginosa TRN consists of five steps [Additional file 1].
Short paths are common in metabolic and signal transduction networks, since this arrangement ensures fast and efficient response to changes in food use and to environmental perturbations [6]. The longest paths in the TRN of E. coli include regulatory process for biofilm formation and flagella assembly [10], both of these are considered development processes and are also amongst the longest paths in P. aeruginosa.
Cycles
Biological systems frequently contain positive or negative feedback loops. Multi-element biological cycles (with two or more components) can also be positive or negative (depending on the product of the signs of the constitutive interactions).The existence of positive cycles is a necessary condition to have multiple steady states or attractors. Negative cycles are important to keep homeostasis, since they maintain the system functioning through periodic orbits. Besides self-regulation, multi-element circuits in a TRN can be defined as selfenclosed paths. Until now, excluding self-regulation, there are only seven cycles with two regulatory factors in P. aeruginosa ( Figure 6A); six negatives (algU-muc A and algU-mucB, for the control of alginate synthesis; exsA-exsD, for the control of secretion systems; flgM-fliA, for motility control; pprB-vqs R which enhance The highest values of G coefficient obtained from the TFs of whole network are shown. Figure 5 Functional modules. Determination of biological modules more represented in the regulatory machinery of P. aeruginosa. A) Using a short path metric distance 1/D 2 where D is the distance between two nodes. White color represents interactions with D = 1, purple color shows the interactions with D = 2, interactions with D = 3 are displayed in blue, with diminishing intensity as distance increases until a light blue color, representing interactions that do not exist. B) The major clusters of biological functions were also identified by analyzing the scientific literature: quorum sensing (pink), alginate biosynthesis (orange), iron metabolic (violet), nitrogen metabolic (cyan), motility (yellow), antibiotics resistance (red), expression of virulence factors (sky blue), biofilm formation (purple) and amino acid metabolism (gray). For the sake of comparison between both approaches, the same colors were used for the regulatory machinery in both figures. exotoxin A production and; ptxR-ptx S that control protease and pyocyanine synthesis) and one positive (anr-dnr, for the control of aerobic/anaerobic respiration), and no cycles with more TF-elements are present in the TRN. The dynamics of positive and negative two-node cycles has been widely studied. They represent important core components for network dynamics acting as robust switches to respond to signals from environmental conditions [17].
Motifs
A motif in a TRN is a topological structure that is more frequented than expected [18]. The most represented motifs in the P. aeruginosa network are those formed by three and four genes (Figure 6), [Additional file 1]. Previous research suggests that motifs represent elements for optimal network design given their relationship with the network dynamics and structural stability. The prevalence of certain types of motifs has been considered a product of the evolution acting on the organization of biological networks [19][20][21]. In particular, motifs such as feed-forward loops (FFL; networks with three vertex, composed of two input transcription factors, one of which regulates the other, both jointly regulating a target gene) have a higher abundance in TRN than expected from random networks with the same number of nodes and arrows [18,22]. The dynamic behavior of feed-forward loops has been extensively analyzed [23]; these studies revealed that FFL have two main functions: a) to speed up the response time of the target gene (incoherent FFL, when the signs of the direct and indirect regulation are opposites) and, b) to act as sign-sensitive delays for one of the two TFs (coherent FFL, with the same sign for both the direct and indirect regulation). Considering all the biological process where they participate, FFL are also implicated in pulse generation and cooperativity. In P. aeruginosa the most common motifs are those of three nodes known as coherent feedforward loops [23], where the sign of the interactions is the same, positive in this case ( Figure 6B). This type of motif is present 89 times in the P. aeruginosa regulatory network. Additionally, we found that the most common motifs of four nodes, which occurred 3832 in the network, are those known as bi-fan, where two TFs each positively co-regulate to two target genes ( Figure 6C). This motif is also frequent in other organisms such as S. cerevisiae and E. coli [24].
Hierarchical organization of the TRN of P. aeruginosa
A hierarchical organization is given by a directed informational flux, beginning from the most influencing regulators. In this way, the TFs constitute the skeleton and the non-regulatory genes are the leaves in a hierarchical network (Figure 2 andAdditional file 1). The first level is populated by 33 TFs and 2 sigma factors. The origons [25], which are the points of informational inputs into the network, are set at this level. The second level is the most populated level but includes a high proportion of non-regulatory genes. Most of σ are set at higher levels, except for those involved in iron metabolism, which, as also observed for E. coli, are in the lowest levels as dedicated sigma factors for specific functions.
Most influencing regulators in the TRN of P. aeruginosa
The most influencing regulators in a regulatory network are called "global regulators" and are defined by a series of operative properties, including: i) they should regulate a large number of genes; ii) should regulate other sigmas and regulators; iii) should co-regulate together with many TFs and, iv) their target genes should have promoters using more than one kind of σ [26]. All these properties were computed for regulators found in P. aeruginosa (see Methods section) and the top ten are shown in Table 2. A coefficient G was introduced here, which indicates if a regulator is more or less global taking into account the regulatory criteria mentioned above. The most influencing regulators in Pseudomonas have a lower qualification than the corresponding seven global TFs in E. coli. This might be due to the limited knowledge of the transcriptional regulation in P. aeruginosa compared to E. coli.
Biological processes in P. aeruginosa TRN
Defining functional modules in a formal computational way is a difficult task. However, it has been shown that employing a simple metric of shorter distances among TFs in the E. coli network, it might be possible to recover modules with a good approximation to those that are manually defined, on the basis of the knowledge of biological functions of their products [27]. In this work we used this metric for the TFs and σ -anti-σ sub-network of P. aeruginosa (Figure 4), and get the following biological modules: alginate biosynthesis, quorum sensing, iron capture and metabolism, production of virulence factors, antibiotic resistance and motility ( Figure 5A). This finding was corroborated by manual inspection of TFs participating in the same biological processes ( Figure 5B). It is clear that the processes that are more thoroughly studied in P. aeruginosa correspond to those related to pathogenesis and virulence properties while little attention has been given to biological processes, such as, central metabolism, membrane biogenesis, cell-division, etc. Most of the best-studied biological processes are connected, beginning from alginate biosynthesis to quorum sensing, and from there to those involved in the production of virulence factors. Additionally, there is a directed regulatory connection from alginate biosynthesis to iron metabolism and to some mechanisms of antibiotic resistance ( Figure 5B). Since these processes act cooperatively during infection and pathogenesis, it is very important to give a detailed characterization of P. aeruginosa regulatory network. The latter may lead to the development of strategies to disrupt its connectivity, thus, possibly decreasing the pathogenicity of this bacterium.
Conclusions
Here we report the topological and functional organization of the third largest regulatory network in bacteria. From our analysis, it is evident that the study of regulation in P. aeruginosa is biased towards particular biological processes, involved in pathogenesis and virulence. These processes include alginate and biofilm formation, production of virulence factors and antibiotic resistance, many of which are coordinated by quorum sensing in the bacterial population. Current data suggests, that motility, iron metabolism and anaerobic respiration might be less connected to these processes by now. All these processes are connected in the network via a hierarchical organization with 11 levels, and the connected parts of the network form a giant component with 650 genes, the 10% of them corresponded to TFs. Overall, the network has degree distribution and structural organizations as other biological networks known to date. A peculiar property of this network is the fact that its TFs are mainly auto-activated. This is the first time this mode of self-regulation is reported as dominant in a bacterial TRN. It remains to be revealed whether this property is really a characteristic of the entire network in this bacterium, or is just is property of this part of the network, which clearly controls adaptive, pathogenic and virulence processes. As it can be observed, regulatory information related to several important biological processes of P. aeruginosa is lacking; for instance, the regulation on the uptake of carbon sources and their metabolism, amino acid biosynthesis or cell-division. This bias makes difficult a complete analysis on the regulatory network of this bacterium and better compare it with regulatory networks of bacteria most characterized such as E. coli or B. subtilis. It might be that studying basic biological functions on this organisms we can understand the basis of their versatile metabolism, adaptiveness and pathogenecity. In special it is lacking the knowledge of the activity of the housekeeping sigma and transcription factors controlling activities of central metabolism. Because of this it will be very important for the community working on the biology of P. aeruginosa to study additional biological processes in order to have a more complete picture of the regulatory network in this bacterium. We hope this analysis will give insights in this direction to guide future work, with the aim of covering the many gaps of knowledge on this important bacterial model.
Biological data and representation
The general strategy for the curation of regulatory interactions is shown in Figure 1. Briefly, we searched PubMed with relevant key words, such as: P. aeruginosa, sigma or transcription factor, transcriptional regulation, etc. Data on regulatory networks were obtained from the literature and compiled in an Excel table including experimental evidence and references. The Additional file 1 shows the complete information for the interactions of the entire network. The regulatory interactions were drawn in a form of network using the Cytoscape software [28].
Transcriptional machinery sub-network
With the aim of analyzing the regulatory behavior of the transcriptional machinery of P. aeruginosa, the regulatory interactions present only among TFs, sigmas and anti-sigmas from the whole network were extracted ( Figure 4).
Computational analysis of the regulatory network
All the computational analyses on the network were made using the Octave free software http://www. octave.org. Analyses of degree, centrality, clustering coefficient, connectivity, cycles, paths and hierarchical levels were made according to previous definitions and following the approach as in [10]. Motif determination was made following the work by Uri Alon and coworkers calculating the probability of finding the same motif in a random network as the average of the motifs found in 1000 randomized networks, maintaining the same number of nodes, edges and the proportion of the type of regulatory interactions (positive, negative, dual) [24].
The G coefficient for global regulators
We computed the coefficient G, which indicates the global activity of a TF in a TRN as follows: Where, N TF indicates the total number of TFs (in the known network in each case), N G is the number of nonregulatory genes, and N SF is the number of sigma factors in the whole network. Additionally, TFR and GR, represents the number of TFs and non-regulatory genes regulated by each TF, respectively; SF represents the distinct sigma factors used by the promoters of genes regulated by each TF; and CR represents the number of TFs each TF co-regulates with.
Determination of biological modules/processes in the regulatory machinery network
With the aim of determining biological modules in the TRN we used a shortest path metric criteria among TFs and sigmas (we used the relation 1/D 2 where D is the distance between two nodes), as reported for E. coli [27]. Additionally, we manually grouped TFs and sigma factors in agreement to the functional classification of their regulated genes [4]. | 6,258 | 2011-06-14T00:00:00.000 | [
"Biology"
] |
Conditional probability, three-slit experiments, and the Jordan algebra structure of quantum mechanics
Most quantum logics do not allow for a reasonable calculus of conditional probability. However, those ones which do so provide a very general and rich mathematical structure, including classical probabilities, quantum mechanics as well as Jordan algebras. This structure exhibits some similarities with Alfsen and Shultz's non-commutative spectral theory, but these two mathematical approaches are not identical. Barnum, Emerson and Ududec adapted the concept of higher-order interference, introduced by Sorkin in 1994, into a general probabilistic framework. Their adaption is used here to reveal a close link between the existence of the Jordan product and the non-existence of interference of third or higher order in those quantum logics which entail a reasonable calculus of conditional probability. The complete characterization of the Jordan algebraic structure requires the following three further postulates: a Hahn-Jordan decomposition property for the states, a polynomial functional calculus for the observables, and the positivity of the square of an observable. While classical probabilities are characterized by the absence of any kind of interference, the absence of interference of third (and higher) order thus characterizes a probability calculus which comes close to quantum mechanics, but still includes the exceptional Jordan algebras.
Introduction
The interference manifested in the two-slit experiments with small particles is one of the best known and most typical quantum phenomena. It is somewhat surprising therefore that quantum mechanics rules out third-order interference. This was discovered by Sorkin [1] considering measures on the "sets of histories" with experimental set-ups like the well-known two-slit experiments, but with three and more slits. He introduced the interference terms I 2 and I 3 and detected that, although the second-order interference is a typical quantum phenomenon (I 2 ≠0), the third-order interference does not occur in quantum mechanics (I 3 =0).
In the present paper, Sorkin's interference terms I 2 and I 3 are ported to the framework of quantum logics with unique conditional probabilities which was introduced by the author in [2] and [3]. In [3] it was shown that each such quantum logic can be embedded in an order-unit space where a specific type of positive projections then represent the probability conditionalization similar to the Lüders -von Neumann quantum measurement process.
In this general framework, the identity I 3 =0 is not automatically given, and its role in a reconstruction of quantum mechanics from a few basic principles or in an axiomatic access to quantum mechanics based on a few interpretable postulates, are analysed in the paper. It is shown that the absence of third-order interference (I 3 =0) has some important consequences. It entails the existence of a product in the order-unit space generated by the quantum logic, which can be used to characterize those quantum logics that can be embedded in the projection lattice in a Jordan algebra. Most of these Jordan algebras can be represented as operator algebras on a Hilbert space, and a reconstruction of quantum mechanics up to this point is thus achieved.
Besides the identity I 3 =0, two further typical properties of quantum mechanics distinguishing it from more general theories are identified; these are a novel bound for quantum interference and a symmetry property of the conditional probabilities. This latter property was discovered by Alfsen and Shultz who used it as a postulate to derive the Jordan product for the quantum mechanical observables from it in their approach [4], and it was used in a similar way in [3], but a physical justification for it is hard to find. With the main result of the present paper, it can now be replaced by another postulate with a clearer physical meaning -namely the absence of third-order interference (I 3 =0).
The next two sections summarize those parts of [2] and [3] which are relevant for the subsequent sections. In section 4, the second-and third-order interference terms (I 2 and I 3 ) are considered and ported to the quantum logics with unique conditional probabilities. The bound for quantum interference and the symmetry property of the quantum mechanical conditional probabilities are studied in sections 5 and 6. In section 7, a useful type of linear maps is introduced, which is used in section 8 to analyse the case I 3 =0. A certain mathematical condition -the Jordan decomposition property -is outlined in section 9 and then used in section 10 to derive the product in the order-unit space from the identity I 3 =0. Section 11 finally addresses the question under which further conditions the order-unit space becomes a Jordan algebra.
Quantum logics with unique conditional probabilities
A quantum logic is the mathematical model of a system of quantum events or propositions. Logical approaches use the name "proposition", while the name "event" is used in probability theory and will also be preferred in the present paper. The concrete quantum logic of standard quantum mechanics is the system of closed linear subspaces of a Hilbert space or, more generally, the projection lattice in a von Neumann algebra.
Usually, an abstract quantum logic is assumed to be an orthomodular partially ordered set and, very often, it is also assumed that it is lattice. For the purpose of the present paper, however, a more general and simpler mathematical structure without order relation is sufficient. Only an orthocomplementation, an orthogonality relation and a sum operation defined for orthogonal events are needed. The orthocomplementation represents the logical negation, orthogonality means mutual exclusivity, and the sum represents the logical and-operation in the case of mutual exclusivity. The precise axioms were presented in [2] and look as follows.
The quantum logic E is a set with distinguished elements 0 and 1, an orthogonality relation ⊥ and a partial binary operation + such that the following six axioms hold for e,f,g∈E: Then 0'=1 and e''=e for e∈E. Note that an orthomodular partially ordered set satisfies these axioms with the two definitions (i) e⊥f iff f≤e' (ii) The sum e+f is the supremum of e and f for e⊥f.
The supremum exists in this case due to the orthomodularity.
A state is a map µ:E→[0,1] such that µ(1)=1 and µ(e+f) = µ(e) + µ(f) for orthogonal pairs e and f in E. Then µ(0)=0 and µ(e 1 +...+e k ) = µ(e 1 )+...+µ(e k ) for mutually orthogonal elements e 1 ,...,e k in E. Denote by S the set of all states on E. With a state µ and µ(e)>0 for an e∈E, another state ν is called a conditional probability of µ under e if ν(f) = µ(f)/µ(e) holds for all f∈E with f⊥e'. Furthermore, the following axioms were introduced in If these axioms are satisfied, E is called a UCP space -named after the major feature of this mathematical structure which is the existence of the unique conditional probability -and the elements in E are called events. The unique conditional probability of µ under e is denoted by µ e and, in analogy with classical mathematical probability theory, µ(f|e) is often written instead of µ e (f) with f∈E. The above two axioms imply that there is a state µ∈S with µ(e)=1 for each event e≠0, that the difference d in (OS6) becomes unique, and that e⊥e iff e⊥1 iff e=0 (e∈E).
Note that the following identity which will be used later holds for convex combinations of states µ,ν∈S (0<s<1): A typical example of the above structure is the projection lattice E in a von Neumann algebra M without type I 2 part; E = {e∈M: e*=e=e 2 }. The conditional probabilities then have the shape with e,f∈E, µ∈S and µ(e)>0. Note that on M is the unique positive linear extension of the state µ originally defined only on the projection lattice; this extension exists by Gleason's theorem [5] and its later enhancements to finitely additive states and arbitrary von Neumann algebras [6], [7], [8], [9]. The linear extension does not exist if M contains a type I 2 part. For the proof of equation (2), suppose that the state ν on E is a version of the conditional probability of the state µ under e and use the identity f=efe+efe'+e'fe+e'fe'. From ν(e')=0 and the Cauchy-Schwarz inequalty applied with the positive linear functional it follows 0 = efe' = e ' fe = e ' fe ' and thus ν(f) = efe. By the spectral theorem, efe can be approximated (in the norm topology) by linear combinations of elements in {d∈E:d⊥e'} = {d∈E:d≤e} for which ν coincides with µ/µ(e). The continuity of (due its positivity) then implies ν(f) = efe = efe/ e. Therefore the conditional probability must have this shape and its uniqueness is proved. Its existence follows from efe≥0 and efe=f for f≤e, since then ν(f) := efe/ e indeed owns all the properties of the conditional probability. Equation (2) reveals the link to the Lüders -von Neumann quantum measurement process. The transition from a state µ to the conditional probability µ e is identical with the transition from the state prior to the measurement to the state after the measurement where e represents the measurement result.
The embedding of the quantum logic in an order-unit space
A quantum logic with a sufficiently rich state space as postulated by (UC1) can be embedded in the unit interval of an order-unit space. In the present section, it will be shown that the existence and the uniqueness of the conditional probabilities postulated by (UC2) give rise to some important additional structure on this order-unit space, which was originally presented in [3].
A partially ordered real vector space A is an order-unit space if A contains an Archimedean order unit 1 [10], [11], [12]. The order unit 1 is positive and, for all a∈A, there is t>0 such that -t1 ≤ a ≤ t1. An order unit 1 is called Archimedean if na ≤ 1 for all n∈ℕ implies a≤0. An order-unit space A has a norm given by a = inf {t>0: -1t ≤ a ≤ 1t}. Each x∈A can be written as x=a-b with positive a,b∈A (e.g., choose a = ||x||1 and b = ||x||1 -x). A positive linear functional : A ℝ on an orderunit space A is bounded with ||ρ||=ρ(1) and, vice versa, a bounded linear functional ρ with ||ρ||=ρ(1) is positive.
The order-unit space A considered in the following is the dual space of a base-norm space V and, therefore, the unit ball of A is compact in the weak-*-topology σ(A,V). For ρ∈V and x∈A define x := x ; the map is the canonical embedding of V in its second dual V ** =A * . Then ρ∈V is positive iff is positive on A. For any set K in A, denote by lin K the σ(A,V)-closed linear hull of K and by conv K the σ(A,V)closed convex hull of K. For a convex set K, denote by ext K the set of its extreme points which may be empty unless K is compact. A projection is a linear map U:A→A with U 2 =U and, for a≤b, define Define e':=1-e and call e,f∈E orthogonal if e+f∈E. Then E satisfies the axioms (OS1),...,(OS6), and the states on E and the state space S can be considered as in section 2.
Proposition 3.1:
Suppose that A is an order-unit space with order unit 1 and that A is the dual of the base-norm space V. Moreover, suppose that E is a subset of [0,1] satisfying the three above conditions (a), (b), (c) and that the following two conditions hold: Proof. For e,f∈E with e≠f there is ρ∈V + with ρ(e-f)≠0. The restriction of ρ/ρ(1) to E then yields a state µ∈S with µ(e)≠µ(f). Therefore (UC1) holds.
Suppose e∈E and µ∈S with µ(e)>0. It is rather obvious that the map g → U e g / e on E provides a conditional probability of µ under e. Now assume that ν is a further conditional probability of µ under e. Then ν(e)=1 and thus = U e . From U e g∈lin{f∈E: f≤e} it follows that ν(g) = U e g = U e g / e for g∈E. Therefore, (UC2) holds as well. Note that the linear extension in (i) is unique since A = lin E. It shall now be seen that the situation of Proposition 3.1 is universal for the quantum logics with unique conditional probabilities; i.e., each UCP space has such a shape as described there.
Theorem 3.2:
Each UCP space E is a subset of the interval [0,1] in some order-unit space A with predual V as described in Proposition 3.1.
Then |ρ(e)| ≤ ||ρ|| for every e∈E. Let A be the dual space of the base-norm space V and let be the canonical embedding of µ∈V in V ** =A * . If x ≥ 0 for all µ∈S, the element x∈A is called positive and in this case write x≥0. Equipped with this partial ordering, A becomes an order-unit space with the order unit 1:=π(1), and the order-unit norm of an element x∈A is ||x|| = sup{ | x | : µ∈S }. With e∈E define π(e) in A via π(e)(ρ) := ρ(e) for ρ∈V. Then 0 ≤ ||π(e)|| ≤ 1, and π(e+f)=π(e)+π(f) for two orthogonal events e and f in E. Moreover, A is the σ(A,V)-closed linear hull of π(E).
If µ(e)=1 for µ∈S, then µ=µ e and U e x = (U e x)(µ) = x ; i.e., = U e . Thus, (U e U e x)(µ) = e e U e x = e e x = (U e x)(µ) for all µ∈S and hence for all ρ∈V. Therefore U e U e =U e , i.e., U e is a projection. Its positivity, σ(A,V)-continuity as well as U e 1=π(e) and U e π(f)=π(f) for f∈E with f≤e follow immediately from the definition.
Proof. Suppose e≤f and µ∈S. Then 1=µ e (e)≤µ e (f)≤1 for the conditional probability µ e implies µ e (f)=1 and hence Hence e=U e ′ f =U e (1-f)=e-U e f, and U e f=0. In the same way it follows that U f e=0. Therefore U f vanishes on U e A = lin{d∈E:d≤e} and U f U e =0. The identity U e U f =0 follows in the same way.
This identity now holds for all states µ and therefore U e' U f ' =U e f ' . In the same way it follows that The projections U e considered here are similar to, but not identical with the so-called Pprojections considered by Alfsen and Shultz in their non-commutative spectral theory [13]. A Pprojection P has a quasicomplement Q such that Px=x iff Qx=0 (and Qx=x iff Px=0) for x≥0. If U e x=x, then U e' x=U e' U e x=0 by Lemma 3.3, but U e' x=0 does not imply U e x=x.
An element P(1) with a P-projection P is called a projective unit by Alfsen and Shultz. In the case of spectral duality of a base-norm space V and an order-unit space A, the system of projective units in A is a UCP space if each state on the projective units has a linear extension to A (as with the Gleason theorem, or in condition (i) of Proposition 3.1).
The interference terms I 2 and I 3
With two disjoint events e 1 and e 2 , the classical conditional probabilities satisfy the rule f |e 1 e 2 e 1 e 2 = f |e 1 e 1 f |e 2 e 2 . Only because violating this rule, quantum mechanics can correctly model the quantum interference phenomena observed in nature with small particles.
For instance, consider the two-slit experiment and let e 1 be the event that the particle passes through the first slit, e 2 the event that it passes through the second slit, and f the event that it is registered in a detector located at a fixed position somewhere behind the screen with the two slits. Then f |e 1 e 1 is the probability that the particle is registered in the detector when the first slit is open and the second one is closed, f |e 2 e 2 is the probability that the particle is registered in the detector when the second slit is open and the first one is closed, and f |e 1 e 2 e 1 e 2 is the probability that the particle is registered in the detector when both slits are open. If the above rule were valid, it would rule out the interference patterns observed in the quantum physical experiments and correctly modelled by quantum theory. Therefore, a first interference term is defined by where µ is a state, f any event, and e 1 ,e 2 an orthogonal pair of events. While I 2 , f e 1 , e 2 = 0 in the classical case, it is typical of quantum mechanics that I 2 , f e 1 , e 2 ≠ 0. With three orthogonal events e 1 ,e 2 ,e 3 , a next interference term can be defined in the following way: Similar to the two-slit experiment, now consider an experiment where the screen has three instead of two slits. Let e k be the event that the particle passes through the k th slit (k=1,2,3) and f again the event that it is registered in a detector located somewhere behind the screen with the slits. Then f |e k e k is the probability that the particle is registered in the detector when the k th slit is open and the other two ones are closed, f |e i e j e i e j is the probability that the particle is registered in the detector when the i th slit and the j th are open and the remaining third slit is closed, and f |e 1 e 2 e 3 e 1 e 2 e 3 is the probability that the particle is registered in the detector when all three slits are open. The interference term I 3 , f e 1 , e 2 , e 3 is the sum of these probabilities with negative signs in the cases with two slits open and positive signs in the cases with one or three slits open. The sum would become zero if these probabilities were additive in e 1 ,e 2 ,e 3 as they are in the classical case. However, this is not the only case; I 3 , f e 1 , e 2 , e 3 =0 means that the detection probability with three open slits is a simple linear combination of the detection probabilities in the three cases with two open slits and the three cases with one single open slit. If the situation with three slits involves some new interference, the detection probability should not be such a linear combination and I 3 , f e 1 , e 2 , e 3 should not become zero. The fascinating question now arises whether I 3 , f e 1 , e 2 , e 3 = 0 or not. Considering experiments with three and more slits, this third-order interference term and a whole sequence of further higher-order interference terms were introduced by Sorkin [1], but in another form. He used probability measures on "sets of histories". When porting his third-order interference term to conditional probabilities, it gets the above shape. The same shape is used by C. Ududec, H. Barnum and J. Emerson [14] who adapted Sorkin's third-order interference term into an operational probabilistic framework. The further higher-order interference terms will not be considered in the present paper. It may be interesting to note that the absence of the third-order interference implies the absence of the interference of all orders higher than three.
Using the identity f |e= U e f /e , the validity of I 3 , f e 1 , e 2 , e 3 = 0 for all states µ is immediately equivalent to the identity If this shall hold also for all events f, this means The term I 3 e 1 , e 2 , e 3 does not any more depend on a state or the event f, but only on the orthogonal event triple e 1 ,e 2 ,e 3 . Note that I 3 e 1 , e 2 , e 3 is a linear map on the order-unit space A while I 3 , f e 1 , e 2 , e 3 = I 3 e 1 , e 2 , e 3 f is a real number. This interference term shall now be studied in a von Neumann algebra where the conditional probability has the shape f |e= efe/e for projections e,f and a state µ with µ(e)>0, and hence U e f=efe. Then Therefore, in a von Neumann algebra and in standard quantum mechanics, the identity I 3 , f e 1 , e 2 , e 3 = 0 always holds, which was already seen by Sorkin. This becomes a first interesting property of quantum mechanics distinguishing it from the general quantum logics with unique conditional probabilities (UCP spaces). It is quite surprising that quantum mechanics has this property because there is no obvious reason why I 3 , f e 1 , e 2 , e 3 should vanish while I 2 , f e 1 , e 2 does not. Likewise surprising are the bounds which quantum mechanics imposes on I 2 , f e 1 , e 2 and which will be presented in the next section.
A bound for quantum interference
Suppose that E is a UCP space. By Theorem 3.2, it can be embedded in an order-unit space A such that Proposition 3.1 holds. For each event e∈E define a linear map S e on A by S e x :=2 U e x2 U e ' x−x (x∈A).
Then S e 1 = 1. Furthermore, S e S e' x=S e ' S e x= x for x∈A; i.e., S e' is the inverse of the linear map S e and S e is a linear isomorphism. If it were positive, it would be an automorphism of the order-unit space A, but this is not true in general. It shall now be studied what the positivity of the maps S e would mean for the conditional probabilities.
Suppose e,f∈E. Then 0S e f means f 2U e f 2U e ' f , and this is equivalent to the following inequality for the conditional probabilities holding for all states µ. Exchanging f by f ' yields from 0S e f ' and thus a second inequality The above inequalities (5) and (7) introduce an upper bound and a lower bound for the interference term I 2 , f e , e ' . How these bounds limit the interference, is shown in Figure 5.1 using the inequalities (4) and (6) and not this specific interference term. The dashed diagonal line represents the classical case without interference, while the inequalities (4) and (6) allow the whole corridor between the two continuous lines and forbid the area outside this corridor. In a von Neumann algebra, U e f=efe for projections e,f and hence S e x=2exe+2e'xe'-x=(e-e')x(e-e') with x in the von Neumann algebra. Therefore the maps S e are positive and the above inequalities for the conditional probabilities ((4) and (6) -illustrated in Figure 5.1) and the resulting bounds for the interference term I 2 , f e , e ' (inequalities (5) and (7)) hold in this case. This is a second interesting property of quantum mechanics distinguishing it from other more general theories.
A symmetry property of the quantum mechanical conditional probabilities
Alfsen and Shultz introduced the following symmetry condition for the conditional probabilities in [4] and used it to derive a Jordan algebra structure from their non-commutative spectral theory: This condition arose mathematically and a physical meaning is not immediately at hand. Alfsen and Shultz's interpretation was: "The probability of the exclusive disjunction of two system properties is independent of the order of the measurements of the two system properties." However, the exclusive disjunction is not an event or proposition. The first summand on the left-hand side μ(f'|e)μ(e) is the probability that a first measurement testing e versus e' provides the result e and that a second successive measurement testing f versus f' then provides the result f' (= "not f"). The second summand on the left-hand side and the two summands on the right-hand side can be interpreted in the same way with exchanged roles of e,e',f,f '. Condition (A1) now means that the probability that a particle passes through the measuring arrangement shown in Figure 6.1 is identical with the probability that a particle passes through the measuring arrangement shown in Figure 6.2.
The symmetry condition for the conditional probabilities (A1) also plays a certain role in the study of different compatibility/comeasurability levels in [15].
Condition (A1) shall now be rewritten using the interference term I 2 , f e 1 , e 2 for two orthogonal events e 1 and e 2 . To remove the dependence on f and the state µ, first define in analogy to equation (3) I 2 e 1 , e 2 := U e 1 e 2 −U e 1 −U e 2 (8) which is a linear operator on the order-unit space A generated by an UCP space E. Note that I 2 , f e 1 , e 2 = I 2 e 1 , e 2 f , recalling the identity f |ee = U e f for states µ and events e and f. The validity of (A1) for all states µ is immediately equivalent to the identity Reconsidering the von Neumann algebras where U e f=efe, equation (9) becomes e(1-f)e + (1-e)f(1-e) =f(1-e)f + (1-f)e(1-f). Both sides of this equation are identical to e+f-ef-fe. Therefore (A1) holds for all states and all events in a von Neumann algebra and in the standard model of quantum mechanics. This is a third interesting property of quantum mechanics distinguishing it from the general quantum logics with unique conditional probabilities (UCP spaces). However, it is a mathematical property without a clear physical reason behind it and the absence of third-order interference (I 3 =0) is the more interesting property from the physical point of view.
The linear maps T e
In addition to the U e , a further useful type of linear maps T e on the order-unit space A generated by a UCP space E shall now be defined for e∈E: In a von Neumann algebra, this becomes T e x = (ex+xe)/2, which is the Jordan product of e and x. This is a first reason why some relevance is expected from the maps T e on the order-unit space A. A second reason is that (A1) holds for all states µ if and only if T e f=T f e. This follows from equation (9). Thus, using the linear maps T e , the symmetry condition (A1) for the conditional probabilities is transformed to the very simple equation T e f=T f e. Actually, this is what brought Alfsen and Shultz to the discovery of (A1).
Some further important characteristics of these linear maps shall now be collected. Suppose e∈E and x∈A. Using the above definition of T e immediately yields T e xT e ' x=x. Moreover, Originally, the T e were derived from the U e , and equation (12) and therefore ||U e -U e' ||≤1. Hence ||T e ||≤1. Since T e e=e and ||e|| = 1 for e≠0, it follows that ||T e ||=1 unless e=0 and T e =0.
Lemma 7.1: If two events e and f in an UCP space E are orthogonal, then the linear maps T e and T f on the order-unit space A generated by E commute: T e T f =T f T e .
Proof. By Lemma 3.3 the four projections U e , U e' , U f , U f' commute pairwise. The linear maps T e and T f then commute by equation (11). An important link between the linear maps T e (e∈E) and Sorkin's interference term I 3 will be considered in the next sections.
Quantum logics with I 3 = 0
The interference term I 2 , f e 1 , e 2 vanishes for all states µ and all events f if and only if I 2 e 1 , e 2 = U e 1 e 2 −U e 1 U e 2 = 0. The general validity of this identity for all orthogonal event pairs e 1 ,e 2 means that the map e → U e is orthogonally additive in e. It will later be seen in Proposition 8.2 that the general validity of I 3 e 1 , e 2 , e 3 = 0 for all orthogonal event triples e 1 ,e 2 ,e 3 means that the map e → T e is orthogonally additive in e. This will follow from the next lemma.
Lemma 8.1:
Suppose that E is a UCP space and that A is the order-unit space generated by E. (i) If e 1 ,e 2 ,e 3 are three orthogonal events in E, then I 3 e 1 , e 2 , e 3 =U e 1 e 2 e 3 I 3 e 2 e 3 ' , e 2 , e 3 .
(ii) If e and f are two orthogonal events in E, then
and thus Proof. The implication (i) ⇒ (ii) is obvious, the implication(ii) ⇒ (iii) follows from Lemma 8.1 (ii) and the implication (iii) ⇒ (i) from Lemma 8.1 (i) and (ii) both. It now becomes clear that the symmetry condition (A1) for the conditional probabilities implies the absence of third-order interference, i.e., that I 3 e 1 , e 2 , e 3 =0 for all orthogonal events e 1 , e 2 , e 3 . Recall that (A1) implies the identity T e f = T f e for all events e and f. Therefore, T e is orthogonally additive in e and I 3 e 1 , e 2 , e 3 =0 follows from Proposition 8.2.
Jordan decomposition
In this section, the orthogonal additivity of T e in e (by Lemma 7.2) will be used to define T x for all x in the order-unit space A. The T e are not positive and this involves some difficulties, which can be overcome when the real-valued bounded orthogonally additive functions on the UCP space E satisfy the so-called Jordan decomposition property or the stronger Hahn-Jordan decomposition property. These decomposition properties are named after the French mathematician Camille Jordan (1838 -1922) who originally introduced the first one for functions of bounded variation and signed measures. The Jordan algebras are named after another person; this is the German physicist Pascual Jordan (1902 -1980). Consider functions ρ: Ε → ℝ which are orthogonally additive (i.e., ρ(e+f)=ρ(e)+ρ(f) for orthogonal elements e and f in E) and bounded (i.e., sup{|ρ(e)| : e∈E} < ∞). Let R denote the set of all these functions ρ on E. Then R comprises the state space S. The UCP space E is said to have the Jordan decomposition property if each ρ∈R can be written in the form ρ=sµ−tν with two states µ and ν in S and non-negative real number s and t. It has the ε-Hahn-Jordan decomposition property if the following stronger condition holds: For each ρ∈R and every ε>0 there are two states µ and ν in S, non-negative real number s and t and an event e in E such that ρ=sµ−tν and µ(e)<ε as well as ν(e')<ε. This ε-Hahn-Jordan decomposition property was studied in the framework of quantum logics by Cook [16] and Rüttimann [17]. The usual Hahn-Jordan decomposition property for signed measures is even stronger requiring that µ(e)=0=ν(e').
Note that the projection lattices of von Neumann algebras have the Jordan decomposition property and the ε-Hahn-Jordan decomposition property, but this is not obvious. It is well-known that theses types of decomposition are possible for the bounded linear functionals on the algebra, but they are needed for the orthogonally additive real functions on the projection lattice. Bunce and Maitland Wright [18] showed that each such function on the projection lattice has a bounded linear extension to the whole algebra (this is the last step of the solution of the Mackey-Gleason problem which had been open for a long time) and then the decomposition of this extension provides the desired decomposition by considering the restrictions of the linear functionals to E. Lemma 9.1: Suppose that E is a UCP space with the Jordan decomposition property and I 3 e 1 , e 2 , e 3 =0 for all orthogonal events e 1 , e 2 , e 3
in E. Then, for each x in the order-unit space A generated by E, the map e→T e x from E to A has a unique σ(A,V)-continuous linear extension y→T y x on A.
Proof. Consider V as defined in the proof of Theorem 3.2. In the general case, only the inclusion V⊆R holds, but the Jordan decomposition property ensures that V=R.
For each ρ in V and each x in A, define a function ρ x on E by x e := T e x . By Proposition 8.2, ρ x is orthogonally additive in e. Moreover, | x e |∥∥∥T e ∥∥x∥∥∥ ∥x∥ for e∈E. Thus ρ x is bounded and lies in R=V. Let x be its canonical embedding in V ** =A * and, with y∈A, consider the real-valued bounded linear map ρ → x y on V. It defines an element T y x in V * =A such that the map y → T y x is linear as well as σ(A,V)-continuous on A and coincides with the original T e x for y=e∈E.
The product on the order-unit space
Under the assumptions of the last lemma, a product can now be defined on the order-unit space A by yx:=T y x. It is linear in x as well as in y, but there is certain asymmetry concerning its σ(A,V)continuity. The product yx is σ(A,V)-continuous in y∈A with x∈A fixed, and it is σ(A,V)continuous in x∈A with y=e∈Ε fixed, but generally not with other y∈Α. Moreover, 1x=x and y1=y, ee=e and ||ex||≤||x|| for the elements x and y in A and the events e in E. However, the inequality ||yx||≤||y|| ||x|| is not yet available; this requires the ε-Hahn-Jordan decomposition property and will follow from the next lemma. Suppose ε>0. Due to the ε-Hahn-Jordan decomposition property, there are two states µ and ν in S, non-negative real number s and t and an event f in E such that ρ=sµ−tν and µ(f)<ε as well as Recall from the proof of Theorem 3.2 that ||ρ|| = inf{ r ∈ℝ : r≥0 and ρ ∈ r conv(S∪-S)}. The case s=t=0 cannot occur since this would imply ρ=0. Therefore write Proof. Suppose e∈E and x∈A with ||x||≤1. I.e., x∈ [-1,1]. Then ||(e-e')x|| = ||ex -e'x|| = ||T e x -T e' x||. Moreover, T e x -T e' x = U e x -U e' x by equation (11) and U e x -U e' x ∈[-1,1] by (13). Therefore, ||(e-e')x|| ≤1. This holds for all e∈E. Lemma 9.1 and Lemma 10.1 then imply that ||yx||≤1 for all y∈ [-1,1]. So far it has been seen that the Jordan decomposition property and the absence of third-order interference entail a product yx on the order-unit space A generated by the UCP space E which is linked to the conditional probabilities via the identity U e x = 2e(ex) -ex holding for the events e and the elements x in A. This follows from equation (12).
The product yx is neither commutative nor associative and thus far away from the products usually considered in mathematical physics. The common product of linear operators is not commutative, but associative and the Jordan product a•b:=(ab+ba)/2 is not associative, but commutative.
Jordan algebras
The elements of a UCP space E represent the events or propositions; they can be considered also as observables with the simple discrete spectrum {0,1} representing a yes/no test experiment. However, what is the meaning of the elements x in the order-unit space A generated by E? One might expect that they represent other observables with a larger and possibly non-discrete spectrum and that x is the expectation value of the observable represented by x in the state µ. If it is assumed that they do so, one would also expect a certain behaviour.
First, one would like to identify the elements x 2 =xx and, more generally, x n (inductively defined by x n+1 :=xx n ) in A with the application of the usual polynomial functions t→t 2 or t→ t n to the observable.
Second, the expectation value of the square x 2 should be non-negative; this means that x 2 0 for all x in A and for all states µ, and therefore x 2 ≥0 in the order-unit space A. Third, one would like to have the usual polynomial functional calculus allocating an element p(x) in A to each polynomial function p such that p1(x)p 2 (x)=q(x) whenever p1 and p 2 are two polynomial functions and the polynomial function q is their product. Since the product in A is not associative, x n x m need not be identical with x n+m . If x n x m =x n+m holds for all x in A and for all natural numbers n and m, A is called power-associative. The availability of the polynomial functional calculus for all elements in A means that A is power-associative, and vice versa.
The following theorem shows that these requirements make A a commutative Jordan algebra; i.e., the product is Abelian and satisfies the Jordan condition x(x 2 y)=x 2 (xy). The Jordan condition is stronger than power-associativity and, in general, power-associativity does not imply the Jordan condition.
∥.
A second commutative product is now introduced on A by x•y := (xy + yx)/2. Note that, due to the power-associativity, x n is identical with the two products. Equipped with this new commutative product, A then becomes a Jordan algebra. This follows from a result by Iochum and Loupias [19] (see also [20]).
Since A is the dual of V, A with • is a so-called JBW algebra [12]. The JBW algebras represent the Jordan analogue of the W * -algebras and these are the same as the von Neumann algebras, but Suppose e∈E. If µ is any state on E with µ(e)>0, the map f→ 2 e°e°f −e°f /e for f∈E defines a version of the conditional probability (which was shown in [2]) and must coincide with U e f /e . Therefore U e f =2 e°e°f −e°f and for all e and f in E. The product on a JBW algebra is σ(A,V)-continuous in each component. It then follows in a first step that ey = e•y for all e in E and y in A, and in the second step that xy = x•y for all x and y in A. Note that the order of the two steps is important because of the asymmetry of the σ(A,V)-continuity of the product xy. Thus finally xy = yx. When the starting point is the projection lattice E in a JBW algebra M (e.g., the selfadjoint part of a von Neumann algebra with the Jordan product) without type I 2 part, the order-unit space A generated by E is the second dual A = M ** of M. It contains M by the canonical embedding in its second dual, but is much larger (unless M has a finite dimension); M is the norm-closed linear hull of E in A, while A is the σ(A,V)-closed linear hull of E.
A rich theory of Jordan algebras is available and most of them can be represented as a Jordan sub-algebra of the self-adjoint linear operators on a Hilbert space. The major exception is the Jordan algebra consisting of the 3×3 matrices with octonionic entries, and the other exceptions are the socalled exceptional Jordan algebras which all relate to this one [12].
A reconstruction of quantum mechanics up to this point has thus been achieved from a few basic principles. The first one is the absence of third-order interference and the second one is the postulate that the elements of the constructed algebra exhibit an behaviour which one would expected from observables. The third one, the ε-Hahn-Jordan decomposition property, is less conceptional and more a technical mathematical requirement.
Conclusions
The combination of a simple quantum logical structure with the postulate that unique conditional probabilities exist provides a powerful general theory which includes quantum mechanics as a special case. It is useful for the reconstruction of quantum mechanics from a few basic principles as well as for the identification of typical properties of quantum mechanics that distinguish it from other more general theories. Three such properties have been studied: a novel bound for quantum interference, a symmetry condition for the conditional probabilities, and the absence of third-order interference (I 3 =0); the third property has been the major focus of this paper.
In the framework of the quantum logics with unique conditional probabilities, the absence of third-order interference (I 3 =0) has some important consequences. It entails the existence of a product in the order-unit space generated by the quantum logic, which can be used to characterize those quantum logics that can be embedded in the projection lattice in a Jordan algebra. Most of these Jordan algebras can be represented as operator algebras on a Hilbert space, and a reconstruction of quantum mechanics up to this point is thus achieved.
As the identity I 2 =0 distinguishes the classical probabilities, the identity I 3 =0 thus characterizes the quantum probabilities. It may be expected that there are other more general theories with I 3 ≠0, and the quantum logics with unique conditional probabilities may provide an opportunity to establish them. For the time being, however, the projection lattices in the exceptional Jordan algebras are the only known concrete examples which do not fit into the quantum mechanical standard model, but still have all the properties discussed in the present paper and do not exhibit third-order interference. Further examples can be expected from Alfsen and Shultz's spectral duality, but unfortunately all the known concrete examples of this theory are either covered by the Jordan algebras or do not satisfy the Gleason-like extension theorem (part (i) of Proposition 3.1). Besides the examples with I 3 ≠0, it would also be interesting to find examples where the identity I 3 =0 holds, but where the product on the order-unit space is not power-associative or where the squares are not positive or where the symmetry condition (A1) for the conditional probabilities does not hold.
The possibility that no such examples exist is not anticipated, but cannot be ruled out as long as no one has been found. It would mean that every quantum logic with unique conditional probabilities can be embedded in the projection lattice in a Jordan algebra and that third-order interference never occurs. Moreover, the further postulates concerning the behaviour of the observables (power-associativity, positive squares) would then as well become redundant in the reconstruction of quantum mechanics. If this could be proved, the reconstruction process could be cleared up considerably.
In any case, it seems that third-order interference can play a central role in the reconstruction of quantum mechanics from a few basic principles, in an axiomatic access to quantum mechanics with a small number of interpretable axioms, as well as in the characterisation of the projection lattices in von Neumann algebras or their Jordan analogue -the JBW algebras -among the quantum logics. Mathematically, the symmetry property (A1) of the conditional properties (see section 6) can play the same role, which was shown by Alfsen and Shultz [4] and by the author [3]. However, it has a less clear physical meaning than the third-order interference and, therefore, the approach of the present paper seems to be superior form a physical point of view. | 10,598 | 2009-12-01T00:00:00.000 | [
"Physics"
] |
Transforming Commerce: A Bibliometric Exploration of E-Commerce Trends and Innovations in the Digital Age
ABSTRACT
INTRODUCTION
In the wake of the digital revolution, the commerce landscape has undergone a major transformation, giving rise to the widespread phenomenon of e-commerce.The emergence of the Internet and its integration into daily life has not only revolutionized communication and information sharing, but also fundamentally changed the way business transactions are conducted.E-commerce, characterized by the buying and selling of goods and services through electronic networks, has emerged as a cornerstone of modern commerce, fundamentally changing the traditional commercial paradigm [1]- [3].The digital age has ushered in an era of unprecedented connectivity, accessibility, and convenience.Consumers now have the power to browse, compare, and purchase products from the comfort of their homes, transcending geographical barriers and time zones.Similarly, businesses have harnessed the potential of e-commerce to reach global markets with minimal physical infrastructure, thus transforming supply chains, distribution networks, and customer engagement strategies [4]- [6].
A bibliometric analysis of previous research on e-commerce trends and innovations in the digital age is not yet available.However, there are several studies that focus on various aspects of innovation and trends in the digital age, which can provide insights into the e-commerce domain.Some of these studies include: Patterns of Introducing Innovations in the Digital Age and Their Impact on Managerial Staff and Employees [7]: This study examines the pattern of introducing innovation in the digital age and its impact on managers and employees.The authors analyze how innovation patterns affect the components of managers' and employees' labor potential at the physical, mental, and intellectual levels.Innovation Development Trends in International Tourism: A Content and Bibliometric Analysis [8]: This article aims to determine current trends in innovation development in international tourism through the systematization of scientific literature and analytical and bibliometric analysis of the term "innovation in tourism.While this study does not specifically focus on e-commerce trends and innovations, it provides valuable insights into various aspects of innovation and trends in the digital age.These insights can be useful for understanding the broader context of ecommerce trends and innovations in our research.
However, the e-commerce landscape is not static; it is constantly evolving in response to technological advancements, changing consumer behavior and market demands.The rapid pace of innovation in the digital realm has led to the emergence of new business models, transformative technologies and innovative strategies.From mobile commerce (m-commerce) to social commerce, from artificial intelligence-driven personalization to blockchain-enabled secure transactions, the e-commerce ecosystem is a hotbed of dynamic trends and cutting-edge innovations [13]- [20].This research aims to embark on a comprehensive journey through the terrain of e-commerce in the digital era.Using the powerful lens of bibliometric analysis, we aim to investigate the scientific literature that has accumulated over the years.Our focus is not only to provide a historical account of the evolution of e-commerce, but also to unpack the intricacies of the trends and innovations that have shaped this domain.Through careful examination of publications, authors, keywords, and citations, we seek to uncover the intellectual structures underlying e-commerce research and shed light on the critical insights emerging from scholarly discourse.
The purpose of this research is twofold.First, it aims to provide a comprehensive overview of the transformation of commerce, outlining the pivotal role that e-commerce has played in reshaping the traditional business paradigm.Second, it seeks to distill the essence of the scholarly dialog that surrounds e-commerce, pinpointing the driving forces behind its evolution, the emerging research themes, and the prominent contributors who have shaped the discourse.By tracing the intellectual landscape of e-commerce, it seeks to illuminate the past, present, and potential future trajectories of commerce in the digital age.
LITERATURE REVIEW
The digital revolution has brought about a paradigm shift in the world of commerce, giving rise to the phenomenon of electronic commerce, commonly known as ecommerce.Over the past few decades, ecommerce has evolved from a novel concept to a dominant force in the global business landscape.This section delves into the existing literature to provide a comprehensive overview of the key trends and innovations that have shaped e-commerce in the digital age.
Evolution of E-Commerce
The beginnings of ecommerce can be traced back to the early 1990s when the internet started to gain traction as a means of communication and information sharing.
E-commerce initially centered on online retail, which allowed consumers to purchase goods online and have them delivered to their doorstep [21]- [23].However, the scope of e-commerce has expanded far beyond online shopping.
Today, e-commerce encompasses a wide array of activities, including business-tobusiness (B2B) transactions, digital services, and even the exchange of intangible products such as information and knowledge [24]- [27].
Key Trends in Electronic Commerce
Mobile Commerce (M-Commerce): The proliferation of smartphones and mobile devices has given rise to mobile commerce, which allows consumers to engage in ecommerce transactions on the go.Mcommerce has opened up new opportunities for businesses to reach consumers through mobile apps, responsive websites and locationbased services [28]- [30].Social Commerce: The integration of social media platforms into e-commerce strategies has given birth to social commerce.Consumers can now discover, share, and purchase products directly from social media platforms, blurring the lines between social interactions and commercial transactions [31]- [33].
Personalization and Recommendation
Systems: Ecommerce platforms leverage advanced data analytics and artificial intelligence to offer a personalized shopping experience.Recommendation systems analyze user behavior to suggest products tailored to individual preferences, increasing user engagement and driving sales [34], [35].
Blockchain Technology: The decentralized and secure nature of Blockchain is already being used in ecommerce, especially in supply chain transparency and secure payment systems.This technology has the potential to revolutionize trust and accountability in online transactions [36]- [39].
Innovations in E-Commerce Research
Virtual Reality (VR) and Augmented Reality (AR): VR and AR technologies are making inroads into e-commerce, offering immersive shopping experiences that allow consumers to visualize products before buying.Virtual showrooms and try-before-you-buy features are changing the way customers interact with products online.Artificial Intelligence (AI) and Machine Learning: AI is revolutionizing ecommerce through chatbots, customer service automation and predictive analytics.AI-based personalization enhances customer experience and improves decisionmaking for businesses [36], [40].
Voice commerce: Voiceactivated devices and virtual assistants are changing the way consumers interact with e-commerce platforms.Voice commerce allows users to make purchases, reorder items, and retrieve information using natural language.Subscription Models: Subscription-based ecommerce models are growing in popularity, offering convenience to consumers and a continuous flow of products or services.This model spans a wide range of industries, from beauty and fashion to digital media and software.
Scientific Dialogue on E-Commerce
The literature on e-commerce reflects the interdisciplinary nature of the field, covering aspects of technology, marketing, economics, psychology, and more.Scholarly discourse focuses on understanding consumer behavior in online environments, analyzing the impact of e-commerce on traditional business models, and exploring the ethical and regulatory challenges posed by the digital commerce landscape [41]- [45].
Research Gaps and Future Directions
Although e-commerce research has made substantial progress, there are still some gaps and avenues for further exploration.The impact of new technologies, such as AI, blockchain, and VR, on the ecommerce ecosystem presents a rich area for research.In addition, as the digital landscape continues to evolve, research investigating the challenges of cross-border e-commerce, sustainability, and the ethical implications of data-driven commerce is becoming increasingly relevant.
METHODS
The methodology used in this study involves a robust bibliometric analysis to explore trends and innovations in the field of e-commerce in the context of the digital age.To achieve this, we will use VOSviewer, a powerful bibliometric analysis tool, to visualize and analyze the relationships among various elements of scientific literature, such as publications, authors, keywords, and citations.The following subsections outline the step-by-step process for conducting bibliometric analysis using VOSviewer [46].
Data Collection and Processing
The data collection stage involves accessing reputable academic databases such as Web of Science, Scopus, and Google Scholar.A comprehensive search strategy will be designed using keywords such as "e-commerce", "digital commerce", "online shopping", "e-business", and related terms with the help of Publish or Perish (PoP) software.The time frame selected for data collection will cover the last two decades to ensure that the latest developments in e-commerce research are covered.
After obtaining the search results, the data will be cleaned to remove duplicate entries and irrelevant records.The remaining dataset will undergo preprocessing, where important metadata such as publication year, author name, affiliation, keywords, and number of citations will be extracted and organized.
. Mapping Results
The bibliometric analysis utilizing VOSviewer has yielded a wealth of insights into the trends and innovations that have shaped the field of e-commerce in the digital age.The visualization of co-authorship networks, keyword clusters, citation patterns, and emerging technologies has unveiled a comprehensive view of the intellectual landscape.This section presents and discusses the key findings derived from the analysis.(3) e-government (25) Access, e-government, economies Table 2 presents the outcomes of the cluster analysis based on keyword cooccurrence.Each cluster represents a thematic grouping of related keywords that have frequently occurred together within the e-commerce literature.The most frequent keywords within each cluster provide insights into the dominant themes and research areas that have emerged within the field.Let's delve into the discussion of the clusters: 1. Cluster 1: E-commerce and Personalization: This cluster encompasses a broad range of topics related to ecommerce and personalization.The high occurrence of keywords such as "e-commerce application," "mobile ecommerce," and "personalization" highlights the significance of tailoring user experiences and leveraging mobile technologies.This cluster reflects the evolution of e-commerce to provide personalized, mobilecentric shopping experiences through various applications and devices.2. Cluster 2: Supply Chain and Web Site: Cluster 2 revolves around supply chain dynamics and webrelated concepts.The presence of keywords like "cross border ecommerce," "internet pharmacy," and "supply chain" indicates a focus on the logistics and global aspects of ecommerce.This cluster underscores the importance of efficient supply chain management and the role of websites in facilitating cross-border trade.
Cluster 3: Transaction:
The keywords within this cluster revolve around transactions and their various dimensions.The emphasis on "buyer," "e-commerce adoption," and "transaction" reflects research concerning the process of completing online transactions, including buyer behavior, legal aspects, and the transactional nature of e-commerce.
Cluster 4: Digital Economy:
Cluster 4 delves into the concept of the digital economy and its impact on commerce.The presence of keywords like "communication technology," "digital economy," and "internet economy" suggests a focus on understanding how digital technologies shape economic activities and generate revenue within the e-commerce landscape.
Cluster 5: Trust:
Trust emerges as a central theme in Cluster 5, which explores aspects related to trust-building in ecommerce interactions.Keywords such as "evolution," "social commerce," and "trust" point to the evolution of trust mechanisms in online environments, emphasizing the role of social interactions and consumer trust in driving ecommerce success.6. Cluster 6: e-government: Cluster 6 represents a smaller cluster with keywords related to egovernment.Although less frequent, the presence of keywords like "egovernment" and "economies" suggests a connection between government initiatives and the digital economy, possibly exploring how governments leverage e-commerce for economic development.Overall, the cluster analysis provides a holistic view of the diverse themes and research areas within the e-commerce literature.Each cluster represents a unique facet of e-commerce research, reflecting the multidimensional nature of the field as it responds to technological advancements and changing consumer behaviors.These clusters offer valuable insights for researchers, practitioners, and policymakers seeking to navigate the complex landscape of ecommerce trends and innovations.Overall, the analysis of keyword occurrences provides a snapshot of the key themes, concepts, and research areas that dominate the e-commerce literature.The prevalence of certain keywords indicates the depth and breadth of scholarly exploration, while the fewer occurrences of others suggest emerging and specialized research directions within the evolving landscape of e-commerce trends and innovations.
The Eastasouth Journal of Information System and Computer Science (ESISCS)
CONCLUSION
The journey through the landscape of e-commerce trends and innovations in the digital age has yielded profound insights into the transformative power of technology on commerce.
The bibliometric analysis, facilitated by VOSviewer, has revealed a multidimensional and interconnected ecosystem where research themes, collaborations, and innovations intersect.From the co-authorship networks that exemplify the collaborative spirit within the field to the keyword clusters that unravel emerging trends, the scholarly discourse on ecommerce showcases its dynamic nature.
E-commerce has evolved far beyond its early days of online shopping, permeating every facet of business and consumer interactions.The prevalence of terms like "transaction," "trust," and "framework" underscores the foundational elements that underpin e-commerce research.At the same time, emerging terms such as "mobile device," "future," and "advergames" hint at the horizons of innovation that continue to expand.
In this digital age, commerce is not just about transactions; it is about personalization, seamless experiences, and harnessing emerging technologies.The intellectual tapestry woven through this analysis serves as a guide for researchers, businesses, and policymakers as they navigate the dynamic e-commerce landscape.As technology continues to evolve, so too will the nuances of e-commerce, shaping the future of commerce in ways we can only begin to fathom.This research contributes to understanding and navigating this transformation, offering a deeper appreciation of how commerce has adapted and thrived in the digital age.
Figure 2 .
Figure 2. Research Trend Bibliometric analysis, facilitated by VOSviewer, has illuminated the intellectual
Figure 3 Figure 3
Figure 3. Visualization Cluster Figure 3 presents the results of the cluster analysis based on the co-occurrence of keywords.Each cluster represents a thematic grouping of related keywords that co-occur frequently in the e-commerce literature.The
Figure 4 .
Figure 4. Author Collaboration The co-authorship network analysis revealed clusters of researchers who have collaborated on e-commerce-related research.The visualization showcased the collaborative dynamics within the field, highlighting research groups, institutions, and individuals | 3,147.2 | 2023-08-28T00:00:00.000 | [
"Business",
"Computer Science",
"Economics"
] |
Image completion algorithm of anthurium spathes based on multi-scale feature learning
Machine vision has been used to grade the potted anthurium plant in large-scale production recently. Images are taken to measure the number and size of anthurium spathes. However, due to the limitation of the shooting angle, the occlusion problem reduces the accuracy of measurement. It is necessary to segment the overlapping spathes and repair the incomplete ones. The traditional image completion model has good performance on missing small areas, but it is not satisfactory for missing large areas. In this article, a multi-scale fusion Recurrent Feature Reasoning (RFR) network was proposed to repair the spathe images. Unlike the traditional RFR, a multi-layer component was used in the feature reasoning module. This network can combine multi-scale features to complete the learning task and obtain more details of the spathe, which makes the network more advantageous in image completion when missing large areas of spathes. In this study, a comparison experiment between this network and the widely used image completion network was performed, and the results showed that this network performed well in all types of image completion, especially with large-area incomplete images.
Introduction
With the continuous growth of the potted Anthurium industry, automation production technology is in urgent need of improvement (GuoHua and Shuai, 2017).As an important part of anthurium automation production, grading plays a vital role in the whole production process (Pour et al., 2018;Soleimanipour et al., 2019;Soleimanipour and Chegini, 2020;Wei et al., 2021).At present, the manual grading method, which is characterized by low efficiency and accuracy, has been gradually replaced by automatic detection technology based on machine vision (Liu et al., 2023).Anthurium detection is used to measure anthurium plant height, crown width, number of flame spathes, flame spathe width, and other indicators from an image taken from above.However, when detected, the spathe is overlapped and cannot be fully visualized, which leads to a large measurement error and low classification accuracy.Therefore, it is particularly important to improve the measurement accuracy of potted anthurium by repairing the incomplete spathe after segmentation and calculating its complete contour.
Traditional image completion is mainly carried out by geometric modeling, texture matching, line fitting, and other methods (Li et al., 2013;Xia et al., 2013;Huang et al., 2014;Chan et al., 2015;Amayo et al., 2017;Iizuka et al., 2017;Li et al., 2019;Ge et al., 2022).For example, Wang et al. repaired incomplete maize leaf images by detecting and matching broken points, as well as fitting the Bezier curve of broken leaves, and then completed the segmentation of corn plants (Wang et al., 2020).Lu et al. propose a radial growth repair algorithm to repair broken roots, which takes the main root tips as the starting point and allows them to grow along the radial path.The repair accuracy of root length and diameter can reach 97.4% and 94.8%, respectively (Lu et al., 2021).Luo et al. propose a grape berry detection method based on edge image processing and geometric morphology.This method introduces edge contour search and corner detection algorithms to detect the concave position of the berry edge contour and obtain the optimal contour line.The average error of the berry size measured by this method is 2.30 mm (Luo et al., 2021).All these methods are aimed at repairing images with small missing areas but are not suitable for occluded images with large missing areas.
The development of deep learning technology has led to improved performance in image completion.(Haselmann et al., 2018;Wang et al., 2021;Zaytar and El Amrani, 2021;Belhi et al., 2023;Xiang et al., 2023;Guo and Liu, 2019;Chen et al., 2023;Mamat et al., 2023) However, it is used less in the field of agriculture.Chen et al. (2018) show that the model is better than the current most commonly used completion networks and has a better image completion effect on plant seedlings.
At present, the deep learning algorithms for plant completion mostly include convolutional neural networks and generative adversarial networks (Geetharamani and Arun Pandian, 2019;Vaishnnave et al., 2020;Ugȗz and Uysal, 2021;Yu et al., 2021;Bi and Hu, 2020;Jiao et al., 2019;Abbas et al., 2021;Zhao et al., 2021;Kumar et al., 2022;Padmanabhuni and Gera, 2022;Wong et al., 2020).Convolutional neural networks use encoders to extract potential features of the known parts of the image, and then generate the unknown parts through decoders of the image, while adding constraints to optimize repair results.The generative adversarial network is composed of two sub networks: a generator and a discriminator.The generator is used to generate relevant image data, and the discriminator is used to determine whether it is a generated image or a real image.The two networks confront each other and learn until they reach a balanced state.RFR and CRFill are two types of methods, respectively.As shown in Table 1, these two types of methods are not satisfactory when missing large areas, which needs to be improved.
This article first analyzed the problems of existing models.Then, an improvement plan was proposed, and a comparative experiment was conducted between the improved model and the existing model.The main contributions of this article are as follows.
1.The visualization method was used to analyze the reasons for the poor performance of the RFR network in large-area missing image completion.
2. A model with strong feature learning ability was proposed, which effectively reduces the image completion error when large areas are missing.
Dataset establishment
Photos are taken by Azure Kinect depth camera from above in a 1.8m×1.3m×1.8mbox.The distance between camera and platform is 100cm.Two 50cm long, 32w power LED light strips
Visualization of recurrent feature reasoning network
RFR (Li et al., 2020) is a neural network (Szegedy et al., 2015) model for image completion, which completes images by reducing the range to be filled layer by layer, and the reuse of the parameters effectively reduces the size and running time of the model.As shown in Figure 1, the RFR network includes three modules: an area identification module, a feature reasoning module, and a feature merging operation.The area recognition module is used to calculate the current area that needs to be filled, and then the feature inference module fills the area.These two modules run in series and alternatively.Each run outputs the filling result of the current round.Feature merging operation fuses the features of multiple scales and outputs the final filled image.
It can be seen from the results in Table 1 that the traditional RFR model performs well in completing missing small areas, but poorly in missing large areas.The feature reasoning module is the core of RFR, which directly affects the completion accuracy.In this study, a visual method is used to separate all feature channels of the convolution layer in the feature reasoning module, and then the visual feature map of each channel is obtained.This is helpful to determine the reason for the inadequate completion when missing large areas (Arora et al., 2014).
Figure 2 is the visual feature map of the coding layer and decoding layer in the feature inference module for both large and small missing cases.As can be seen from the figure that compared to small area missing images, there are more blue color blocks in large area missing images, which indicates that the semantic information extracted by the encoder in large-area missing images is relatively less.This will result in the decoder to lack enough effective information during image reconstruction, so that the weight of the red feature map is concentrated in a few feature dimensions, thus the repair result is poor.
Model construction
In order to solve this problem, the Inception module is proposed to enhance the learning and reasoning ability of the network on various scale features, so as to improve the completion accuracy when large areas are missing.Figure 3 shows the model used in this study, composed of an area identification module, a feature reasoning module, and a feature merging operation.However, different from the single layer network of RFR, a multi-layer network is used in the feature reasoning module of the model, which can fuse the features of various subsets to complete the learning task and extract richer features.
As shown in Figure 4, the Inception module is added to each layer of the feature reasoning module.The input image for this layer is processed through four parallel layers, and then fused by 3×3 conv.Due to the different sizes of convolutional kernels, 1×1 convolutions, 3×3 convolutions, and 5×5 convolutions have different sensory fields.More detailed features are obtained when the sensory field is smaller.At the same time, the global features are obtained by Maxpooling.This improved model can obtain not only detailed features of different scales but also global features.Therefore, the information is more comprehensive which is critical for improving the accuracy of image completion.The calculation process is as follows: h (i) 4 = MaxPooling(X (i) ) ( 4) w he r e h (i) 1 , h (i) 2 , h (i) 3 , h (i) 4 r e p r e s e n t t h e o u t p u t o f 1 × 1 convolutions, 3×3 convolutions, 5×5 convolutions, and Max Pooling, respectively, and h (i) represents the result of concatenation and convolution processing of the four components.W i 1x1 represents the weight matrix of the 1×1 convolution of layer i, and similarly, W i 3x3 , W i 5x5 represent the weight matrix of 3×3 convolution and 5×5 convolution, respectively.X (i) is the output feature map of the previous layer network.b (i) 1 , b (i) 2 , b (i) 3 represent the bias terms of the 1×1 convolution, 3×3 convolution, and 5×5 convolution, respectively.ReLU and LeakyReLU represent activation functions.
Figure 5 shows the visual results of the improved feature reasoning module on the above large area missing image.Compared with RFR, The visual results of the feature reasoning module.
this model has richer feature information in both the encoding and decoding processes, which also indicates that this model effectively improves the learning ability of the feature reasoning module.
Model training
Transfer learning was used to speed up the convergence, and the Adam optimizer was used, with a learning rate of 2 Â 10 4 , a batch size of 4, and 120,000 as the number of iterations.The multi-scale image completion network was trained by a joint loss function consisting of the content loss of the completed part, the content loss of the whole spathe, and the perceptual and style losses, to improve the consistency of the completed image and the real image.The expression of the loss function is as follows: where, L sum is the loss function, L hole is the content loss of the completed part, L valid is the content loss of the whole spathe, L perceptual is the perceptual loss, and L style is the style loss.In this article, the loss function coefficients are set as l hole = 1, l valid = 6, l perceptual = 0:05, l style = 120.The random mask algorithm was used to automatically generate missing images during training.Two types of comparison experiments were designed according to the proportion and type of the missing, and the completion results were compared with Four widely used models CRFill, RFR, CTSDG and WaveFill.CTSDG uses a bi-gated feature fusion (Bi-GFF) module to integrate reconstructed structure and texture maps to enhance their consistency.WaveFill is based on wavelet transform, breaking the image into multiple frequency bands and filling in the missing areas in each band separately.Feature reasoning module layer structure of multi-scale feature fusion RFR.
Evaluation indicator
In this article, qualitative and quantitative methods are used to evaluate the repair result.The quantitative evaluation mainly shows the degree of improvement in image completion compared with other models, which needs to be analyzed in combination with the results of qualitative analysis.
To evaluate the completion accuracy of the model, the following polar coordinate system was established on the surface of the spathe.As shown in Figure 6, assuming that the quality of each pixel in the image is uniform, the centroid of the spathe is taken as the pole.Horizontally to the right indicates 0°of the polar axis, and counterclockwise is the positive direction of the angle.The unit of the axes in polar coordinate system are pixels.The contours extraction algorithm is used, and the contours of the completed spathe and real spathe are r 1 (q) and r 2 (q), respectively.Mean square error(MSE) is a commonly used index to measure the difference between the predicted value and the actual observed value, and it can well represent the degree of fitting between the predicted contour and the real contour.The calculation formula is as follows: where, q is the polar angle of the polar coordinate system, and r(q) is the distance from the centroid to the contour edge when the polar angle is q.A smaller mean squared error correlates with higher measurement accuracy.
FIGURE 6
Accuracy evaluation of spathe image completion.The visual results of the improved feature reasoning module.
Qualitative evaluation
To qualitatively evaluate the completion effect of the model in this study, repair experiments were carried out on the images of 15 groups of test sets, and the results are shown in Tables 3-5.It can be seen from the completion results that CRFill has the worst performance of the three types and can hardly repair images with large missing areas.
RFR is prone to errors, and the results are variable.The other three models can complete a similar spathe profile.However, compared with the model presented in this article, CTSDG and WaveFill cannot accurately complete the detailed features in images with large missing areas, and the total deviation is large.The model in this article adds the Inception module, which utilizes additional reasoning features in large-area completion.Even when the image is 40-50% missing, the model still demonstrates good completion ability, which is very important for phenotype detection.Figure 7 shows the image completion accuracy of each model for different incomplete types.It can be seen that CRFill performs poorly in all types of image completion and differs significantly from others.For the top-missing type, other models perform well with a mean square error between 15.27 and 21.18.Meanwhile, for the side-missing and bottom-missing types, the mean square error of image completion increases, but the values of our model are still the smallest among all models, which are 23.66 and 54.83 respectively.Among the five types, the error in the bottom-missing type is large, which is due to the significant individual variances at the bottom of the spathe and make it difficult to complete.However, of all the types, the model in this study has the best performance and the accuracy is higher than other models.
FIGURE 7
Average MSE of different incomplete types.
Influence of incomplete proportions on image completion results
Figure 8 shows the image completion accuracy of each model for different incomplete proportions.The incomplete proportion has a significant impact on the completion accuracy.Similarly, aside from CRFill, the completion accuracy of the other models is adequate when the incomplete proportion is less than 10%.With an increase in the incomplete proportion, the average MSE gradually increases.When the incomplete proportion reaches 40-50%, the average MSE significantly increases.This is because when a large proportion is missing, the number of features used for reasoning is reduced.It can also be seen from the results that when the incomplete proportion is less than 40%, the average MSE of the model in this study has little difference from RFR, CTSDG, and WaveFill.However, when the incomplete proportion reaches 40-50%, the model shows a significant advantage, approaching half of the error of the others.According to the qualitative evaluation results in 3.2.1,there are obvious errors in the repair results of other models when the incomplete proportion is 40%-50%, while the result of the improved model is in good agreement with the original image.Therefore, this error is considered acceptable.
It can be seen from the results of comparative experiments that Inception module combines different convolution layers in parallel and connects the result matrices processed by different convolution layers together to form a deeper matrix in depth dimension.It can aggregate visual information of different sizes and reduce the dimensionality of larger matrices to extract features of different scales.Therefore, the information obtained by the improved model is more abundant, and the accuracy of image completion is effectively improved.
Conclusion
This study analyzed the reasons for the low completion accuracy of the RFR model in large-area missing images by visual methods.The Inception module was proposed to improve the feature reasoning module of the RFR model, which further improved the feature learning ability.The improved model could obtain not only the detailed features of different scales, but also the global features, which perform well.In missing type comparison experiment with existing widely used models, it can be seen that the top-missing type has the best results, followed by side-missing type, and bottommissing type has the largest repair error due to significant individual differences.However, no matter what kind of missing type, the model presented in this article has obvious advantages.In the comparative experiments of different missing parts, it was found that the repair error of each model increased with the increase of missing proportion.When the incomplete ratio reaches 40-50%, the error of this model is only half that of others.This shows that this model performs best regardless of the type and proportion of missing images, and its repair accuracy is significantly higher than other models, which is crucial for improving the measurement accuracy of potted anthurium.Although the method in this article integrates features of different scales, it is still based on two-dimensional images, ignoring the influence of the tilt Angle of spathes.If depth Average MSE of different incomplete proportions.Wei et al. 10.3389/fpls.2023.1281386Frontiers in Plant Science frontiersin.orginformation can be introduced to repair images in three-dimensional space in the future, the repair accuracy can be further improved.
repaired root images of dicotyledonous and monocotyledonous plants using a convolutional neural network.Da Silva et al. (2019) reconstructs the damaged leaf parts by training a convolutional neural network model using synthetic images and then estimated the defoliation level.Silva et al. (2021) predicts the original leaf shape and estimates the leaf area based on conditional adversarial nets.Experiments show that this method can be used for leaf image completion.Zeng et al. (2022) proposed a plant point cloud completion network based on a Multi-scale Geometry-aware Transformer to solve the problem of leaf occlusion between plant canopy layers.The results
FIGURE 1
FIGURE 1Structure of the RFR network.
FIGURE 3
FIGURE 3Structure of a multi-scale feature fusion RFR network.
TABLE 1 Image
Completion effects of different models.thesameheight as the camera, located on both sides of the camera 37.5cm apart, and the two light strips are at a 60°Angle to the horizontal direction.60pots of anthurium are used for image collection, and then the complete spathe images are extracted manually.Together with those searched from the internet, a total of 901 spathe images were collected in this study, including 726 for training and 175 for testing.Each image has a resolution of 256 x 256 and contains only one complete spathe.To improve the learning ability of the model, 726 images of the training set were scaled, rotated, and translated, and 5,320 training samples were obtained.To evaluate the performance of each model in images of different missing types and proportions, 15 groups of test samples were generated from 175 images of the test set.As shown in Table2, each group was generated by 175 original images as required, and a total of 2625 images were obtained.Since the spathes are usually in the canopy layer, The occlusion of spathes is mainly caused by adjacent paths or leaves.it is found in the previous images that most of the occlusion are on one side, mainly on the root and side, and a few are on the top.Therefore, masks are randomly generated at these three parts in proportion for image training and testing.
Frontiers in Plant Science frontiersin.orgare installed at
TABLE 2
Example images of the test set.
TABLE 4
Comparison of side missing image completion results.
TABLE 3
Comparison of top missing image completion results.
TABLE 5
Comparison of bottom missing image completion results. | 4,634 | 2023-12-13T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Characterization of subcutaneous and omental adipose tissue in patients with obesity and with different degrees of glucose impairment
Although obesity represents a risk factor for the development of type 2 diabetes mellitus (T2DM), the link between these pathological conditions is not so clear. The manner in which the different elements of adipose tissue (AT) interplay in order to grow has been suggested to have a role in the genesis of metabolic complications, but this has not yet been fully addressed in humans. Through IHC, transmission electron microscopy, cytometry, and in vitro cultures, we described the morphological and functional changes of subcutaneous and visceral AT (SAT and VAT) in normoglycemic, prediabetic and T2DM patients with obesity compared to lean subjects. In both SAT and VAT we measured a hypertrophic and hyperplastic expansion, causing similar vascular rarefaction in obese patients with different degrees of metabolic complications. Capillaries display dysfunctional basement membrane thickening only in T2DM patients evidencing VAT as a new target of T2DM microangiopathy. The largest increase in adipocyte size and decrease in adipose stem cell number and adipogenic potential occur both in T2DM and in prediabetes. We showed that SAT and VAT remodeling with stemness deficit is associated with early glucose metabolism impairment suggesting the benefit of an AT-target therapy controlling hypertrophy and hyperplasia already in prediabetic obese patients.
Obesity and type 2 diabetes mellitus (T2DM) are closely linked global health care problems. Although the close relationship between weight gain and development of T2DM is well established, factors determining or accelerating the progression from the former to the latter continue to be debated 1 . The microvasculature seems to play a central role in glucose homeostasis; impaired capillary recruitment and tissue perfusion have been shown to reduce glucose uptake, leading to insulin resistance (IR) and metabolic alterations 2 . It has been demonstrated that low vascular density in adipose tissue (AT) is associated with an abnormal metabolic profile [3][4][5] . Furthermore, microangiopathy is a hallmark of diabetes complications. The cellular elements of the microvasculature appear to be particularly sensitive to injury from sustained hyperglycemia. The microvascular impairment or damage differs considerably between different tissues and organs, but it is possible to assume that all organs are affected simultaneously to a greater or lesser degree 6 . Moreover, there is now abundant evidence that microvascular dysfunction/disease is not restricted to the presence of Type 1 or 2 diabetes but is also present in many diseases which are, or represent, the first step in the progression toward diabetes itself, such as obesity. Therefore, we supposed Results patients' clinical characterization. The demographic, anthropometric and metabolic characteristics of the lean subjects and obese patients are detailed in Table 1. The lean subjects, who were age-matched with the obese patients, showed a lower incidence of comorbidities (Hypertension, Dyslipidemia, Obstructive Sleep Apnoea Syndrome). As expected, fasting plasma glucose (FPG) was similar in the lean and obese normoglycemic (ob N) patients; BMI and blood leptin level were not statistically different in the three groups of obese patients. FPG levels and the HOMA IR were statistically different in the obese groups; they rose markedly as glycemic impairment increased (obese prediabetic and diabetic patients, ob preDM and T2DM). Ob T2DM patients displayed a mean glycated hemoglobin A1c (HbA1c) value of 66 ± 21 mmol/mol (8.2 ± 4.1%) and 56% of them presented microalbuminuria (181 ± 186 mg/g creatinine) (data not shown). Regarding the three most important www.nature.com/scientificreports www.nature.com/scientificreports/ microvascular complications (nephropathy, neuropathy and retinopathy) only 11/50 patients reported one or more of them (data from 7 patients were not available). This low rate of diabetic microvascular complications is probably due to the short duration of the diabetes. Indeed 58% (33/57) of our patients had had a recent disease diagnosis (less than 5 years). Thirty-eight percent of the ob preDM patients had impaired fasting glucose, 26% had impaired glucose tolerance, 36% had both. The glucose and insulin area under the curve (AUC) were thus lower in the ob N compared to the ob preDM patients. The incidence of comorbidities and the systemic inflammation markers (high sensitivity C-reactive protein, hs-CRP; tumor necrosis factor-alpha, TNF-α; interleukin-6, IL-6) were higher in the ob T2DM compared to the ob N patients.
Clinical data of subgroups of subjects analyzed in each different experiment were reported in Supplementary Information (Supplementary Tables S1-S6).
Adipose tissue morphological and gene expression studies. For the immunohistochemical (IHC) analysis, we selected patients with similar age among the groups described in Table 1. Histomorphometric evaluations ( Fig. 1) highlighted that the SAT adipocytes were larger than the VAT ones in all the groups studied (P < 0.001, Mann-Whitney test). The median adipocyte area in both the SAT and VAT of all the obese subgroups was significantly higher compared to that of the lean group. Interestingly, adipocytes of ob preDM were larger than those of ob N and the ob T2DM subgroup showed an adipocyte size comparable to ob preDM ( Fig. 1A-C). This is an important result, considering that the ob preDM patients selected for IHC analysis were "early" prediabetic. Indeed even if their glucose AUC was similar to the whole prediabetic population, their insulin AUC and HOMA IR values were more similar to those of the ob N subgroup (ob preDM vs ob N IHC; HOMA IR : 3.6 ± 1.8 vs 3.3 ± 1.9, P = 1.0; insulin AUC: 15463 ± 12065 vs 10535 ± 4853 mU/L per min, P = 0.84) (Supplementary Table S1).
Capillary density was higher in the VAT than in the SAT in all of the groups (P < 0.001, Mann-Whitney test) (Fig. 1D,E). The density of vessels was lower in both fat depots in the obese patients compared to the lean, but there were no differences between the obese subgroups ( Fig. 1A,D,E). IHC data obtained from SAT and VAT positively correlate ( Fig. 2A,B). Moreover, peroxisome proliferator-activated receptor-gamma isoform 2 (PPARG2) mRNA was significantly upregulated in the obese patients compared to the lean, and the ob T2DM group had the lowest levels of all three groups ( Supplementary Fig. S1A,B).
Hypoxia inducible factor 1 alpha subunit (HIF1A) was upregulated only in the SAT of the obese patients compared to that of the lean group; the ob T2DM subgroup exhibited the highest expression level in SAT ( Supplementary Fig. S1C,D). The vascular endothelial growth factor A (VEGFA) expression in the SAT and VAT was similar in all the patients analyzed ( Supplementary Fig. S1E,F).
Capillary basement membrane (CBM) analysis of adipose tissue.
In literature, CBM thickness is described as being correlated with age 13 , therefore in order to avoid this bias we selected the more possible age-matched patients to perform the transmission electron microscopy (TEM) analysis (Supplementary Table S2). It is worth noting that no patients in the ob T2DM group selected for TEM analysis reported diabetic microvascular complications but 60% presented microalbuminuria. The CBM thickness of the VAT of these patients was measured to investigate AT microangiopathy (Fig. 3). Only the ob T2DM patients showed a significant increase in CBM thickness compared to the other 3 groups (Fig. 3C). No significant differences were observed between the lean, ob N and preDM patients, although a pattern of progressive thickening was noted.
The CBM thickness was strongly positively correlated with FPG ( Fig. 3D) and HOMA IR (Fig. 3E) and not with age (data not shown).
The immunophenotype and adipogenic potential of stromal vascular fraction (SVF). The percentage of SAT-ASCs (CD45−/31−/34+) on the SVFs was significantly higher in the ob N compared to that in the lean group (Fig. 4A,B). VAT (Fig. 4C) in all the obese patients showed a significant ASC enrichment compared to that in the controls. Both SAT-and VAT-ASCs percentage was lower in the ob preDM compared to the ob N subgroup; this reduction was observed also in the ob T2DM subgroup to the same extent ( Fig. 4B,C). The VAT contained a higher number of ASCs than the SAT did in all the patients. The number of adipogenic precursors was not correlated with age (data not shown), though the ob T2DM patients analyzed were older than the ob N and preDM patients (Supplementary Tables S3, S4).
The ASC phenotype was further characterized in the attempt to evaluate the expression of typical mesenchymal stem cell markers such as CD73, CD90, CD105, CD271 and pericyte marker CD146 (Supplementary Table S7). More than 90% of the CD45−/31−/34+ cells expressed CD73 and CD90 in both depots and similar patterns occurred in all the patient subgroups. CD105 was, instead, expressed only by 30-70% of the CD45−/31−/34+ in both the SAT and VAT of the lean subjects and obese patients. More specifically, the percentage of CD45−/31−/34+/105+ cells was similar in the SAT of all the groups, but it had increased in the VAT of all the obese subgroups compared to the lean group. Similar results were obtained with regard to CD271 antigen expression. A small ASC percentage (<2%) expressed the pericyte marker CD146 and there were no differences between the patient groups or AT depots.
The percentage of endothelial progenitors (CD45−/31+/34+) was higher in the SAT of all the obese patients compared to that in the lean (Fig. 4D) and this trend was similar, even if not significant, in the VAT (Fig. 4E). Importantly, the percentage of stem cells in the SAT was positively correlated with that in the VAT (Fig. 2C,D). No differences in the number of endothelial mature cells (CD45−/31+/34−) were observed in the two AT depots of the various groups (Fig. 4F,G).
Lastly, the capacity to differentiate towards the adipogenic lineage was investigated in obese patients whose clinical characteristics are described in Supplementary Tables S5, S6. In vitro differentiation revealed a lower (2019) 9:11333 | https://doi.org/10.1038/s41598-019-47719-y www.nature.com/scientificreports www.nature.com/scientificreports/ percentage of mature adipocytes from the SAT-SVF of ob preDM and T2DM compared to the ob N subgroup (Fig. 5A). SVFs isolated from the VAT displayed a very low adipogenic potential with no differences between the subgroups (Fig. 5B). These results were confirmed by the gene expression analysis of 3 adipose-specific genes (PPARG2, FABP4 and ADIPOQ), quantified in cell cultures upon in vitro adipogenic differentiation (Fig. 5C-H).
Discussion
Since AT plasticity contributes to the pathogenesis of obesity and related metabolic complications 9,14 , we aimed to explore AT remodeling in normoglycemic, prediabetic and diabetic patients with obesity.
Several lines of evidence suggested that AT vascularization changes could influence fat growth and its metabolic features 4,5,15,16 . Confirming and extending on previous studies, we showed the presence of capillary rarefaction in the SAT and VAT of the obese patients compared to the lean subjects 3,4,17,18 . Surprisingly, the degree of vascularization was similar in the obese subgroups independently of any further increase in adipocyte size or worsening in the metabolic profile (Fig. 1D,E). We demonstrated that among obese patients there are no differences in the number of AT capillaries, neither in VEGFA expression ( Supplementary Fig. S1E,F) nor in the www.nature.com/scientificreports www.nature.com/scientificreports/ quantification of endothelial mature cells isolated from AT-SVFs (Fig. 4F,G). Thus, obesity-related T2DM did not seem to be triggered by vascular network defects. At the same time, the capillary rarefaction and the higher levels of the HIF1A we found in the SAT of the obese patients ( Supplementary Fig. S1C) confirm the hypothesis that hypoxia could play a central role in AT dysfunction in obesity, as previously reported 3,18-22 . Our AT expression profile analysis showed that HIF1A upregulation is not associated with a parallel induction in VEGFA expression ( Supplementary Fig. S1C,E). Although the endothelial precursors were higher in the AT of the obese patients compared to lean controls (Fig. 4D,E), they seemed to be unable to differentiate into mature endothelial cells or to form capillary structures (Figs 1D,E and 4F,G), suggesting that HIF1A upregulation during obesity induces only a detrimental transcriptional program contributing to AT dysfunction 17,19,20,23 .
Our TEM investigation in VAT highlighted a significant CBM thickening only in ob T2DM patients ( Fig. 3A-C) and a positive correlation between this thickening and FPG and HOMA IR (Fig. 3D,E), but not with BMI values. These results support the hypothesis that obesity per se does not affect CBM structure and that thickening is mostly related to the exposure to chronic hyperglycemia. Our results suggested that the capillaries of VAT of obese T2DM patients could be functionally impaired contributing to the worsening of hypoxic conditions and AT inflammation leading to a more severe adiposopathy associated with long-term metabolic complications present in overt T2DM. CBM thickening of the retina and the kidney has long been known to be a crucial anatomic feature of microangiopathy in patients with Type 1 and 2 DM 6 but this is the first study describing VAT microangiopathy as a function of glucose metabolic impairment in obese patients.
Regarding AT morphology, we observed a clear increase in adipocyte size in patients with obesity when compared to lean controls at the same extension in SAT and in VAT, but intriguingly the largest cell size observed in patients with obesity and T2DM was also detected in the prediabetic condition ( Fig. 1A-C). Thus, alterations in AT morphology did not seem to be a result of prolonged hyperglicemia, hyperinsulinemia or elevated IR degree in the patients studied. We recently published that the insulin signalling pathway is similarly active in the AT of prediabetic and normoglycemic obese patients and it is downregulated in diabetic obese patients 24 . Therefore, the increase in adipocyte size seems pre-existent to insulin signalling alterations and could be linked to genetic or epigenetic characteristics of patients. The increasing size of adipocytes in the obese patients compared to the lean controls and the general observation that the adipocytes were larger in the SAT than in VAT (Figs 1A-C and 2A) also concur with previous reports [25][26][27] . The huge AT expansion in obesity was also confirmed by the marked up-regulation of PPARG2 expression in both depots ( Supplementary Fig. S1A,B).
We observed an increase of ASCs, which could be considered a marker of AT hyperplastic growth, in the obese patients compared to lean controls and a shrinking of their number in the obese patients with glycemic impairment (Fig. 4A-C). Accordingly, the in vitro adipogenic potential of SAT-SVF cells from the ob preDM and T2DM patients was lower than that in the ob N (Fig. 5A,C,E,G), supporting the theory that the balance between www.nature.com/scientificreports www.nature.com/scientificreports/ hypertrophy and hyperplasia in AT growth contributes to metabolic impairment 25,[28][29][30] . Despite the fact we only had the possibility to perform cross-sectional observations on our obese cohort, several of our results were consistent with previous findings obtained by a prospective longitudinal study on the Pima Indian population 30 . www.nature.com/scientificreports www.nature.com/scientificreports/ Although SAT seems to conserve some beneficial features such as the higher amount of endothelial precursors and a greater in vitro adipogenic potential, many of our findings obtained in SAT correlate with those obtained in VAT (Fig. 2) suggesting that both AT depots could play a role in severe obesity and metabolic complications. Figure 6 well represents all our data showing that AT in obesity grows both through hypertrophy and hyperplasia causing vascular rarefaction that is not related to glucose metabolism impairment. Mature adipocytes are enlarged and there are significantly fewer ASCs early in the prediabetic condition. These alterations in AT remodeling are also present into overt diabetes in association with the development of AT diabetic microangiopathy.
Our results suggest that the dynamic balance between AT hyperplasia and hypertrophy, rather than vascular network impairment, is the early crucial alteration triggering the pathogenesis of impaired glucose homeostasis in severe obesity. Future studies will be necessary to identify the mechanisms leading to these AT structural changes and to develop a specific AT targeting therapy for prediabetic obese patients.
Methods
Patients. In order to investigate AT architecture, paired SAT and VAT samples were collected from 177 obese patients and from 18 normal weight non-diabetic (lean) subjects considered the control group. The baseline clinical evaluation included a complete medical history and clinical examination. The patients' anthropometric measurements were taken, and hematological and biochemical parameters were determined ( Table 1). The study's exclusion criteria were: a history of malignancy, chronic inflammatory diseases such as ulcerative colitis and/or Crohn's disease, active infectious diseases, drug or alcohol abuse and, for the lean subjects, T2DM.
All the obese patients, attending the Center for the Study and the Integrated Treatment of Obesity (Ce.S.I.T.O) of Padua Hospital between January 2014 and June 2016, were selected with a matched BMI. In accordance with the American Diabetes Association criteria they were divided into 3 groups depending on their glycemic profile: 62 were ob N, 58 ob preDM and 57 ob T2DM 31 . One hundred sixty-nine obese patients underwent a laparoscopic sleeve gastrectomy and 8 underwent a gastric bypass.
The lean subjects (18.5 ≤ BMI ≤ 24.9 Kg/m 2 ) were attending the Division of General Surgery or the Ce.S.I.T.O of Padua Hospital; AT samples were harvested from them during abdominal surgery (laparoscopic cholecystectomy, fundoplication, colic resection for diverticular disease, rectal prolapse reduction).
All adipose tissue samples were collected during laparoscopic surgery in the abdominal region. In particular, VAT was obtained harvesting at least 1 cm 3 of omental fat, while SAT was obtained excising 1 cm 3 of subcutaneous fat at trocar site, both in obese and lean subjects.
The incidence of the three principal obesity-related comorbidities (hypertension, dyslipidemia and obstructive sleep apnoea syndrome) was calculated for each group of patients. Patients were considered affected by one of the above-mentioned comorbidities when they were receiving specific treatment or when they met international criteria 32-34 . Biochemical analysis. Blood biochemical analyses were performed after an 8-hour fast. FPG, insulin, lipid profile, hs-CRP, TNF-α, IL-6 and leptin levels were measured in all of the obese patients studied. The HOMA IR was used to calculate the insulin resistance index 35 . A 3-hour 75 g oral glucose tolerance test (OGTT) was performed for blood glucose and insulin plasma levels at baseline and 30, 90, 120, 150, 180 minutes after glucose www.nature.com/scientificreports www.nature.com/scientificreports/ loading (180 mL of syrup with 82.5 g glucose monohydrate equal to 75 g of glucose). Glucose and insulin AUC was calculated. OGTT was not performed in obese patients with a history of diabetes, and fasting insulin was not measured in patients receiving insulin treatment, meaning that HOMA IR was not calculated in these patient categories. In patients with T2DM we also measured HbA1c (by high performance liquid chromatography) and microalbuminuria by the albumin/creatinine ratio (urinary albumin/creatinine ratio <30 mg/g creatinine was considered normal). Biochemical measurements were performed using diagnostic kits standardized according to the World Health Organization First International Reference Standard: glucose (Glucose HK Gen.3, Roche Diagnostic, USA), insulin, IL-6, TNF-α (IMMULITE 2000 Immunoassay, Siemens Healthcare GmbH, Germany), hs-CRP (CardioPhase High Sensitivity C-Reactive Protein, Siemens Healthcare) and leptin (Leptin-RIA-CT, Mediagnost, Germany). IHC analysis. Paired SAT and VAT samples were fixed in 4% formaldehyde (Diapath S.p.A, Bergamo, Italy), paraffin embedded, cut into 5 µm thick sections and stained with monoclonal mouse anti-human CD31 (clone JC70A, 1:100; DakoCytomation, Carpinteria, CA, USA). The selected patients were of similar age, while their main demographic, anthropometric and metabolic characteristics were comparable to the whole population (Supplementary Table S1). Indirect immunohistochemistry was performed with a Dako labeled streptavidin biotin-horseradish peroxidase kit. AT images were captured at 20X magnification with a Leica DFGC450 digital camera (Leica DM LB2 light microscope) in at least 10 different fields (up to a minimum of 200 random adipocytes) per specimen of each patient. The adipocytes' sizes were measured using LAS Software (Leica Microsystems Inc., Deerfield, IL, USA). The median adipocyte area for each field was used to calculate the median adipocyte area for each subgroup of patients. The number of capillaries per mm 2 was determined counting positive capillary lumens in the same fields in which the adipocytes' areas were measured. TEM analysis. VAT samples were fixed in 2% glutaraldehyde-2% paraformaldehyde in 0.1 M sodium cacodylate buffer pH 7.4 for 1 hour at 4 °C, postfixed in osmium tetroxide 1% for 1 hour at 4 °C and embedded in an Epon-Araldite mixture. Ultrathin sections (60-70 nm) were obtained with an Ultratome V ultramicrotome (LKB, Stockholm, Sweden), counterstained with uranyl acetate and lead citrate and viewed with a Tecnai G2 microscope (FEI, Hillsboro, OR, USA) operating at 100 KV. Images were captured with a Veleta (Olympus Soft Imaging System GmbH, Münster, Germany) digital camera.
Morphometric evaluation of CBM from TEM images at 93000X magnification was performed using Image J software, as described by Baum, on at least 45 capillaries for each group of patients 36 . Only images showing capillaries cut perpendicular to their long axes were used for analysis.
SVF extraction and flow cytometry analysis.
Depending on tissue availability, SVF cells from AT were freshly isolated for ex vivo multiparametric flow cytometric analysis and/or primary adipocytes culture.
SAT and VAT biopsies were minced and digested in collagenase type II solution (1 mg/mL) (Sigma-Aldrich, St. Louis, MO, USA), centrifuged (350 xg, 10 min) and the red blood cells were removed using a standard lysis buffer, as previously described by Sanna et al. 37 .
Adipogenic differentiation. 1 × 10 5 SVF cells per well were seeded in duplicate in 96-well plates in a human standard medium and, when the cells reached confluence, adipogenic differentiation was induced using an adipogenic medium as described in Borgo et al. 24 . At the end of differentiation (12 days), RNA was extracted from the cell cultures (two 96-well replicates lysed together), as described in the paragraph below, and the cell cultures (two 96-well replicates) were fixed in 10% formalin/PBS and stained with Oil-Red O (Sigma-Aldrich) solution in 40% isopropanol. After 3 PBS washes, the percentage of mature adipocytes was estimated by observing the specific staining for lipid droplets during double blind observation with a Leica DM IL LED inverted microscope. | 5,079.2 | 2019-08-05T00:00:00.000 | [
"Medicine",
"Biology"
] |
Adaptive bandwidth kernel density estimation for next-generation sequencing data
Background High-throughput sequencing experiments can be viewed as measuring some sort of a "genomic signal" that may represent a biological event such as the binding of a transcription factor to the genome, locations of chromatin modifications, or even a background or control condition. Numerous algorithms have been developed to extract different kinds of information from such data. However, there has been very little focus on the reconstruction of the genomic signal itself. Such reconstructions may be useful for a variety of purposes ranging from simple visualization of the signals to sophisticated comparison of different datasets. Methods Here, we propose that adaptive-bandwidth kernel density estimators are well-suited for genomic signal reconstructions. This class of estimators is a natural extension of the fixed-bandwidth estimators that have been employed in several existing ChIP-Seq analysis programs. Results Using a set of ChIP-Seq datasets from the ENCODE project, we show that adaptive-bandwidth estimators have greater accuracy at signal reconstruction compared to fixed-bandwidth estimators, and that they have significant advantages in terms of visualization as well. For both fixed and adaptive-bandwidth schemes, we demonstrate that smoothing parameters can be set automatically using a held-out set of tuning data. We also carry out a computational complexity analysis of the different schemes and confirm through experimentation that the necessary computations can be readily carried out on a modern workstation without any significant issues.
Introduction
High-throughput sequencing (HTS) has become a central technology in genome-wide studies of protein-DNA interactions, chromatin-state modifications, gene regulation and expression, copy number variations, etc. [1,2]. In many cases, such experiments can be viewed abstractly as attempting to measure a "signal" f that varies across the genome. For instance, if the DNA that is sequenced comes from chromatin immunopreciptation (ChIP) of a transcription factor, then the signal f is expected to have the highest amplitude in regions of the genome where the factor binds most strongly. If the sequenced DNA comes from reverse transcription of RNA, then f is expected to have the highest amplitude in regions of the genome that are most actively transcribed. Of course, experience with HTS technologies has shown that such genome-wide signals also reflect other biases or influences-due to, for example, sequencing, chromatin accessibility, mappability, etc. [3]. Techniques for correcting such biases are beginning to emerge [4,5]. Regardless, highthroughput sequencing continues to generate numerous important insights into the molecular networks that govern the cell.
Various analysis algorithms specialize in extracting biologically-relevant information from different types of HTS data. For example, peak-calling algorithms take mapped reads and attempt to identify regions of high enrichment (for review and some comparisons, see [3,6,7]). Some algorithms attempt to solve this problem generally, whereas others specialize in identifying punctate transcriptionfactor binding sites [8,9] or, conversely, broader regional enrichment, as is often seen in histone modification patterns [7,10]. Similarly, a raft of algorithms specializes in estimating gene expression, including the expression of alternative spliceoforms (e.g., [11][12][13][14]). While such approaches are clearly valuable, few deal directly with the problem of estimating the genomewide signal f.
Yet, there are many reasons to be interested in such a direct reconstruction. Perhaps the most straightforward is that reconstructing f is useful for visualization in genome browser tracks. Visualization of the signal allows biologists to sanity-check their data, compare different signals at an intuitive level, identify regions of interest, generate hypotheses, and so on [15]. Reconstruction also allows us to manipulate and combine different signals, for example by "subtracting" a background noise/control signal from a treatment signal. Indeed, there is some evidence from the peak-calling literature that true binding and background processes can be separated, leading to enhanced signal fidelity [10,16]. We contend that such issues have not been explored in the literature nearly as thoroughly as they should have been, in part because of a lack of focus on the more elementary problem of reconstructing genome-wide signals themselves.
The question then becomes, how can we best reconstruct the genome-wide signals measured by HTS experiments? One simple approach is a read "pileup" map. The details of computing a pile-up depend on whether DNA fragments are sequenced entirely or only partially and, in the latter case, also on whether they are sequenced partially from just one end (resulting in single-end reads) or from both ends (resulting in pairedend reads). In the case of a single-end dataset, which is probably the most common type of HTS dataset at present, the sequenced reads are mapped back to the genome to obtain their locations ( Figure 1A). Then the positive-and negative-strand reads are either extended to the mean fragment length ( Figure 1B) or shifted towards each other by half the mean fragment length. In the former case, the signal profile is built as an aggregation of the intervals representing the fragments ( Figure 1C) [16,17]. In the latter case, the simplest way of building a profile is by using a moving histogram. This involves sliding a window of fixed width across the genome and counting the number of reads falling within the window as the window moves forward. Although such histograms have been implemented in various versions [8,[18][19][20], in general, histograms are problematic as estimators because they are not smooth and the resulting estimates are strongly affected by the choice of histogram bin width.
An alternative and more accurate estimator is the kernel density estimator (KDE), where a kernel (e.g., Gaussian) of a chosen bandwidth (standard deviation) is centered at each sample point (a read), and the kernels are then summed to obtain the density estimate ( Figure 1D) [21,22]. Intuitively, high-density regions would correspond to tall peaks due to the piling up of closelyspaced kernels. These KDE-based density estimates can be thought of as denoting the probability of finding a read at a given base pair location. QuEST [23], F-Seq [15], and Qeseq [7] apply this method to identify enriched regions in HTS data. Although the density estimates obtained by these algorithms are in general smoother and more accurate than those obtained using histograms, the bandwidths of the kernels are fixed and are chosen arbitrarily (QuEST uses 30 bp, Qeseq uses 150 bp, and F-Seq uses an indirect feature-length parameter to set bandwidth to typically a few thousand bp). The fact that the quality of the density estimates are very much dependent on the choice of the kernel bandwidth necessitates a more careful and methodical approach to bandwidth selection. In theory, a single optimal bandwidth can be systematically chosen for a given dataset using one of the popular plug-in or crossvalidation approaches [24][25][26][27][28]. However, the large genome sizes and the sparsity of HTS data make it a cumbersome process to estimate bandwidth in this manner. Even if achieved, a single bandwidth for the entire genome would not usually be sufficient for identifying enriched regions with a high degree of accuracy, owing to the widely varying spatial smoothness of the read distributions. Ideally, small bandwidths work best for high-density regions and large bandwidths work best for low-density regions. If the bandwidth is fixed for the entire genome, then it has to take a compromise value between the two extremes, thus limiting the accuracy of the resulting density estimate. In addition, the estimate would tend to have a large number of spurious local maxima corresponding to individual reads in low-density regions ( Figure 1D). Due to these reasons, a fixedbandwidth KDE is not the best choice for modeling the widely-varying distributions associated with ChIP-Seq or other types of HTS data.
An effective alternative is to use an adaptive scheme that utilizes local data features to dynamically adjust the density estimate to reflect variations in the underlying true density. Adaptive-bandwidth KDE, as the name suggests, achieves this by adapting (or varying) the kernel bandwidth according to the local characteristics of the data. Two types of adaptive-bandwidth KDEs have been investigated in the literature. First is the balloon estimator [29] where, for each estimation point, a bandwidth is first chosen and the estimate at that point is then computed as an average of the identically-scaled kernels evaluated at that point. The kernels are, of course, centered at the data points. Since the bandwidth is fixed for a given estimation point, this estimator, taken pointwise, behaves like a fixed-bandwidth KDE. Although the estimator has been shown to be promising in higher dimensions, it has serious drawbacks in the univariate and bivariate settings [30,31]. Most importantly, the estimate fails to integrate to one and, in certain situations, has a performance that is worse than that of the fixed-bandwidth KDE.
The second type of adaptive-bandwidth estimator is the sample-point estimator, where a bandwidth is selected for each data point instead of the estimation point [32]. The estimatef is then an average of differently-scaled kernels centered at the data points. When the kernel function is a density,f itself is a density. This type of estimator has been generally found to be a better performer than the balloon estimator [31,33,34], and is easily adaptable for HTS data. In addition, considering the large genomic sizes that are encountered, the estimate is simple and straightforward to compute since there are only as many kernels as the number of reads in the data. The only caveat, pointed out in [31], is a phenomenon referred to as "non-locality" where the estimate at a certain point can be affected by kernels corresponding to data very far away from it. However, in practice, this would not be an issue because, for the sake of computational feasibility, the kernel tails would have to be truncated after a reasonable number of standard deviations. This truncation would typically have no serious consequence as the values involved would be very small.
In this paper, we present an adaptive-bandwidth KDE for modeling the tag distributions of HTS data. The estimator automatically adjusts to the smoothness variations by choosing an appropriate local bandwidth for every read location, thereby leading to a much better estimate of the underlying distribution compared to that obtained using a fixed-bandwidth KDE. To the best of our knowledge, adaptive-bandwidth KDEs have not been considered for HTS data before. The method is inspired from the sample-point estimator [32], but has a number of new features that have been specifically developed to make it suitable for use in HTS data analysis. We consider three possibilities for the choice of the kernel function, namely, the square, triangular, and Gaussian distributions, and compare and evaluate their performance using a number of public datasets. For more detailed discussions on adaptive KDEs in general, the reader is referred to [29][30][31][33][34][35] and references therein.
Datasets
We compare different density estimation approaches on a suite of ten ENCODE single-end ChIP-Seq datasets available through the Gene Expression Omnibus. We downloaded the data in the form of BAM files, in which reads have already been mapped to positions in the human genome. We chose five datasets describing pulldowns for histone 3 with the following modifications: H3K27ac (GSM733718), H3K27me3 (GSM733748), H3K36me3 (GSM733725), H3K4me1 (GSM733782), and H3K4me2 (GSM733670). The other five datasets describe binding of the following transcription factors: BRCA1 (GSM935377), CTCF (GSM733672), GTF2F1 (GSM935581), RAD21 (GSM803466), and REST (GSM803365). For the sake of computational convenience, we restricted our attention to estimating the genomic signal on chromosome 1. In pilot studies we conducted, there were no significant differences in conclusions based on density estimation over the whole genome versus density estimation on just chromosome 1. By focusing on chromosome 1, our computations proceeded much faster. Hence, we first isolated the reads from chromosome 1, removed any duplicate reads, and then shifted positive and negative strand reads towards each other by one half the fragment length, which was estimated using the MaSC approach [5]. We took the starting positions of the resulting reads as data "points" for the purpose of density estimation, and sorted them in ascending order. Each dataset was thus reduced to a sorted list of positions X = (x 1 , x 2 , . . ., x n ).
Fixed and adaptive bandwidth kernel density estimators
From a density-estimation perspective, the data X is viewed as being sampled from some unknown distribution f(x) on the genome. The idea is to estimate f using the sample data. We do this with a kernel density estimator, which is of the form Here, x is a query point at which we want to evaluate our estimate of f, K is a kernel function, x i is a sample data point, and h i is the bandwidth associated with x i . The kernel function K, for example, might be Gaussian in shape, with mean zero and standard deviation h i . The translated kernel, In a fixed-bandwidth kernel density estimate, h i is equal to a constant value h, which may be chosen a priori or dependent somehow on the data. Intuitively, the larger h is, the more aggressively the data is smoothed, because the kernel function becomes broader for larger h. Below, we also experiment with blending fixed-bandwidth estimators with a uniform density. This creates a density of the formf is the kernel density estimate of Eqn. 1, and u(x) is a uniform density over the genome.
[In fact, because we are concerned with probabilities over integer base pair positions, we should be discussing probability distributions rather than probability densities. However, in keeping with the traditional terminology of these estimators, we will employ the term 'density' throughout.] In an adaptive-bandwidth kernel density estimator, each h i is allowed to be different. We employ a variant of the k-nearest neighbor rule to assign the bandwidth h i . In the statistics literature, there are various schemes for assigning bandwidths. Perhaps the simplest and the most practical rule is to assign to point x i a bandwidth h i equal to the absolute distance from x i to its k th nearest neighbor [31]. We will call this the KNN1 rule. Intuitively, in regions of sparse data, the k th nearest neighbor will be far away, and so large bandwidths will be assigned, leading to aggressive smoothing of the signal. In dense regions, on the other hand, the k th nearest neighbor will be much closer, leading to small bandwidths, and thereby an accurate reconstruction of the signal. The choice of k allows us to indirectly control (to a certain degree) the bandwidth assigned to each pointlarge k values generally lead to large bandwidths, although the exact bandwidth assigned to each point depends on its proximity to its neighbors.
It turns out that the KNN1 rule has a minor problem which can be awkward in practice. Consider the situation where there are two regions of dense data with a sparse region in between. It may so happen that the bandwidths of all points (at least for some choices of k) may be set by points within the same dense region. Consequently, no points, including those at the inside edges of the dense regions, would be assigned a large bandwidth, wide enough to cover the span of the sparse region. Therefore, the points in the sparse region may each end up with a probability of zero. If we then evaluate a new set of data points on the density estimate, and a single point from this set happens to fall in the aforementioned sparse region, then the zero probability assigned to this point would result in the joint probability of the new set of points to be zero-all because of that single point in the sparse region. This "zero problem" is quite common with ChIP-seq datasets, where there are large numbers of very sparse regions.
To circumvent this problem, we propose a variant of KNN1 for assigning bandwidths, which we call KNN2. According to this rule, a point x i is assigned the same bandwidth as in the KNN1 rule unless all k of its nearest neighbors are on the same side (left or right). In that case, we instead take the bandwidth to be the distance to the single nearest point in the opposite direction. This rule ensures that the density estimate is nonzero everywhere (except possibly at the extreme ends of the range), thereby avoiding the zero problem.
Kernel functions
We explore three possible shapes of the kernel function: Gaussian, square (also known as the Parzen window), and triangle (also known as the hat function). As a matter of computational convenience, we truncate the Gaussian distribution at ± 5 standard deviations. We interpret the bandwidth parameter h for each shape of the kernel function in such a way that, when viewed as a distribution in its own right, the standard deviation of that distribution is approximately equal to the bandwidth parameter. This ensures the greatest comparability of results from different kernel functions in experiments where we vary bandwidths or employ adaptive bandwidths. We also take care that each kernel function sums to one. As such, the three kernel functions we consider are Here, c g (h), c sq (h), and c tr (h) are normalizations that ensure, as a function of bandwidth, that each kernel sums to one.
Effects of different density estimation schemes in genomic signal reconstruction
To help visualize the effects of different density estimation approaches on real data, we computed fixed-and adaptive-bandwidth Gaussian density estimates based on the Rad21 data in a window of chromosome 1. For fixed-bandwidth estimation, we considered two choices: h = 16 (close to the h = 15 default used in QuEST), and h = 362, which we show in the next subsection to be the optimal value according to the probability of heldout tuning data (at least when optimized over integer powers of √ 2 ). For adaptive bandwidth, we also considered two choices: k = 7, which is optimal according to a held-out tuning dataset, and k = 14, chosen to increase smoothing.
The results are shown in Figure 2A. The fixed bandwidth estimate with h = 16 includes many fluctuations across the window. Indeed, in data-sparse regions of the window, each data point (shown by a black mark) produces its own small bump in the curve, just as in our idealized example of Figure 1. Between these bumps, the density estimate is zero-although it is probably reasonable to expect that, if the experiments were repeated, reads may appear at other locations within the region of generally low signal. Still, the density estimate is highest where the most data can be found, and these strong peaks in the curve are readily picked out by the eye. With the larger bandwidth of h = 362, most fluctuations in the curve are smoothed away, leaving only broad swells where the data is most dense. This, correctly, eliminates the visual distraction of small fluctuations, although it also de-emphasizes the more dense regions and possible structure within them (such as possible multiple peaks). The adaptive bandwidth estimates eliminate small fluctuations for both choices of k, while still strongly emphasizing data-dense regions. The difference between the estimates corresponding to k = 7 and k = 14 has more to do with the fine structure of dense regions-questions such as "is an enriched region a single peak, or two or three separate peaks?" In the next section, we demonstrate that k (or h) can be optimized using held-out data in a tuning set. However, such questions of fine structure may also be studied by considering additional information, such as the locations of binding motifs for the factor, or signals in other ChIP-Seq datasets.
To quantitatively demonstrate the qualitative effects described above, we identified all strict local maxima for chromosome 1 in the fixed bandwith h = 362 and the adaptive bandwidth k = 7 curves. The curve f has a strict local maximum at position . While some local maxima correspond to regions of high read densities that are biologically significant, such as a transcription-factor binding site, other local maxima correspond to peaks of stand-alone kernel functions corresponding to individual reads that are likely to have no significance. Figure 2B shows histograms of the heights of these maxima. From the histograms, we see that the adaptive bandwidth estimator produces a much wider range of peak heights resulting from the fact that it strongly emphasizes data-dense regions. It also produces a smaller number of local maxima (NLM)-a trend that holds for most, though not all, of the datasets we have considered here (see Table 1). Unsurprisingly, we note a generally inverse relationship between the bandwidth h or the number of nearest neighbors k and the number of local maxima in the resulting density estimate. Nevertheless, all density estimates have tens of thousands of local maxima, considering that these results correspond only to chromosome 1. Therefore, when computed for the whole genome, the numbers can be expected to be much greater than the expected number of bona fide binding sites for a transcription factor.
To the extent that some of the local maxima correspond to "noise" in the density estimate, we can say that larger kernel bandwidths and/or adaptively chosen bandwidths (as opposed to fixed bandwidths) generally produce less noisy density estimates. Specifically, among the three kernel functions considered, the square kernel tends to yield the most noisy signal, understandably due to its abrupt transitions (a rising edge and a falling edge for every kernel centered at a read). In comparison, the triangle kernel is smoother (or less noisy) due to its piecewise linearity. The Gaussian kernel, on the other hand, yields the smoothest (or the least noisy) density Figure 2 Effects of different signal-reconstruction approaches. (A) From top to bottom: fixed-bandwidth Gaussian estimators for a portion of the Rad21 data using h = 16 (close to the h = 15 that is default in the QuEST software) and h = 362 (which appears optimal based on evaluation of held-out tuning data), adaptive-bandwidth Gaussian estimators using k = 7 (optimal based on held-out tuning data) and k = 14 (double the optimal choice, and intended to obtain more aggressive smoothing). In each plot, the curve shows the reconstructed density. The short black vertical marks indicate the read positions. (B) Histograms of heights of local maxima for fixed-bandwidth Gaussian (h = 362) and adaptive-bandwidth Gaussian (k = 7). Note that the vertical axes are in log scale. signal, due, of course, to its well-known smoothing and denoising properties [36].
Adaptive-bandwidth KDE outperforms fixed-bandwidth KDE on held-out data
To more formally assess the accuracy of different density estimation strategies, we randomly divided each dataset into three parts: 50% for training (i.e., creation of the density estimate), 25% for tuning (setting parameters such as bandwidth h or number of neighbors k), and 25% for testing. We first focus on the results for Rad21, which are largely representative of the other datasets, before presenting a comparison across all ten datasets. Figure 3A shows the results of several fixed-bandwidth density estimators on the Rad21 dataset: the standard fixed-bandwidth Gaussian kernel estimate (ε = 0), and that same estimate blended with a 10%, 1%, and 0.1% uniform density (ε = 0.1, 0.01, and 0.001, respectively). The vertical axis shows the mean log probability of the tuning data under the density estimate obtained using the training data. The horizontal axis shows the effect of varying the bandwidth. [The mean log probability of the tuning data is equivalent to the logarithm of the geometric mean of the tuning data point probabilities. We use the logarithm here for greater visibility of the plots. We employ the mean across tuning points so that different datasets with different total numbers of points can be compared directly.] For the ε = 0 case, it is only at the largest tested value of the bandwidth parameter, h = 2 15 = 32768, that the tuning data has a nonzero probability. For smaller bandwidths, some tuning data points are left uncovered by any kernel in the training density. Such points get assigned a zero probability individually and, therefore, the entire tuning set is assigned a zero probability. For a typical, point-binding transcription factor, the peaks in the density may be a few hundred base pairs wide, and therefore smoothing with a kernel bandwidth in the tens of thousands is not ideal. In such situations, then, the vast regions of low signal levels demand a bandwidth inappropriate to the more interesting parts of the signal, and therefore choosing a single bandwidth becomes difficult.
For the ε >0 cases, the uniform density component solves the "zero problem" of tuning points being left uncovered (similar to our previous use of a uniform mixture component in analyzing multi-modal flow cytometry data [37]). The tuning data thus has nonzero probability for all choices of bandwidth, and by varying the bandwidth we can choose an appropriate one, as shown in Figure 3A. For this dataset, an appropriate bandwidth appears to be in the range of 500 to 1000 base pairs. We note that the best choice of bandwidth has some dependence on the ε parameter. Moreover, different choices of ε lead to mildly differing probabilities of the tuning data if bandwidth is optimized. For all choices of ε, however, we clearly see that too small a bandwidth leads to undersmoothing of the data, as seen by very poor probability of the tuning data. When bandwidth is too large, oversmoothing results, and the probability of the tuning data also suffers, although this loss is not as severe as that with undersmoothing. For other datasets, we often found that the tuning curves were even flatter for high bandwidths than the Rad21 tuning curve, rendering these datasets relatively resistant to oversmoothing. For the remainder of our analyses, we focus on the ε = 0.1 choice. Although the tuning data were slightly less probable with this choice than with smaller values of ε, this choice favored smaller bandwidths h, which is preferable for emphasizing regions of true signal density. Figure 3B shows the results of adaptive-bandwidth density estimators with Gaussian, triangle, and square kernel functions for varying values of the nearest-neighbor parameter k. These results are again for the Rad21 dataset, and show the mean log probability of the tuning data under the density estimate obtained from the training data. We have used the KNN2 rule for assigning bandwidths. For this dataset, we find that an optimal value of k = 7 can be chosen based on the tuningset probability. Values smaller than this optimal value result in undersmoothing, and larger values result in oversmoothing. However, in absolute terms, the tuning-set probability actually changes very little as a function of k. The probability using the best value of k (7) is only about 10% higher than under the worst value of k tested. By way of comparison, better or worse values of the bandwidth parameter h for the fixed-bandwidth estimators in Figure 3A resulted in orders-of-magnitude differences in the tuning-set probability. We also observe that the probability of the tuning data obtained under the adaptive-bandwidth scheme is higher than that obtained under the fixed-bandwidth scheme. Intuitively, the adaptive-bandwith scheme allows, by design, coverage of sparse data regions without sacrificing accuracy in high-density regions. The Gaussian and triangle kernels performed very similarly, with the Gaussian being slightly better at all values of k. The square kernel fared slightly worse, although the difference is small compared to even the small loss that may result from a poor choice of k, let alone the difference observed for fixed-bandwidth density estimators. Figure 3C compares the results of fixed and adaptivebandwidth approaches on the full suite of the 10 datasets. We compare fixed-bandwidth Gaussian kernel estimation with ε = 0.1 and h = 16 (close to the h = s = 15 choice that is the default in the QuEST software [23]), fixed-bandwidth Gaussian kernel estimation with ε = 0.1 and bandwidth h optimized on the tuning set, and adaptive-bandwidth estimation with Gaussian, triangle, and square kernel functions with KNN2 bandwidth selection and optimal k (chosen to maximize tuning-set probability). We report geometric mean probability of the test data points in the bar charts, while Table shows the optimized bandwidths h or nearest neighbor parameters k, depending on the method. The results are remarkably consistent across the 10 datasets. Adaptive-bandwidth estimation with Gaussian kernels is uniformly the best performer, followed closely by the triangle and the square kernels. Thus, by the measure of mean log probability of test data, the Gaussian kernel is consistently best at smoothing (or denoising) the data. Among the two fixed-bandwidth cases, the approach of optimizing bandwidth on a tuning set always leads to improved test-set performance, emphasizing the importance of using a tuning set to optimize algorithm parameters. For many datasets, the fixed (but optimized) bandwidth Gaussian estimator is only about 10% worse than the adaptive-bandwidth schemes, although for some datasets its performance drops to about two-thirds or even onehalf of that of the adaptive-bandwidth schemes. The unoptimized fixed-bandwidth scheme is uniformly the worst, with a test-set probability on average about onetenth of that of the other schemes.
Kernel function choice influences time and space complexity
Although our analysis shows that the choice of the kernel function-Gaussian, square, or triangle-has little influence on tuning or test-set probabilities, the choice does impact the computational resources needed to compute the densities. The final columns of each half of Table show the CPU times, measured in seconds on a SunFire x2250 cluster computing node, for evaluating the full density estimate across chromosome 1 for the three different kernel functions. The Gaussian estimate is always the most expensive to compute, and there are two main reasons for this. First, it has the widest support of all the kernels (10 standard deviations in diameter), and it involves evaluation of the exponential function, which is a relatively time-consuming operation. The triangle estimate, with a smaller support and a simple function to evaluate, was typically about 2.5 times faster to compute. The square estimate, with the narrowest support and a constant height (though dependent on bandwidth), was roughly 10 times as fast to evaluate as the Gaussian estimate.
In slightly more formal terms, if we have D training data points, S base pairs of average kernel support, and a genome of size G base pairs, then we expect O(DS + G) computations to evaluate a kernel density estimate. The G term is for initializing the density to zero everywhere, and the DS term is for evaluating each kernel over the base pairs to which it contributes probability mass. Empirically, the linear influences of D and S are well born out when we plot, for instance, CPU time versus dataset size or mean bandwidth size. Different kernel functions affect mainly the slope of the relationship of CPU time to D or S-i.e., they determine the constant inside the big O.
This analysis, however, assumes explicit representation of the density value at every base pair. The square kernel density estimate is piecewise constant, comprising O (D) pieces-each data point contributes one rise and one fall to the functionf (x) . Thus, the density can be represented in terms of the start, end, and height of each piece, and can be computed in O(D) time and requires only O(D) space (as opposed to O(G) space for a general density over the whole genome). Moreover, such a piecewise constant density is readily represented as a BED file, making it convenient for browser viewing. For the triangle kernel function, the density estimate is piecewise linear with O(D) pieces; this too can be handled in O(D) time and space, although we know of no browser file format allowing piecewise linear functions. Given that the triangle kernel has similar computational requirements to the square kernel, and yet an accuracy comparable to the Gaussian kernel, defining a browser file format that allows for piecewise linear functions could be advantageous. Thus, although accuracy points to the Gaussian kernel as the best choice for density estimation, square and triangle kernels have points in their favor regarding computation and browsing convenience.
Conclusions and future work
We have investigated adaptive-bandwidth kernel density estimators for the reconstruction and visualization of genomic signals underlying ChIP-Seq data, with several results. First, we found that adaptive-bandwidth schemes generally outperform fixed-bandwidth schemes in terms of accuracy. In our opinion, adaptive-bandwidth schemes also hold visualization advantages, although we admit this is somewhat subjective. With optimized smoothing parameters, fixed-bandwidth estimators held a slight advantage in terms of computation time, although all estimates can be computed quickly enough that computation time does not seem to be a major concern. Among different kernel functions, we found that all yielded comparable accuracy with some having potential advantages in terms of compact representation and genome browser compatibility. It remains to be investigated whether the increased accuracy of adaptivebandwidth estimates will translate into improved abilities of extracting biological information-for instance, in terms of transcription factor binding sites or peaks, assessment of regions of enrichment for histone marks or, more generally, comparison of different ChIP-Seq signals. Relatedly, our methods may be useful in more accurately decomposing genomic signals into constituent parts, or correcting for different sources of bias. One way to do this would be to create adaptive-bandwidth KDE smooths of different possible sources of bias (e.g., local GC content, mappability, etc.), and then use regression, deconvolution, or principal components style anlayses to isolate the true signal of interest. Alternatively, one might generalize the adaptive-bandwidth KDE approach to that of conditional density estimation to account for possible biases at the time of signal reconstruction. We leave these questions as topics for future study.
Software implementing adaptive-bandwidth density estimation for ChIP-Seq data is available at http://www. perkinslab.ca/Software.html. | 7,809.2 | 2013-12-20T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Phospholipid scramblase 1 binds to the promoter region of the inositol 1,4,5-triphosphate receptor type 1 gene to enhance its expression.
Phospholipid scramblase 1 (PLSCR1) is a multiply palmitoylated, endofacial membrane protein originally identified based on its capacity to promote accelerated transbilayer phospholipid movement in response to Ca(2+). Recent evidence suggests that this protein also participates in cell response to various growth factors and cytokines, influencing myeloid differentiation, tumor growth, and the antiviral activity of interferon. Whereas plasma membrane PLSCR1 was shown to be required for normal recruitment and activation of Src kinase by stimulated cell surface growth factor receptors, PLSCR1 was also found to traffic into the nucleus and to tightly bind to genomic DNA, suggesting a possible additional nuclear function. We now report evidence that PLSCR1 directly binds to the 5'-promoter region of the inositol 1,4,5-triphosphate receptor type 1 gene (IP3R1) to enhance expression of the receptor. Probing a CpG island genomic library with PLSCR1 as bait identified four clones with avidity for PLSCR1, including a 191-bp fragment of the IP3R1 promoter. Using electrophoretic mobility shift and transcription reporter assays, the PLSCR1-binding site in IP3R1 was mapped to residues (-101)GTAACCATGTGGA(-89), and the segment spanning Met(86)-Glu(118) in PLSCR1 was identified to mediate its transcriptional activity. The significance of this interaction between PLSCR1 and IP3R1 in situ was confirmed by comparing levels of IP3R1 mRNA and protein in matched cells that either expressed or were deficient in PLSCR1. These data suggest that in addition to its role at the plasma membrane, effects of PLSCR1 on cell proliferative and maturational responses may also relate to alterations in expression of cellular IP3 receptors.
The phospholipid scramblase (PLSCR) 2 gene family consists of an apparent tetrad of genes with identifiable orthologues conserved from Caenorhabditis elegans to man (1). The first identified member of this family (PLSCR1) was isolated based upon its capacity to promote Ca 2ϩdependent accelerated transbilayer membrane phospholipid (PL) movement, mimicking the remodeling of plasma membrane PL that is observed under conditions of injury and apoptosis (2,3). PLSCR1 is a multiply palmitoylated, Ca 2ϩ -binding, Pro-and Cys-rich, endofacial plasma membrane protein that was shown to distribute into lipid raft domains and to be a substrate of the Abl and Src tyrosine kinases (4 -6). The exact biologic function of this protein remains controversial; although PLSCR1 mediates Ca 2ϩ -dependent transbilayer movement of PL in proteoliposomes (2,3) and was reported to increase cell surface expression of phosphatidylserine through remodeling of plasma membrane PL in mammalian cells exposed to Ca 2ϩ ionophore and other inducers of injury or apoptosis (7)(8)(9), it has also been observed that induced elevation of PLSCR1 expression can occur without a detectable increase in PL movement between membrane leaflets, and gene deletion of PLSCR1 did not impair cell capacity to undergo this remodeling of cell surface PL (10 -12). Whereas its role in plasma membrane PL movement remains unresolved, there is increasing evidence that PLSCR1 serves to regulate cellular maturation and proliferative responses; mutation in PLSCR1 was shown to influence the leukemic potential of the murine monocyte cell line MM, whereas gene deletion of PLSCR1 or inhibition of PLSCR1 protein expression using antisense or small interfering RNA was found to impair myeloid cell maturation or proliferation in response to select hematopoietic growth factors (12)(13)(14)(15)(16). PLSCR1, a potently stimulated interferon (IFN)-responsive gene, was also found to be required for normal expression of the antiviral activity of IFN, whereas PLSCR1 gene deletion and small interfering RNA suppression of PLSCR1 expression were found to inhibit the expression of a select subset of IFN-stimulated genes, including those with known antiviral activity (17,18). The mechanism by which PLSCR1 influences cell response to these various growth factors and cytokines remains unresolved. In the case of epidermal growth factor receptor, PLSCR1 was shown to be recruited into the activated plasma membrane epidermal growth factor receptor complex, concomitant with the phosphorylation of PLSCR1 by activated Src and its binding to the phosphotyrosine domain of the adapter Shc (4,5). Gene deletion of PLSCR1, or mutation of its Tyr residues that are phosphorylated by Src, prevented this interaction and was accompanied by attenuated growth factor-induced activation of Src kinase, implying a role for PLSCR1 in Src recruitment and feedback activation of the kinase by activated epidermal growth factor receptor. Similar interactions were also observed with related growth factor receptors.
Despite the considerable evidence that PLSCR1 is an endofacial cell surface protein with apparent biologic function at the plasma membrane, recent data suggest an additional role for this protein in the nucleus. Nuclear trafficking of PLSCR1 has been observed only in circumstances where its cellular expression was induced by IFN and other cytokines or growth factors that transcriptionally activate this gene, implying nuclear import of de novo synthesized PLSCR1 rather than a redistribution of the membrane-bound pool (19). Mutation of PLSCR1 at its sites of Cys palmitoylation or inhibition of palmitoylation by cell treatment with bromo-palmitate also resulted in nuclear trafficking of the protein and abrogated its distribution to the cell surface. Nuclear import of PLSCR1 was shown to be mediated by the importin nucleopore transport chaperones and to be dependent on specific binding of PLSCR1 to importin-␣ via an unconventional nuclear localization signal identified within the PLSCR1 polypeptide (20). PLSCR1 was also found to bind directly to genomic DNA, and after nuclear uptake, its extraction from the nucleus required salt elution or incubation with DNase, implying interaction with chromatin.
To gain insight into the possible function of nuclear PLSCR1, we utilized genomic binding site cloning (GBSC) from a human CpG island-enriched library to identify potential gene regulatory regions to which PLSCR1 might bind. Our data suggest that PLSCR1 specifically binds to a segment of the 5Ј-promoter of IP3R1, enhances transcription of this gene, and is required for normal expression of cell IP3R1 protein.
Preparation of CpG Island Genomic Library DNA-Human CpG island genomic library in E. coli (UK HGMP Resource Center) was amplified in 0.3% low melting agarose-LB medium, and the DNA was purified with a Qiagen column. Genomic DNA inserts (average size, 760 bp) were amplified by PCR for 30 cycles using the primers 5Ј-CGG CCG CCT GCA GGT CGA CCT TAA-3Ј and 5Ј-AAC GCG TTG GGA GCT CTC CCT TAA-3Ј according to the manufacturer's instructions. Amplified genomic DNA was purified by phenol extraction.
Genomic DNA-binding Site Cloning-PCR-amplified CpG library genomic DNA was precleaned by incubation of 25 g of DNA with 250 l of GST immobilized on glutathione-Sepharose 4B (GST-Sepharose) and 400 l of binding buffer (40 mM Tris, pH 7.5, 100 mM KCl, 10% glycerol, 5 mM EDTA, 1 mM dithiothreitol, 0.1 mg/ml bovine serum albumin, 0.1% Triton X-100, and protease inhibitor mixture) for 1 h at 4°C. The precleaned genomic DNA was incubated with 21 l (10 g of protein) of either PLSCR1-GST immobilized on glutathione-Sepharose 4B (PLSCR1-GST-Sepharose) or control GST-Sepharose for 1 h at 4°C with agitation. The beads were washed five times with binding buffer and suspended in 300 l of PCR buffer (BD Clontech). Bound genomic DNA was amplified by PCR for 25 cycles (denaturing for 1 min at 94°C, annealing for 0.5 min, and extension for 1.5 min at 68°C) with Advantage DNA polymerase and the primers noted above. PCR products were purified by phenol extraction and used in the next cycle of GBSC. After seven cycles of GBSC, enriched genomic DNA was separated on agarose gel and cloned into Topo-pCR2.1 T-A cloning vector (Invitrogen), and the inserts were sequenced.
Mapping Gene Transcriptional Activation Domain in PLSCR1-Fulllength PLSCR1 cDNA (Met 1 -Trp 318 ) or its serial deletions (Met 86 -Trp 318 , Val 119 -Trp 318 , Arg 177 -Trp 318 , and Pro 212 -Trp 318 ) were cloned into pAS2-1 vector (BD Clontech) to encode fusion proteins with Gal4 DNA-binding domain. Yeast was transformed with either full-length PLSCR1-pAS2-1 or the PLSCR1 deletions or pAS2-1 vector control, and cultured on agar plates. Yeast colonies were transferred to filter paper, and expression of -galactosidase under the control of Gal4responsive elements was detected with X-GAL as substrate. Colonies were imaged using Kodak Image Station 440 CF (PerkinElmer Life Sciences).
Cell Culture and Treatment-Sorted MEF were cultured in Dulbecco's modified Eagle's medium, 10% fetal bovine serum, and KEC in Dulbecco's modified Eagle's medium and F-12 (1:1), 10% fetal bovine serum. In some cases, the cells were treated with or without 2 M alltrans-retinoic acid (ATRA) (Sigma) for 24 h and harvested to measure gene expression by Northern and Western blots as described below.
Transcriptional Activity Assay-PLSCR1 or PLSCR1-C3 A ( 184 CCCPCC 189 3 184 AAAPAA 189 ) mutant were cloned into pcDNA3, and GBSC-derived IP3R1 genomic DNA was cloned into pGL3 (Promega). Immortalized PLSCR(1&3)Ϫ/Ϫ MEF were co-transfected with 1.4 g of pcDNA3-PLSCR1 or pcDNA3-PLSCR1-C3 A ( 184 CCCPCC 189 3 184 AAAPAA 189 ) mutant, 0.1 g of pGL3-GBSC-derived genomic DNA, and 0.1 g of PRL-TK (Promega) for calibration of transfection efficiency, using the Lipofectamine plus reagent (Invitrogen). Forty-eight hours after transfection, luciferase activity was assayed with the dual luciferase system according to the manufacturer's instructions (Promega) by automated injection using a MicroLumatPlus microplate luminometer (EG&G Berthold). This was followed by measurement of the Renilla luciferase activity (Promega). Relative luciferase activity is presented as a ratio of firefly to Renilla luciferase activity. Alternatively, transfection efficiency was determined by co-transfection with pEGFP-C3. Forty-eight hours posttransfection luciferase activity was measured using Bright glow reagent (Promega) in a MicroLumatPlus microplate luminometer, and GFP fluorescence was detected by flow cytometry on a FACSCalibur (Becton Dickinson). The results are shown as ratios of luciferase to GFP values.
Northern Blot-Total RNA in MEF was purified using the RNeasy kit (Qiagen). Eight g of total RNA in 30 l RNA sample buffer were loaded onto a 1% formaldehyde denaturing agarose gel. After electrophoresis the RNA was transferred to a positively charged nylon membrane (Ambion) and hybridized with either 32 P-labeled IP3R1 cDNA probe (1074 bp, from 2021 to 3094, GenBank TM accession number NM_010585.2) or 32 P-labeled glyceraldehyde-3-phosphate dehydrogenase cDNA probe (982 bp, from 79 to 1060, GenBank TM accession number NM_001001978.1), respectively. The membrane was washed and exposed to x-ray film. The intensity of each band was then quantified by densitometry using a Kodak Image Station 440CF (PerkinElmer Life Sciences) with background correction. Variation in mRNA loading was corrected by normalization to the band intensity of glyceraldehyde-3-phosphate dehydrogenase.
Western Blot-Mouse primary bone marrow cells, MEF, and KEC were incubated for 1 h at 4°C in cell lysis buffer (1% Triton X-100, 10 mM Tris, pH 7.4, 150 mM NaCl, 5 mM EDTA, 1 mM dithiothreitol, and protease inhibitor mixture) and centrifuged at 20,800 ϫ g for 10 min at 4°C. Brain tissue was isolated from either WT or PLSCR1Ϫ/Ϫ mice and homogenized in 10 volumes of cell lysis buffer, kept at 4°C for 1 h, and centrifuged at 20,800 ϫ g for 10 min at 4°C. The proteins were resolved on a 4 -20% gradient Tris-glycine gel and transferred to a nitrocellulose membrane. After blocking with 5% Blotto, immunoblotting was performed using rabbit anti-IP3R1 (Upstate Biotechnologies, Inc.), goat anti-IP3R2 (Santa Cruz Biotechnology), mouse anti-IP3R3 (BD Transduction Laboratories), mouse anti-PLSCR1 monoclonal antibody 4D2 (human PLSCR1) or 1A8 (mouse PLSCR1) (11,12), and mouse anti-actin (Sigma) antibodies, respectively. Following development with the appropriate species-specific horseradish peroxidase-conjugated antiglobulin antibodies and SuperSignal chemiluminescence substrate (Pierce), the bands were visualized on x-ray film. The intensity of each band was then quantified by densitometry using a Kodak Image Station 440CF (PerkinElmer Life Sciences) with background correction. Variation in protein loading was corrected by normalization to the band intensity of -actin.
RESULTS
Identification of PLSCR1-binding DNA-Because PLSCR1 was shown to be actively imported into the nucleus and to avidly bind genomic DNA, we sought to identify candidate DNA sequences that bound PLSCR1 within a CpG island-enriched human genomic library. Seven rounds of panning and amplification against this library using GST-PLSCR1 as bait yielded four distinct clones that bound to PLSCR1 ( Fig. 1): three with inserts that corresponded to sequence derived from uncharacterized genes and one with a sequence from the 5Ј regulatory region of IP3R1 (residues Ϫ112 to ϩ79, of Ensembl sequence ENST00000302640 at www.ensembl.org). The specificity of the clones that were isolated on GST-PLSCR1 for binding to PLSCR1 was confirmed by performing EMSA against MBP-PLSCR1. As shown in Fig. 1, a shift in mobility was observed for all four clones in the presence of MBP-PLSCR1 but not in the presence of MBP alone.
Identification of a Gene Transcriptional Activation Domain in PLSCR1-Next we used a yeast Gal4-LacZ reporter system to examine whether PLSCR1 contains a transcriptional activation domain. When PLSCR1 was expressed as a fusion protein with the Gal4 DNA-binding domain, transcriptional activity as detected by -galactosidase gene expression was observed, in absence of a "bait" library containing the requisite transcription activation domain for the LacZ reporter. As expected, expression of the Gal4 DNA-binding domain alone did not activate transcription (Fig. 2). This suggested that PLSCR1 directly activated the minimal promoter of LacZ or interacted with an intrinsic factor in the reporter cell with such an activation domain. Expression of a series of truncated mutants of PLSCR1 revealed that PLSCR1-dependent transcriptional activity was lost upon truncation between residues Met 86 and Glu 118 , locating a putative transcriptional activation domain to this 33-amino acid segment within PLSCR1.
PLSCR1 Promotes IP3R1 Gene Transcription-Having demonstrated specific binding of PLSCR1 to DNA sequences identified by GBSC and having located a putative transcription activation domain within PLSCR1, we next investigated the effect of PLSCR1 on gene transcription in a mammalian cell line. For these experiments, the DNA inserts of the four GBSC-derived clones were cloned into the luciferase reporter vector pGL3. Because the PLSCR1 protein sequence is highly homologous to PLSCR3 and shows similar tissue distribution, a cell line deficient in both PLSCR1 and PLSCR3 (PLSCR(1&3)Ϫ/Ϫ MEF) was selected for these experiments to analyze the transcriptional activity of PLSCR1 without potential interference by PLSCR3. As shown in Fig. 3A, of the four GBSC-derived clones that bound PLSCR1 (from Fig. 1), only GBSC clone 3 (representing nucleotides Ϫ112 to ϩ79 of the IP3R1 promoter region) was found to activate luciferase gene expression of the pGL3 reporter when co-transfected with PLSCR1. This activation of the IP3R1 (Ϫ112 to ϩ79)-pGL3 reporter construct by PLSCR1 was observed for both the wild type protein and for the PLSCR1-C3 A mutant, which localizes preferentially to the nucleus (Fig. 3B).
To identify the sequence in the IP3R1 promoter region required for the observed PLSCR1-mediated gene transcription, pGL3 luciferase reporter constructs containing the full-length or serial deletions of the IP3R1 promoter region (Ϫ112 to ϩ79) were transfected into PLSCR(1&3)Ϫ/Ϫ MEF for evaluation of gene transcription activity. As shown in Fig. 4, truncation of 33 bp from the 5Ј end (construct Ϫ79 to ϩ79) resulted in the loss of Ͼ90% of transcriptional activity, suggesting that the sequence from Ϫ112 to Ϫ80 of the IP3R1 promoter region was primarily responsible for the observed PLSCR1-enhanced reporter gene transcription. However, a pGL3 luciferase reporter construct containing nucleotides Ϫ112 to Ϫ80 exhibited Ͻ10% of the transcriptional activity of the full-length construct, indicating an additional contribution by sequence contained within nucleotides Ϫ79 to ϩ79. To identify the binding site for PLSCR1 within the IP3R1 promoter region, fulllength and the partial IP3R1 promoter sequences were subjected to EMSA with PLSCR1. Only full-length (Ϫ112 to ϩ 79) and the sequence from Ϫ112 to Ϫ80 were observed to bind to PLSCR1, locating the bind-ing site to this 33-bp sequence (not shown). The binding site for PLSCR1 within this sequence was further delineated by performing EMSA with PLSCR1 of serial mutations of this 33-bp segment (Fig. 5). These experiments mapped the minimal binding site for PLSCR1 to 13 nucleotides ( Ϫ101 GTAACCATGTGGA Ϫ89 ) of the IP3R1 promoter region. This 13-bp sequence containing the PLSCR1-binding site in human IP3R1 is identically conserved in mouse IP3R1.
Finally, we examined the effect of PLSCR1 on cellular IP3R1 gene expression in situ. IP3R1 is constitutively expressed in many mamma- lian cells, and its expression has been shown to increase in cells treated with retinoic acid. In these experiments, PLSCR(1&3)Ϫ/Ϫ MEF stably transfected with wild type, mutant PLSCR1-C3 A, or vector control were subjected to Northern blotting for IP3R1 mRNA and Western blotting for IP3R1 protein. As shown in Fig. 6, when compared with the vector only controls, cells expressing the nuclear localized PLSCR1 mutant (PLSCR1-C3 A) exhibited Ͼ2-fold increase in expression of IP3R1 mRNA (Fig. 6A) and protein (Fig. 6B) under both basal and ATRA-stimulated conditions. Ectopic expression of wild type PLSCR1, which is normally palmitoylated and predominantly distributes to plasma membrane and not to nucleus (19), resulted in only small and variable changes in either basal or ATRA-stimulated expression of IP3R1. By contrast to the observed elevation in IP3R1 expression in cells expressing the nuclear trafficking PLSCR1 mutant, we detected no consistent change in the levels of expression of either IP3R2 or IP3R3 (Fig. 6B). Consistent with these data, inspection of the promoter segments of the IP3R2 and IP3R3 genes also failed to reveal nucleotide sequence corresponding to the deduced PLSCR1-binding site in IP3R1 (GTAAC-CATGTGGA; see Fig. 5).
In addition, an effect of PLSCR1 on constitutive IP3R1 expression was also observed when primary cells and tissue harvested from WT mice versus PLSCR1Ϫ/Ϫ mice were compared. As shown by Western blots of Fig. 7A, whole cell extracts of brain and bone marrow from PLSCR1Ϫ/Ϫ mice consistently exhibited lower levels of IP3R1 protein expression than corresponding cells obtained from wild type animals. Similarly, diminished IP3R1 expression was also observed in immortalized kidney epithelial cells obtained from PLSCR(1&3)Ϫ/Ϫ mice compared with identical cells from wild type animals (Fig. 7B).
As was observed for PLSCR1 (1&3)Ϫ/Ϫ MEF (Fig. 6), the expression of IP3R1 in PLSCR(1&3)Ϫ/Ϫ KEC was found to increase upon ectopic expression of PLSCR1 in the cell, and this activity depended upon the capacity of PLSCR1 to enter the nucleus; reconstitution of these cells with the palmitoylation-defective PLSCR1-C3 A mutant, which localizes to the nucleus, increased cellular IP3R1 levels under both basal and ATRA-stimulated conditions (Fig. 7C). By contrast to this nuclear-trafficking PLSCR1 mutant, cells transfected to express wild type PLSCR1, which distributes to the plasma membrane and not to the nucleus, showed no increase in IP3R1 expression relative to matched vector controls. We also observed no increase in IP3R1 expression from vector levels in PLSCR(1&3)Ϫ/Ϫ KEC transfected with the double mutant PLSCR1-C3 A&K3 A, a palmitoylation-defective mutant with additional mutation in the PLSCR1 nuclear localization signal to prevent its nuclear entry. Of note, however, whereas the PLSCR(1&3)Ϫ/Ϫ KEC used in these experiments were matched in both GFP fluorescence and in expressed mRNA (data not shown), the level of expression of the PLSCR1-C3 A&K3 A double mutant was always distinctly less than that of the WT or PLSCR1-C3 A constructs (Fig. 7C), potentially reflecting accelerated degradation of the double mutant construct in the cytosol.
DISCUSSION
These experiments identify a gene that is directly transcriptionally regulated by PLSCR1, a multiply palmitoylated plasma membrane protein that was recently discovered to traffic into the nucleus as cargo of the importin nucleopore chaperones and to bind directly to DNA. The gene target of nuclear PLSCR1, IP3R1, is known to play a key role in IP3-mediated mobilization of intracellular Ca 2ϩ stores from the endoplasmic reticulum of a variety of cells and tissues. This transcriptional activity associated with PLSCR1 toward the cellular IP3R1 gene was found to reflect 1) specific binding of PLSCR1 to the nucleotide sequence Ϫ101 GTAACCATGTGGA Ϫ89 that is conserved within the promoters of both human and mouse IP3R1 (Fig. 5); 2) an apparent transcription activation domain within the PLSCR1 peptide segment Met 86 -Glu 118 (Fig. 2); and 3) the capacity of PLSCR1 for nuclear import via a nuclear localization signal previously identified within PLSCR1 peptide segment 257 GKISKHWTGI 266 (Fig. 7C). Unresolved by these experiments is whether PLSCR1 functions in situ as a direct transcription factor to drive IP3R1 expression or as a co-activator to enhance the transcriptional activity of another transcription factor acting at the IP3R1 promoter.
The PLSCR1-binding site identified in IP3R1 (Ϫ101 to Ϫ89) is close to both a "TATA box" (Ϫ131 to Ϫ126) and to the binding site of transcription factor AP-2 (Ϫ191 to Ϫ184), which has been shown to regulate IP3R1 gene transcription (21). This suggests that once bound to the promoter, PLSCR1 might either directly promote transcription of IP3R1 through its own transcriptional activation domain (Fig. 2) or enhance the transcriptional activity of AP-2 or other transcription factor acting on the gene. Also of note, this binding site for PLSCR1 within the IP3R1 promoter ( Ϫ101 GTAACCATGTGGA Ϫ89 ; Fig. 5) itself contains an "E box" core consensus sequence, which has been identified as the target of many basic helix-loop-helix transcription factors and coactivators (22)(23)(24)(25). It remains to be determined whether nuclear PLSCR1 also binds to other genes that contain this sequence so as to influence their transcription.
IP3 receptors (types 1, 2, and 3) are intracellular Ca 2ϩ release channels that respond to the second messenger IP3 upon activation of a variety of cell surface receptors. By regulating the intracellular release of endoplasmic reticulum calcium stores, the IP3 receptors play central roles in regulating diverse cell responses to external stimuli, including storage granule secretion, excitation/contraction, gene expression, cell growth, differentiation, and maturation (26 -30). Among the three IP3 receptors, IP3R1 is widely distributed in a variety of cells and tissues, including most notably, prominent expression in Purkinje and other neuronal cells of the central nervous system, smooth muscle, cardiomyocytes, various epithelial cells, and oocytes (28,29,31). In many cells, IP3R1 expression is up-regulated in response to ATRA, 1,25-dihydroxyvitamin D 3 , and interleukin-1, and direct transcriptional regulation of the IP3R1 promoter has been demonstrated for transcription factors AP-2 and neuro-D-related factor (21,(32)(33)(34). In this context, it is of particular interest to note 1) recent evidence that cellular expression of PLSCR1 is also up-regulated by ATRA and that this induced expression of PLSCR1 in myeloid cell lines has been shown to be required for normal differentiation and maturation in response to ATRA (12)(13)(14) and 2) the substantial reduction in IP3R1 expression that was observed in cells deficient in PLSCR1 (Figs. 6 and 7). Of interest, available evidence suggests that the IP3R1 promoter lacks functional retinoic acid response elements, which implies that its activation in cells treated with ATRA is mediated by other factor(s) that are activated or transcriptionally induced by ATRA. Because the expression of PLSCR1 is induced by ATRA, and this protein exhibits transcriptional activity at the IP3R1 promoter, its participation in feedback activation of IP3R1 expression through ATRA is suggested. A similar role has been proposed for AP-2, an ATRA-induced transcription factor that binds to its site in the IP3R1 promoter to enhance transcription of the gene (21). Alternatively, the binding of PLSCR1 to its site in the IP3R1 promoter ( Ϫ101 GTAACCAT-GTGGA Ϫ89 ), which is in the vicinity of the AP-2-binding site (Ϫ191 to Ϫ184), might function to enhance AP-2-mediated transcriptional activation.
In addition to its potential role in regulating cellular content of IP3R1, it is of interest to note that the expression of PLSCR1, like IP3R1, has been shown to increase during myeloid cell differentiation from hematopoietic precursors, whereas suppression of either PLSCR1 or IP3R1 expression impairs normal differentiation and maturation of these cells (12)(13)(14)(15)(16). This raises the possibility that the observed defects in growth factor-stimulated proliferation and maturation of cultured PLSCR1Ϫ/Ϫ hematopoietic stem cells and the impaired granulopoiesis noted in PLSCR1-deficient mice are related in part to a diminished expression of cellular IP3 receptors and to consequent changes in regulated Ca 2ϩ mobilization from intracellular stores as initiated through various growth factor receptors.
Our data suggest that in addition to the various biologic activities that have been attributed to PLSCR1 at the plasma membrane, this protein also has the potential to selectively alter gene transcription through its nuclear trafficking and binding to genomic DNA. Nuclear trafficking of PLSCR1 is observed in circumstances where palmitoylation of the polypeptide is prevented or precluded and is also observed in circumstances of transcriptionally induced increases in PLSCR1 expression, which may reflect deficient palmitoylation of the protein at high rates of translation (19). Although our data implicate IP3R1 as the target gene regulated by nuclear PLSCR1, it remains to be determined whether the transcription of other genes might also be affected in circumstances of its entry into the nucleus. For example, it was recently shown that the expression of a select subgroup of IFN-stimulated genes is suppressed in cells deficient in PLSCR1 and that this defective transcriptional response to IFN appears to underlie the diminished antiviral activity of IFN in PLSCR1-deficient cells (17). As was noted, PLSCR1 is itself transcriptionally induced by IFN through an IFN-stimulated response element in exon 1, and under circumstances of its induction by IFN, prominent nuclear trafficking of the protein has been observed (11,19). | 5,801.6 | 2005-10-14T00:00:00.000 | [
"Biology"
] |
Spacetime duality between localization transitions and measurement-induced transitions
Time evolution of quantum many-body systems typically leads to a state with maximal entanglement allowed by symmetries. Two distinct routes to impede entanglement growth are inducing localization via spatial disorder, or subjecting the system to non-unitary evolution, e.g., via projective measurements. Here we employ the idea of space-time rotation of a circuit to explore the relation between systems that fall into these two classes. In particular, by space-time rotating unitary Floquet circuits that display a localization transition, we construct non-unitary circuits that display a rich variety of entanglement scaling and phase transitions. One outcome of our approach is a non-unitary circuit for free fermions in 1d that exhibits an entanglement transition from logarithmic scaling to volume-law scaling. This transition is accompanied by a 'purification transition' analogous to that seen in hybrid projective-unitary circuits. We follow a similar strategy to construct a non-unitary 2d Clifford circuit that shows a transition from area to volume-law entanglement scaling. Similarly, we space-time rotate a 1d spin chain that hosts many-body localization to obtain a non-unitary circuit that exhibits an entanglement transition. Finally, we introduce an unconventional correlator and argue that if a unitary circuit hosts a many-body localization transition, then the correlator is expected to be singular in its non-unitary counterpart as well.
I. INTRODUCTION
Generic isolated quantum systems typically thermalize via the interaction between their constituents 1-5 . One exception to this is the phenomenon of many-body localization (MBL) [6][7][8][9][10][11][12][13][14][15][16] where strong disorder causes the system to develop signatures of non-ergodicity such as sub-thermal entanglement under quantum quenches. More recently, it has been realized that new dynamical phases can emerge also in quantum systems subjected to projective measurements due to the 'quantum Zeno effect' 49 . Relatedly, one can consider evolution with more general non-unitary circuits [50][51][52][53][54] , which typically exhibit non-ergodic behavior as well. It is natural to wonder if there is any relation between these two classes of systems, namely, unitarily evolved systems that show single-particle/many-body localization, and systems where non-unitarity plays a crucial role in suppressing ergodic behavior. In this work we explore such a connection using the idea of the space-time rotation of a circuit [55][56][57][58][59][60][61][62][63] .
For a unitarily evolved system to exhibit localization, spatial disorder of course plays a central role. Evidence suggests that time-translation invariance, whether continuous or discrete, is also crucial. For example, Floquet (i.e. time-periodic) circuits with spatial disorder can exhibit MBL phenomena [64][65][66] , while unitary circuits that have randomness both in space and time tend to display ergodic behavior [67][68][69][70][71][72][73][74] . On the other hand, for the aforementioned non-unitary circuits displaying subthermal entanglement , translation invariance in the time or the space direction is not crucial. This is demonstrated by explicit construction of circuits consisting of projective measurements dispersed randomly in space-time that host a transition from an area-law entanglement regime to a volumelaw entanglement regime (see, e.g. Refs. [18][19][20]. A sub-class of such non-unitary circuits have translation invariance in the space direction but lack translation invariance in the time direction. Such circuits will be the focus of this work for reasons we discuss next.
The main idea we will explore is the 'space-time rotation' of a quantum circuit [55][56][57][58][59][60][61][62][63] with a focus on unitary cir- FIG. 1: The geometry of circuit rotation employed in this work illustrated for a 1d system. Given a unitary circuit that acts on a system of spatial size , the wavefunction evolved for time is schematically given by | | 0 = ∫ ( , ) ( ) where the fields labeled 0 , at the boundaries are also being integrated over in ( , ) while 0 , act as boundary conditions. Using the same bulk action , one may then define a rotated circuit that acts on a system with spatial extent , such that the wavefunction at time is | | 0 = ∫ ( , ) ( ) . In this rotated circuit, the fields labeled 0 , are being integrated over in ( , ) while 0 , act as boundary conditions. cuits that host a localization-delocalization transition. To set the stage, consider a general unitary circuit that acts for time on a -dimensional system of spatial size 1 × 2 × ... × . From this, one can define a 'partition function' = tr( ). Denoting the underlying degrees of freedom schematically by symbol , one may represent as a path integral in space-time, , x) is the space-time action and L ( , , 1 , 2 , ..., ) is the corresponding Lagrangian. Let us now define a new LagrangianL by interchanging and 1 : arXiv:2103.06356v3 [quant-ph] 22 Nov 2021 L ( , , 1 , 2 , ..., ) = L ( , 1 , , 2 , ..., ). For example, if L ( , , 1 , 2 ) = ( ) 2 − ( 1 ) 4 + ( 2 ) 6 + 4 , theñ L = −( ) 4 + ( 1 ) 2 − ( 2 ) 6 + 4 . Since the original circuit is local, it implies that both L andL are also local. We useL to define a new 'space-time rotated' circuit : Fig.1 for an illustration, and Sec.II below for details. By design, the circuit acts for time 1 on a system of spatial size × 2 ...× . Crucially, is not guaranteed to be unitary 56 . This point was recently employed in Ref. 63 to design a method for emulating certain non-unitary circuits and their associated measurement-induced phase transitions without requiring extensive post-selection. We note that in the context of imaginary time evolution, the idea of space-time rotation to obtain a dual quantum Hamiltonians was first employed in Ref. 75 .
In this work, we will perform the aforementioned space-time rotation on lattice models of Floquet circuits that are made out of unitaries with spatial disorder, and which display entanglement transitions due to the physics of localization. The rotated circuit will be generically non-unitary, and by construction, will possess translational invariance along a space direction, and disorder/randomness along the time direction. A motivation for our study is that the rotated and unrotated circuits have the same partition function , which is closely related to the spectral form factor 76,77 (= | | 2 ). Since the spectral form factor in a Hamiltonian/Floquet system is expected to show singular behavior across a localization transition 78,79 , one may wonder if this fact has any consequence for the rotated circuit. In the special case when the rotation results in a unitary circuit, it was shown in Ref. 56 that the (unrotated) Floquet circuit is chaotic. Here we instead start from Floquet circuits that can be argued to display a localization transition (and therefore not always chaotic), and study the non-unitary circuits that result from their rotation.
The first example we study corresponds to a Floquet circuit that displays an Anderson localization transition due to quasiperiodic disorder. Rotating this circuit results in a 1d free-fermion non-unitary circuit that exhibits a transition from a volume-law entanglement regime, ∼ ( is the spatial size), to a regime with entanglement characteristic of critical ground states: ∼ log( ). This is interesting because the known examples of non-unitary theories with free fermions have hitherto found only sub-extensive entanglement [50][51][52]54 . The fact that our non-unitary circuit is obtained from rotation of a unitary circuit plays a crucial role in its ability to support volume-law entanglement.
Next, we construct a 2d model where the unitary corresponds to a Floquet Clifford circuit and which displays a localization transition. Interestingly, space-time rotating this circuit results in a non-unitary circuit consisting only of unitaries and 'forced' projective measurements. We find that both the rotated and the unrotated circuits display an entanglement phase transition from a volume law regime to an area-law regime.
The last example we study corresponds to a Floquet unitary circuit that displays an MBL transition [64][65][66] . The rotated, non-unitary counterpart again shows two distinct regimes, one where the entanglement scales as volume-law, and another where entanglement shows sub-extensive behavior.
Finally, we introduce an unconventional correlator that can be interpreted both within a unitary circuit and its non-unitary counterpart. We briefly discuss its measurement without employing any post-selection. Using the 'ℓ-bit' picture of MBL 9,10 , we provide a heuristic argument that this correlator exhibits singular behavior across an MBL transition.
The paper is organized as follows. In Sec.II, we provide a brief overview of the idea of space-time rotating a circuit. In Sec.III, we discuss a Floquet model of non-interacting fermions in 1d that displays a localization-delocalization transition due to quasiperiodicity. We then study the phase diagram of the non-unitary circuit that results from its space-time rotation. In Sec.IV we discuss a two dimensional Clifford Floquet circuit that displays a localization-delocalization transition, and then study its space-time rotated version that turns out to be a hybrid circuit consisting of only unitaries and forced projective measurements. In Sec.V, we discuss a 1d interacting Floquet model that displays many-body localization transition and study the phase diagram of its rotated counterpart. In Sec.VI we introduce an unconventional correlator and discuss its physical consequences. Finally, in Sec.VII, we conclude with a discussion of our results.
II. BRIEF OVERVIEW OF SPACE-TIME ROTATION OF A CIRCUIT
Here we briefly review the idea of the space-time rotation of a circuit using a 1d lattice model 56 . Although we specialize to 1d for now, the discussion can be straightforwardly generalized to higher dimensions, as we do so in Sec.IV. We begin by considering the following unitary Floquet circuit for a system of spatial size : (1) As discussed in the introduction, a space-time rotated mapping is constructed by investigating the 'partition function' = tr ( ) . Using the standard quantum-classical mapping, can be expressed as a partition function of × number of classical variables { , } in two dimensions with complex Gibbs weight : The coupling between neighboring spins˜, , , +1 along the time direction results from , in the Floquet unitary , and the coupling constant˜, is determined as , = − /4 + 2 log tan , . To obtain the space-time rotated circuit, one can now define a Hilbert space for number of spins on a given fixed-time-like slice (Fig.1) Finally, by exchanging the labels of space-time coordinates ↔ , one can construct the space-time rotated circuit ( ) that evolves the system for a time and acts on a Hilbert space of size , ( A few remarks are in order. First, ( ) has the space translational invariance resulting from the time translation invariance in the unrotated Floquet circuit . Second, ( ) is generically non-unitary except for the self-dual points , = , = /4 56 . Third, in the special case when , , , , and ℎ are restricted to {0, ± /4}, ( ) corresponds to a hybrid quantum circuit with only unitary gates and forced projective measurements. While a /4 coupling gives unitary operation as just mentioned, , = 0 implies that the spin at site is frozen in the unrotated circuit, and hence, in the rotated circuit, this corresponds to a forced projective measurement of (1 + +1 )/2 on two neighboring spins. Similarly, , = 0 corresponds to a forced projective measurement of (1 + )/2 on a single site. The fact that a forced projective measurement can arise from the space-time rotation of a unitary gate has also been previously noted in Ref. 63 . Finally, once we have obtained the form of , we let the corresponding system size and the evolution time (Eq.3) be free parameters that are independent of the system size and evolution time of the Floquet unitary from which it was obtained. That is, we do not impose the conditions = , = , when we compare various properties of with . Having reviewed the mapping between a unitary and its 'space-time dual', in the rest of the paper we will consider several Floquet unitary circuits that exhibit entanglement transitions due to the physics of localization, and explore the phase diagrams of their space-time duals.
III. SPACE-TIME ROTATION & ENTANGLEMENT TRANSITION IN A QUASIPERIODIC CIRCUIT
As a first example, we consider a Floquet circuit in one space dimension hosting a localization-delocalization transition. We recall that models with quasiperiodic randomness, such as the Aubry-André-Harper (AAH) model [81][82][83] , can evade Anderson localization 84 in 1d. The AAH model is given by where and † are the fermion creation and annihilation operators. When the on-site potential is incommensurate, i.e., the wavenumber is irrational, all single-particle eigenstates are delocalized (localized) for | | > | | (| | < | |) and arbitrary offset . Motivated by this, we consider a Floquet circuit model with the following unitary for a spin-1/2 chain of size with periodic boundary conditions. We choose = 1, and ℎ to be quasiperiodic: (the inverse Golden ratio), and ℎ = 2.5. We note that Ref. 85 studied the incommensurate AAH modulation in the transverse field Ising model, and found that due to the interplay between symmetry and incommensurate modulation, it exhibits a rich phase diagram, including phases with delocalized, localized, and critical states that sometimes also break the Ising symmetry spontaneously.
Using the above Floquet unitary , we construct the corresponding space-time-rotated circuit as discussed above in Sec.II: Notice that the circuit is translationally invariant in space at each fixed time slice, but quasiperiodic in time. Now we discuss the entanglement structure of long-timeevolved states ( ) from a product state | 0 : where is chosen as ( ) and ( ) for the Floquet circuit and its space-time dual respectively. Using the Jordan-Wigner transformation, we map these circuits into a problem involving free-fermions, and numerically compute the entanglement entropy using the correlation matrix technique [86][87][88] (see Appendix.A 1 for the details).
For the unrotated circuit , we find that the entanglement entropy exhibits a volume-law scaling for 0.64 and an area-law scaling for 0.81 ( Fig.2(a)). In the intermediate regime, 0.64 0.81 ( Fig.2(b)) we find that ∼ ( ) with 0 < < 1. Notably, deep in the volume-law phase, the entanglement entropy density / ≈ 0.386 regardless of , which is very close to the average value predicted for random quadratic Hamiltonians of free fermions derived in Ref. 80 : = log 2 − 1 + −1 (1 − ) log(1 − ) ≈ 0.386 at = 1/2. We also explore delocalization properties of the single-particle eigenfuntions of the circuit in terms of free fermions and find three distinct phases (see Appendix.A 2), in line with the late-time entanglement entropy studied here.
We now discuss the space-time-rotated circuit . We find that it also exhibits a transition in the entanglement entropy of long-time evolved states. Fig.2(c) indicates that there is a transition in the entanglement entropy density / at ≈ 0.64: follows a volume law for 0.64, and obeys a sub-volume scaling for 0.64. We also note that in the volume-law regime, the coefficient of the volume law varies continuously, in strong contrast to the volume-law phase of the unrotated unitary circuit. To elucidate the nature of the sub-volume-law regime, we study Vs , and find that it scales logarithmically with the system size : ∼ log( ) (see Fig.2 where is a number that depends on . We also attempted a scaling collapse for the entanglement close to the critical point in the non-unitary circuit, see Appendix A 3. The collapse is reasonably good in the volume-law regime while it doesn't work well in the sub-volume-law regime. We suspect that this may be related to the fact that the coefficient in the logarithmic scaling of entanglement varies continuously with . A heuristic argument relates the physics of localization in the unitary circuit to the physics of quantum Zeno effect 49 in the rotated non-unitary circuit, and also suggests that the aforementioned entanglement transition is likely to occur at = − ℎ ≈ 0.64, in line with our numerical observations. For the unrotated circuit , the condition > implies that some ℎ in the term ℎ is arbitrary close to . Mapping the spin chain to Majorana fermions using the Jordan-Wigner transition, the corresponding location then has a broken bond between two neighboring sites of the Majorana fermions, thereby impeding their propagation. In contrast, from the point of view of the rotated circuit , the spacetime rotation of the term ℎ at ℎ = corresponds to the two-spin gate˜+ 1 with˜= − /4 + 2 log tan( ), which therefore acts as a projector 1 2 (1 + +1 ). Crucially, such a two-site projection occurs uniformly in space (due to the space translational symmetry of the rotated circuit), leading to the absence of volume-law entanglement for time-evolved states.
Perhaps the most surprising aspect of our result is the presence of a volume-law phase since the previous works on non-unitary free-fermion circuits found phases only with sub-extensive entanglement [50][51][52]54 . For hybrid circuits consisting of unitary evolution interspersed with projective measurements, it was found in Ref. 17 that volume-law entanglement in a free-fermion chain is destroyed by the presence of arbitrarily weak measurement. Ref. 46 argued for similar results. However, these results do not contradict ours since in the volume-law phase, our non-unitary circuit does not specifically correspond to unitary evolution interspersed with projective measurements but instead corresponds to more general evolution with a non-Hermitian Hamiltonian (see Eq.5).
To gain intuition for the origin of the volume-law phase, we consider a simplified circuit that has translation symmetry in both space and time: 0 = +1 ℎ , and allow and ℎ to be complex numbers. If 0 is obtained from the space-time rotation of a unitary circuit, a key feature is that the real part of both and ℎ will be /4. Writing = /4 + and ℎ = /4 + ℎ , we find analytically that such a circuit leads to volume-law entanglement at long times for any and ℎ (see Appendix.B). The volume-law phase originates from the fact that when Re( ) = Re(ℎ) = /4, an extensive number of single-particle eigenvalues of the Floquet unitary are real. Setting = ℎ = , and using a simple quasiparticle picture 89 , we find that the volume-law coefficient of entanglement decays exponentially with : / ∼ − for > 0. Therefore, there is no area-law phase in this simplified, translationally invariant model. We numerically verified these results as well. Although we don't have similar analytical results for the circuit (Eq.5), we verified numerically that Re(˜) = Re(h) = /4 (due to the circuit being obtained from the rotation of a unitary, namely ) is again essential to obtain a volume-law phase. In this sense, the volume-law phase of the non-unitary circuit is 'symmetry-protected' by the unitarity of the unrotated circuit.
One may also inquire about the role played by the timetranslation symmetry of the unitary circuit. If one chooses a different unitary circuit for each time slice, then the localization is lost at any and one only obtains a volume-law phase in the corresponding unitary circuit. We verified that the rotated circuit, which now lacks spatial translational symmetry, does not exhibit a phase transition. Therefore, at least for this specific problem, both the unitarity and the translation symmetry plays a crucial role to obtain the entanglement transition.
It was argued in Refs. 21,22 that a stable volume-law entangled phase of pure states in a hybrid unitary-projective circuit is a consequence of the robust error-correcting properties of the circuit against environmental monitoring. Consequently, a maximally mixed state = 1/2 evolved by the circuit will retain a finite residual entropy density up to an extremely long time, indicating stability against purification by monitoring. Motivated by these results, we studied the purification dynamics of a maximally mixed state evolved under our non-unitary circuit and investigated its von Neumann entropy density as a function of time. Remarkably, we find a sharp transition in the entropy density, where for < 0.64 (i.e. the volume-law entanglement phase), the system has a non-zero entropy density even at times , and for > 0.64, the system is purified with a vanishing entropy density in a time that is independent of the system size (see Fig.3 and Appendix.A 4).
IV. SPACE-TIME ROTATION & ENTANGLEMENT TRANSITION IN A 2D CLIFFORD CIRCUIT
We next explore entanglement transitions in a twodimensional Floquet model and its space-time dual. We consider the following Floquet unitary on a square lattice of size × : Here each , ℎ is chosen to be 0 or 1 with probability and 1− respectively. This is a Clifford circuit since it maps a Pauli string to another Pauli string: 4 1 2 − 4 1 2 = 1 2 for = 1, 2 and 4 − 4 = . Therefore it can be efficiently simulated based on the Gottesman-Knill theorem [90][91][92] . The construction of the circuit is motivated from Ref. 93 , although it differs from the precise circuit discussed in that work.
To construct the space-time-rotated circuit, we interchange the time coordinate and one of the space coordinates while leaving the other space coordinate unchanged. This results in the mapping as follows. Since coordinates are unchanged, the gate − 4 , with being a -directed bond, is invariant under the space-time-rotated mapping. The gate on an -directed bond in the non-unitary circuit for ℎ = 1, 0 respectively. Therefore, the rotated circuit consists of unitary evolution interspersed with forced projective measurements, and is given by where for each and , with probability and 1 − . Note that has translation symmetry along inherited from the time translation symmetry in the unrotated Floquet circuit. Now we discuss the entanglement structure of long-timeevolved states. For both unrotated and rotated circuit, we find an entanglement transition between a volume-law phase and an area-law phase at the same finite critical probability = ≈ 0.28 (see Fig.4). Assuming the following scaling form of entanglement entropy | ( ) − ( )| = (( − ) 1/ ), we find that the correlation length exponent however differs in the two circuits ( ≈ 0.38 for the unrotated circuit and ≈ 0.49 for the rotated one). The coefficient of the volume-law entanglement varies continuously in both circuits and vanishes continuously across the phase transition.
We also analyzed entanglement scaling at the critical point, and found that both in the rotated and the unrotated circuit, the data is indicative of the scaling ∼ log , which is reminiscent of results in Refs. 34,40,51 , see Appendix C. However, as pointed out in Ref. 42 , on small system sizes, a slight error in the location of the critical point can make an area-law scaling, ∼ , appear as ∼ log scaling. Therefore, one may need to study larger system sizes to be conclusive. As an aside, we note that the scaling form ∼ log is not allowed for a system described by a unitary, Lorentz invariant field theory at low energies due to the constraint 2 / 2 ≤ 0 94 .
One may ask whether the time-translation symmetry is crucial to obtain the observed transitions. Specifically, consider a circuit where independent unitaries of the form in Eq.7 are applied at each time slice. In the (unrotated) unitary circuit, as one might expect, breaking time-translational invariance always leads to volume-law entanglement [67][68][69][70][71][72][73][74] . We confirmed that rotating such a circuit leads to a hybrid projectiveunitary circuit that also always exhibits a volume-law scaling. This is because the problem now essentially corresponds to anisotropic bond-percolation in three dimensions where no bonds are removed along one of the directions (namely ) and are removed with probability along the other two directions ( and ). Such a model is known to not exhibit a percolation transition for any 95 .
One may also consider the Floquet circuit (Eq.7) and its space-time dual (Eq.8) in 1d. In this case, however, both the unitary circuit and its rotated counterpart are in the arealaw phase for any non-zero . To see this, let's consider the unitary circuit and notice that when = 0, the spatial support of a single-site Pauli operator grows with time, leading to volume-law entanglement at long times. On the other hand, when ≠ 0, there is a finite density of locations (of (1/ )) where the or the gates are absent. These locations impose a 'wall' such that the end of a stabilizer string cannot grow beyond these walls. This leads to area-law entanglement (1/ ). In contrast, the 2d circuit discussed above allows for a volume-law phase for small non-zero since a local Pauli stabilizer spreads as a membrane that can bypass / for the same system as Fig.(a) the points corresponding to the absent or gates. Such a picture suggests that the entanglement transition may be related to a percolation transition, similar to Ref. 93 . However, the correlation length exponent we numerically obtained differs from the prediction of percolation in two dimensions. It would be be worthwhile to revisit this question in more detail in the future.
V. SPACE-TIME ROTATION OF AN INTERACTING FLOQUET MBL CIRCUIT
Finally, we present numerical results on an interacting Floquet model of the form in Eq.1: where = 0.8, and ℎ is a Gaussian random variable with mean ℎ = 0.8090 and variance = 1.421. As shown in Ref. 66 , tuning induces a transition from an MBL to an ergodic phase, where the Floquet eigenstates exhibit area-law entanglement for small and volume-law entanglement for large . Here we study the corresponding space-time dual non-unitary circuit.
As a benchmark, we first confirm the MBL-ergodic transition found in Ref. 66 for the Floquet unitary circuit. Using Exact Diagonalization (ED), we study the half-chain entanglement entropy averaged over all eigenstates of , and average the data from 200 random realizations of . We find clear signatures of a transition from a sub-extensive regime to a volume-law regime at finite = , . Since eigenstates are localized for < , and are expected to resemble an infinite-temperature pure state (i.e. a random pure/Page state 96 with entanglement entropy = 0.5( log 2 − 1)) for any > , , we perform a data collapse assuming the scaling form / = (( − , ) 1/ ), and find the critical point , ≈ 0.23 with the correlation length exponent = 1.09 ( Fig.5 (a) inset).
The space-time-rotated circuit corresponding to is where the field ℎ is now random in the time direction due to the space-time rotation, and the couplings˜,˜are defined in Sec.II. We first analyze the entanglement structure of states evolved via for times ∼ . We find signatures of a transition by tuning (see Fig.5 (b)). In particular, when one plots entanglement entropy density, one finds a crossing at ≈ 0.6 (see Fig.5 (c)), which separates a regime with volumelaw entanglement from a regime where the entanglement is sub-extensive.
Finally, we study the entanglement dynamics of an ancilla qubit that is initially maximally entangled with the system, following the protocol in Refs. 21,22 . We evolve the system for time ∼ , and find a crossing around ≈ 0.4 (see Fig.5 (d)). In addition, the entanglement of the ancilla qubit shows distinct features on two sides of this crossing (see Appendix.D for numerical data). For 0.4, the entanglement entropy of the ancilla qubit decays from its initial value ( = log 2) exponentially with time, while for 0.4, it remains at its initial value for a while (i.e. exhibits a 'plateau'), followed by an exponential decay. To quantify the plateau interval, we define a 'purification time' as the time after which the entanglement of the ancilla qubit has dropped below 0.65 (≈ 0.94 log(2)). We find ≈ (1) for 0.4 while it increases with system size for 0.4 (Fig.5 inset). Notably, for large enough ( 0.7), we find that grows super-linearly with , and therefore the non-unitary circuit may potentially serve as a good quantum error-correcting code.
Finally, we note that different values of the crossing points in different measures suggest that the finite size effects are likely strong at these system sizes. However, at the very least, the trends strongly indicate a stable volume law phase at 0.6 (see Fig.5 (c)), and a phase with sub-extensive entanglement at small but non-zero .
VI. SPACE-TIME ROTATED CORRELATORS: POST-SELECTION FREE MEASUREMENT AND PHYSICAL CONSEQUENCES
Since the circuits related by space-time rotation have the same bulk action (see the Introduction and Fig.1), it is natural to seek a relation between their physical observables. At the outset, one notices that conventional correlation functions such as 0 | † | 0 in the unitary circuit are not related to similarly defined correlations functions in its space-time rotated non-unitary , such as 0 | † | 0 / 0 | † | 0 . Referring to Fig.1, this is because in the former case, the fields 0 and are held fixed to define the wavefunction, and the fields 0 and are being summed over, while in the latter case, it is the other way around. However, consider the following object (see Fig.6): Since the action is invariant under space-time rotation and one is summing over all fields in the above integral, has a well-defined meaning in both the rotated and unrotated circuits: where ( , ) and ( , ) are evolution operators from time to when < , while when > , ( , ) = ( , ) (0, ), and ( , ) = ( , ) (0, ) (see Fig.6 for definitions of , ).ˆ1,ˆ2 are operators corresponding to the fields 1 , 2 in Eq.11, whose space-time insertion locations are shown in Fig.6.
The correlation functions in Eqs.11, 12 are rather unconventional since there is no 'backward trajectory' as in the standard Keldysh expression 97 It can be interpreted in two different ways: either as a correlation function for a system evolving unitarily with circuit , or as a correlation function for a system evolving with the rotated non-unitary circuit , see Eq.12.
such as 0 | † | 0 . To measure such correlators experimentally, one may employ the idea of a control qubit that generates two branches of a many-body state [99][100][101] . For example, to measure 0 | 1ˆ1 2ˆ2 | 0 for some 1 , 2 and a product state | 0 , the total system is initially prepared in a state | 0 ⊗ (| ↑ + | ↓ ) where the expression after ⊗ denotes the state of the control qubit. Using standard techniques [99][100][101] , one then applies the operator 2ˆ2 on the 'up-branch' of this initial state, i.e., the state | 0 ⊗ | ↑ , and similarly, one applies the operatorˆ † 1 † 1 on the down branch. Finally, one measures, the expectation value of the and the operators that act on the control qubit, which yields the object of interest, namely, the real and imaginary parts of 0 | 1ˆ1 2ˆ2 | 0 . The trace in Eq.12 would then need to be approximated by sampling over several such expressions, although even a single/few such expressions may sometime capture the qualitative aspects of interest (see below).
We also studied the correlator in the 1d free-fermion circuit discussed in Sec.III as well as the 2d Clifford circuit discussed in Sec.IV. We found that the correlator fails to distinguish between the localized phase and the delocalized phase in either of these circuits for distinct reasons. For a localized freefermion circuit, the time-evolved operator ( ) continues to have a non-zero overlap with at arbitrarily long times, i.e. ( ) = + .... However, due to the lack of dephasing in free-fermion circuits (see e.g. Ref. 102 ), the terms under '...' do not vanish even at long times despite averaging over disorder, and their contribution fluctuates in time significantly at all times. Consequently, the spacetime-rotated correlator does not provide a clear signature across the localization transition.
On the other hand, for a Clifford circuit, the correlator does not differentiate between a localized phase and a delocalized phase due to the absence of the 'ℓ-bit' picture ( ) = + ... . Specifically, ( ) will always be a single product of Pauli operators over various sites, and the localization/delocalization phase manifests in the bounded/unbounded spatial support of ( ), instead of the relative weight of vari-ous operators. Therefore, our aforementioned argument in the context of generic MBL systems does not apply.
We note that Ref. 63 discussed an alternative method to relate quantities between a unitary circuit and its rotated non-unitary counterpart. In particular, Ref. 63 considered a protocol where the purification dynamics in the non-unitary circuit can be obtained by a combination of unitary dynamics and projective measurements.
VII. SUMMARY AND DISCUSSION
In this work, we employed the idea of the space-time rotation of unitary circuits to construct non-unitary circuits that display entanglement phase transitions. We focused on specific Floquet unitary circuits that display localization-delocalization transitions of various kinds (free fermion, Clifford, manybody). We found that the delocalized (localized) regime of the unitary circuit maps to a regime with volume-law (arealaw/critical) entanglement in the corresponding non-unitary circuit. Therefore, the space-time rotation maps the physics of localization to the physics of quantum Zeno effect. We also found that the entanglement transitions in the non-unitary circuits are accompanied by purification transitions of the kind introduced in Refs. 21,22 . We introduced an unconventional correlator in the non-unitary theory that can in principle be measured without requiring any post-selection, and provided a heuristic argument that this correlator is singular across an MBL transition.
Our procedure leads to the construction of a non-unitary free fermion circuit that supports volume-law entanglement, which has hitherto been elusive [50][51][52]54 . As discussed in Sec.III, we find that a non-unitary circuit obtained by the rotation of a free fermion unitary circuit has the special property that the real parts of certain hopping elements are automatically pinned to /4. This leads to volume-law entanglement when the nonunitary circuit has translational symmetry in both space and time, and the possibility of a volume-law to area-law transition when disorder is introduced in the non-unitary circuit along the time direction.
Given our results, it is natural to ask if the space-time rotation of a unitary circuit hosting a localization-delocalization transition always leads to a non-unitary circuit that also shows an entanglement transition. Firstly, we note that a localization-delocalization transition in a unitary system will induce a singularity in the spectral form factor since the spectral form factor is well known to be sensitive to quantum chaos. Due to our mapping, the spectral form factor for the nonunitary theory will also be singular across the transition (since = | tr | 2 = | tr | 2 ). Recent progress 103 shows that at least for a class of non-unitary evolution, the spectral form factor continues to encode features of quantum chaos. Further, as discussed in Sec.VI, a correlator that is well-defined in both the unitary and the non-unitary theory can be argued to be singular across an MBL transition. However, this correlator is a bit hard to interpret physically within the non-unitary theory. It will be worthwhile to pursue a physical understanding of the spectral form factor and the correlator in Sec.VI for the non-unitary theories studied in this paper.
We also explored the role played by the time-translation symmetry of the unitary circuit. In the examples we studied, breaking of time-translation symmetry also leads to the absence of entanglement transition in the rotated non-unitary circuit. We suspect that the entanglement transitions in nonunitary circuits that are space-time dual of time-translationally invariant unitary circuits belong to a different universality class compared to those hosted by non-unitary circuits where such a symmetry is absent.
As argued in Ref. 63 , if a non-unitary circuit is related to a unitary circuit via space-time rotation, then at least some of its properties (such as the purification rate) may be obtained purely via unitary evolution combined with a small number of projective measurements. Furthermore, as discussed in Sec.VI, an unconventional correlator in the non-unitary theory can be measured using only unitary operations. Applying these results to the examples discussed in this work would potentially allow one to access the physics of entanglement transitions in hybrid projective-unitary circuits without post-selection.
We note that Ref. 52 introduced an interesting relation between non-unitary circuits of free fermions in + 1 spacetime dimensions and the Anderson localization-delocalization transition for Hermitian Hamiltonians in + 1 space dimensions. The basic idea employed is to relate the circuit in + 1 space-time dimensions to the scattering matrix that describes the Chalker-Coddington model 104 in + 1 dimensional space. In contrast, our work focuses on relating a unitary and a non-unitary system that live in the same number of spacetime dimensions. It might be worthwhile to understand the volume-law phase in our non-unitary circuit of free fermions (Sec.III) and its higher dimensional generalizations from the perspective in Ref. 52 .
Note Added: After the completion of this work, we became aware of a related work 105 (appearing in the same arXiv posting) which also considers entanglement dynamics in spacetime duals of unitary circuits. Our works are largely complementary and agree where they overlap.
Since the Floquet dynamics does not conserve the total fermion number, it is more convenient to employ the Majorana fermions by defining 2 −1 = + † and 2 = ( − † ), which satisfy { , } = 2 . The Floquet unitary defined in Eq.A1 then reads Since is Gaussian in Majorana fermions, the Majoranas evolve under as † where is an orthogonal matrix. Correspondingly, the Majoranas at time can be obtained by repeatedly applying the orthogonal transformation on { }: ( ) = ( ) . Using this formalism, we can calculate the correlation matrix at time : Γ ( ) = ( ) ( ) − , from which the entanglement entropy between a region and its compliment can be found by diagonalizing Γ ( ), the restriction of the correlation matrix to the region (Refs. [86][87][88] ): with {± } being the 2 eigenvalues of Γ ( ). In the left panel, the black and gray dashed lines serve as a reference for the scaling laws IPR ∼ and IPR ∼ √ respectively.
Single-particle eigenfunctions of Floquet unitary
Here we discuss the properties of the single-particle eigenfunctions of in terms of the Majorana fermions. We find signatures of three distinct phases, in line with the results from entanglement entropy of long-time-evolved states (Sec.III in the main text). Specifically, we study the inverse participation ratio: where is an eigenfunction of at the -th Majorana site. We recall that the IPR is a conventional tool to quantify the localization/delocalization property of wavefunctions. In one spatial dimension, an extended (delocalized) wavefunction has | | ∼ (1/ √ ), implying IPR ∼ ( ). On the other hand, a localized wavefunction is mainly supported on a finite number of lattice sites, yielding IPR ∼ (1). Here we study the IPR averaged over all eigenstates of , and find that the averaged IPR (denoted as IPR ) exhibits three different scalings with the system size as the modulation strength is varied, similar to the entanglement entropy of long-time-evolved many-body states. For small , IPR scales as ( ), a signature of a delocalized phase, while for large , IPR ∼ (1), corresponding to a localized phase. In addition, there is an intermediate regime (0.64 0.8), where IPR scales as ( ) with ∼ 0.5 (see Fig.8).
Scaling collapse of entanglement entropy
In Fig.9 we provide numerical data for the scaling collapse of the late-time entanglement entropy for the rotated circuit (Eq.5).
Purification dynamics
Here we present additional numerical results on the purification dynamics of a density matrix that is initially in a completely mixed state (i.e. ( = 0) ∝ 1) and is evolved with the non-unitary circuit defined in Eq.5 (see Fig.10). At = 0.2 (i.e. in the volume-law phase), entropy density / decreases at short times and remains non-zero for the longest observed time ( ∼ 2 ). At = 1.2 (i.e. in the critical phase), entropy density decreases exponentially to zero within a characteristic time scale that is independent of .
For | /4| < 1, we find = 4 ± √︃ 1 − 4 2 , and the corresponding energy = ± √︃ − 2 is real. Now let's solve for the inequality | /4| < 1 analytically for certain simple cases to identify the -modes with purely real single particle energy . For = 0, one has |cos | < 1 cosh(2 ℎ ) , and for any finite (i.e. non-infinite) ℎ , there is a finite interval of with purely real energy (see also Fig.11 left): . (B10) Within the quasiparticle picture 89 , since only those quasiparticle pairs with purely real energy have an infinite lifetime, Eq.B10 implies the existence of finite density of such quasiparticle pairs, resulting in the volume-law entanglement in long-time-evolved states at any non-infinite ℎ . In particular, the volume-law coefficient of entanglement entropy follows / ∼ ∫ ∈ ( ). (defined in Eq.B10) specifies the interval of -modes with purely real energy. ( ) is the entanglement contributed from the quasiparticle pair with momentum , and is a non-universal function determined from the initial state. For large ℎ , since the length of interval decays exponentially as −2 ℎ , the volume-law coefficient / decays exponentially as well: where > 0 is a non-universal number that depends on the initial state.
Another simple case is = = ℎ , where the corresponding modes with real energy satisfy 0 < < 1 = cos −1 cosh(4 )−3 cosh(4 )+1 . For large , one finds 1 ∼ −2 , implying the volume-law coefficient where > 0 is a non-universal number that depends on the initial state. Although here we only discuss two cases (varying ℎ at fixed = 0 and varying = ℎ = ), we checked that the condition Re( ) = Re(ℎ) = /4 always gives extensive number of -modes with purely real energy, indicating volume-law entanglement. In strong contrast, any deviation from Re( ) = Re(ℎ) = /4 gives (1) number of -modes with purely real energy, resulting in the absence of volume-law entanglement (see Fig.11 middle and right). | 9,924.6 | 2021-03-10T00:00:00.000 | [
"Physics"
] |
Non-cytotoxic Cobra Cardiotoxin A5 Binds to αvβ3 Integrin and Inhibits Bone Resorption
Severe tissue necrosis with a retarded wound healing process is a major symptom of a cobra snakebite. Cardiotoxins (CTXs) are major components of cobra venoms that belong to the Ly-6 protein family and are implicated in tissue damage. The interaction of the major CTX from Taiwan cobra, i.e. CTX A3, with sulfatides in the cell membrane has recently been shown to induce pore formation and cell internalization and to be responsible for cytotoxicity in cardiomyocytes (Wang, C.-H., Liu, J.-H., Lee, S.-C., Hsiao, C.-D., and Wu, W.-g. (2006) J. Biol. Chem. 281, 656-667). We show here that one of the non-cytotoxic CTXs, i.e. CTX A5 or cardiotoxin-like basic polypeptide, from Taiwan cobra specifically bound to αvβ3 integrin and inhibited bone resorption activity. We found that both membrane-bound and recombinant soluble αvβ3 integrins bound specifically to CTX A5 in a dose-dependent manner. Surface plasmon resonance analysis showed that human soluble αvβ3 bound to CTX A5 with an apparent affinity of ∼0.3 μm. Calf pulmonary artery endothelial cells, which constitutively express αvβ3, showed a CTX A5 binding profile similar to that of membrane-bound and soluble αvβ3 integrins, suggesting that endothelial cells are a potential target for CTX action. We tested whether CTX A5 inhibits osteoclast differentiation and bone resorption, a process known to be involved in αvβ3 binding and inhibited by RGD-containing peptides. We demonstrate that CTX A5 inhibited both activities at a micromolar range by binding to murine αvβ3 integrin in osteoclasts and that CTX A5 co-localized with β3 integrin. Finally, after comparing the integrin binding affinity among CTX homologs, we propose that the amino acid residues near the two loops of CTX A5 are involved in integrin binding. These results identify CTX A5 as a non-RGD integrin-binding protein with therapeutic potential as an integrin antagonist.
and constitute ϳ50% of the weight of cobra venom. CTXs are believed to play a critical role in cobra venom toxicity (5). We have shown that CTXs bind to glycosaminoglycans with specificity and are retained on the membrane surface for action (6 -8). Interestingly, one of the CTXs, CTX A3, interacts with sulfatide to form a pore and becomes internalized to further target mitochondria in cardiomyocytes and H9C2 myoblasts (9 -12). The mechanisms for CTX-induced perturbation of the wound healing process and severe tissue damage are unknown. It is not clear whether there are cellular receptors for CTXs, although other cobra venom components such as secretory phospholipase A 2 are known to have diverse targets for their actions by involving glycosaminoglycans, protein receptors, and membrane lipids (13,14).
CTXs are all -sheet basic polypeptides of 60 -62 amino acid residues with a three-fingered loop-folding topology and are members of the Ly-6 protein family. Members of this family share one or several repeat units of the Ly-6 domain, which is defined by a distinct disulfide bonding pattern of between 8 and 10 cysteine residues (15). This protein family can be divided into two subfamilies. One subfamily includes the secreted single domain snake cytotoxins (e.g. CTXs and neurotoxins), which possess only eight cysteines and no glycosylphosphatidylinositolanchoring signal sequence. Another subfamily comprises glycosylphosphatidylinositol-anchored glycoprotein receptors with 10 cysteine residues (e.g. urokinase-type plasminogen activator (uPA) receptor with three Ly-6 domains (16).
Recent advances in understanding the structure and mechanism of toxin interaction with integrins (29 -31) have allowed the development of small molecule antagonists with therapeutic potential (32)(33)(34). For instance, echistatin has been shown to be a potent inhibitor of bone resorption both in culture and in an animal model by directly interacting with ␣v3 integrin via the RGD sequence (35). ␣v3 in osteoclasts, multinucleate cells (MNCs) formed by the fusion of mononuclear pro-genitors of the macrophage family, is the key integrin in mediating the formation of the fused polykaryon in the late stage of osteoclast differentiation and in osteoclast adhesion during bone resorption (36,37). A peptidomimetic antagonist of ␣v3 based on the RGD sequence inhibits bone resorption in vitro and prevents osteoporosis in vivo (38).
We have previously reported that the uPA receptor binds to several integrins and that this interaction plays a critical role in signal transduction from uPA and the uPA receptor (39). Based on the structural similarity between CTX and the uPA receptor, we hypothesized that integrins may be involved in the action of CTXs. In the present study, we demonstrate that several CTXs specifically bound to ␣v3 integrins. Of all CTX homologs with known three-dimensional structures tested, non-cytotoxic CTX A5 from Taiwan cobra (Naja atra) exhibited the strongest binding to human ␣v3 at a K d of ϳ0.3 M. We also show that CTX A5 binding to ␣v3 effectively inhibited bone resorption and differentiation of murine osteoclasts. Comparison of the binding affinity of all studied CTXs for ␣v3 revealed a potential role for the amino acid residues located at the two loops of the three-fingered CTXs. These results suggest a potential role for integrins in the actions of CTXs and that non-RGD CTX A5 has therapeutic potential as an integrin antagonist.
EXPERIMENTAL PROCEDURES
Materials-CTXs were purified from the crude venom of N. atra, Naja mossambica, and Naja nigricollis by SP-Sephadex C-25 ion exchange chromatography, followed by high pressure liquid chromatography on a reverse-phase C 18 column as described previously (40,41). The purity of the studied CTXs, including rhodamine-labeled CTX A5 (12), was checked by mass spectrometric analysis routinely during the purification process to be consistent with the known protein sequences. The fibrinogen ␥-chain C-terminal globular domain (␥C; amino acids 151-411) was synthesized in bacteria as an insoluble protein and refolded as described previously (42). Recombinant soluble ␣v3 was synthesized in CHO-K1 cells using the soluble ␣v and 3 expression constructs provided by Dr. Tim Springer (Center for Blood Research, Boston, MA) and purified by nickel-nitrilotriacetic acid affinity chromatography as described (20). Mouse anti-human ␣v3 integrin monoclonal antibody LM609 was from Chemicon International, Inc. (Temecula, CA). Medium 200 (catalog no. M-200-500) and low serum growth supplement (catalog no. S-003-10) were purchased from Cascade Biologics for testing cell proliferation under low serum conditions. Rabbit anti-human 3 integrin polyclonal antibody was purchased from Chemicon International, Inc. Recombinant mouse RANKL and macrophage colony-stimulating factor (M-CSF) were obtained from R&D Systems (Minneapolis, MN). The BD BioCoat TM Osteologic TM bone cell culture multitest system was obtained from BD Biosciences. The GRGDSP and GRGESP peptides were purchased from American Peptide Co. Inc. (Sunnyvale, CA). Naphthol AS-MX phosphate and fast red violet LB salt were obtained from Sigma.
Cells-Chinese hamster ovary (CHO) cells expressing human 3 integrin (designated 3-CHO cells) have been described previously (43). 3-CHO cells express a hamster ␣v/human 3 integrin hybrid. As a control, CHO cells were transfected with the pBJ-1 vector together with the neomycin gene and were selected for G418 resistance (designated mock-transfected CHO cells). Calf pulmonary artery endothelial (CPAE) cells (CCL-209, between passages 17 and 23) were obtained from American Type Culture Collection and cultured in minimal essential medium supplemented with 10% fetal bovine serum, and penicillin/ streptomycin. RAW 264.7 murine monocytic cells (American Type Culture Collection TIB-71) were maintained in Dulbecco's modified Eagle's medium (Sigma) supplemented with 10% fetal bovine serum, 100 units/ml penicillin G, and 100 g/ml streptomycin.
Binding of Soluble ␣v3 Integrin to CTXs-Binding assays were performed as described previously (43). Briefly, 96-well microtiter plates were coated with 100 l of 0.1 M NaHCO 3 (pH 9.4) containing CTXs at 1ϳ5 M and incubated for 16 h at 4°C. The remaining protein-binding sites were blocked by incubation with 0.1% bovine serum albumin (BSA; Sigma) for 1 h at room temperature. Soluble horseradish peroxidaseconjugated ␣v3 in 50 l of HEPES/Tyrode's buffer supplemented with 10 mM cation or 5 mM EDTA was added to the wells and incubated at room temperature for 1 h. After non-bound soluble integrins were removed by rinsing the wells with the same buffer, bound integrins were quantified by measuring the absorbance at 450 nm and developed by adding the 3,3Ј,5,5Ј-tetramethylbenzidine substrate for horseradish peroxidase.
Cell Adhesion, Proliferation, and Viability Assays-The 96-well microtiter plates were coated with CTXs as described above. In adhesion assays, cells (10 5 cells/well) in 100 l of HEPES/Tyrode's buffer supplemented with 1 mM cation were added to the wells and incubated at 37°C for 1 h. After non-bound cells were removed by rinsing the wells with the same buffer, bound cells were quantified by measuring endogenous phosphatase activity (44). For CTX-induced cytotoxicity, we defined cell proliferation for the long time effect (24 or 48 h) and cell viability for the short time effect (2 h). CPAE cells (1 ϫ 10 5 cells/well) were incubated with CTXs at the indicated concentrations in 96-well cell culture plates at 37°C for 48 h. Proliferation of CPAE cells was measured using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. In brief, after culture, MTT solution (2 mg/ml) was added to each well and incubated for 1 h. The precipitated dye was solubilized by dimethyl sulfoxide (Sigma), and the absorbance at 570 nm was measured. The absorbance measured at 570 nm was proportional to the number of living cells in culture.
Inhibition of CTX-␣v3 Interaction by ␥C-␥C (10 g/ml) (45) was immobilized on the 96-well plates. Soluble ␣v3 integrin was added to the wells in the presence of CTXs at various concentrations. Bound integrin was detected in binding assays as described above.
Surface Plasmon Resonance Binding Studies-Surface plasmon resonance (SPR) experiments were performed on Biacore X instruments (Biacore International AB, Uppsala, Sweden). ␣v3 integrin was immobilized on a Biacore CM5 chip at a constant flow rate of 5 l/min by the amine coupling method (46). In general, the dextran surface was activated by injection of a 1:1 mixture of 0.1 M N-hydroxysuccinimide and 0.39 M 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide hydrochloride for 7 min ␣v3 (in 10 mM acetate (pH 4.0) and preincubated with 1 mM Mn 2ϩ ) was injected over the activated surface for 7 min. Finally, 35 l of 1 M ethanolamine was used to deactivate the surface. The difference in the base line before and after immobilization accounts for the integrin immobilized. One resonance unit corresponds to 1 pg of protein accumulated per mm 2 . The reference surface was activated by the same activation/deactivation procedure except without injection of the integrin solution. Because of the relatively small size of CTXs (ϳ7 kDa) compared with integrin (ϳ200 kDa), a high surface density integrin surface (ϳ5000 resonance units) was prepared to observe the dominant binding response. All of the observed binding responses were found to be fast; and therefore, there was no significant mass transport effect under the experimental conditions.
All binding experiments were performed at 25°C at a continuous flow rate of 40 l/min with HEPES-buffered saline (10 mM HEPES-NaOH (pH 7.4), 150 mM NaCl, and 0.005% (v/v) Tween 20). The equilibrium binding response of CTX, which was the response difference between the integrin-immobilized surface and the reference surface, was used to estimate the binding affinity. Upon titration with various CTX concentrations, the binding response was not a simple Langmuir equilibrium binding isotherm. The binding affinities were determined from the slopes of the Scatchard plots at the reported concentration range.
Osteoclast Differentiation Assays-RAW 264.7 cells (2 ϫ 10 3 cells/ well) were cultured in 96-well plates in the presence of RANKL (50 ng/ml) and M-CSF (10 ng/ml) for 5 days with a change to fresh medium every 2 or 3 days. Osteoclast-like cells were evaluated by tartrate-resistant acid phosphatase (TRAP) staining. TRAP staining was performed according to Evans et al. (47) with a slight modification. After the culture period, adherent cells were fixed in 10% formaldehyde for 10 min. Cells were then stained for 30 min for TRAP activity with 0.1 mg/ml naphthol AS-MX phosphate as a substrate and 0.6 mg/ml fast red violet LB salt as a stain. This staining was performed in 0.1 M sodium acetate buffer (pH 5.0) containing 50 mM sodium tartrate, followed by counterstaining with hematoxylin. TRAP-positive MNCs with more than three nuclei were considered to be osteoclast-like cells and were counted under a microscope. For measurement of TRAP intensity, the plates were scanned by a transparent light scanner, and the red color image was extracted from the scanned image using ImageJ and is represented here as TRAP intensity (48).
Bone Resorption Assays-Bone resorption assays were performed according to Ariyoshi et al. (49) with a slight modification. RAW 264.7 cells were cultured for 10 days with RANKL (50 ng/ml), M-CSF (10 ng/ml), and various concentrations of CTX A5 on BD BioCoat TM Osteologic TM multitest slides, which are calcium phosphate ceramic thin film-coated quartz disks, to quantify osteoclastic cell-mediated mineral resorption. After the culture period, cells were removed using 6% NaOCl and 5.2% NaCl. The resorption area was observed under a light microscope and counted using ImageJ.
Osteoclastic Cell Viability-Cell viability was evaluated by the MTT assay. RAW 264.7 cells were plated in 96-well plates at a concentration of 2 ϫ 10 3 cells/well 1 day before the experiment and then stimulated FIGURE 1. Binding of CTXs from Taiwan cobra venom as measured by enzymelinked immunosorbent and SPR binding assays. A, soluble ␣v3 integrin was incubated with various CTXs (5 M) coated on 96-well microtiter plates for 1 h at room temperature in HEPES/Tyrode's buffer supplemented with 10 mM Mg 2ϩ . Flavoridin (FLV; 20 l/ml) and BSA (1%) were used as controls. B, the SPR experiments were performed using a Biacore X instrument. The Scatchard plots demonstrate CTX binding to immobilized ␣v3 (10,000 resonance units (RU)) with preincubation with 1 mM Mn 2ϩ before immobilization procedures. C, shown are representative response traces of the indicated CTXs (2, 5, and 10 M) after passage over an immobilized ␣v3 surface (5000 resonance units). Data are expressed as the relative response after subtraction of the background signal recorded on a reference surface. Note that CTXs A2 and A4 exhibited larger responses compared with CTX A3 even though they bound more weakly to integrin according to the Scatchard plots. with RANKL and M-CSF. Stimulated cells were cultured in the presence or absence of CTX A5 for 5 days.
Confocal Microscopy-Rhodamine-labeled CTX A5 was prepared as described previously (12). Macrophages were cultured with RANKL and M-CSF for 5 days. To monitor the location of CTX A5, 5 M rhodamine-labeled CTX A5 was applied to the culture in Hanks' balanced salt solution (HBSS) for 5 min at 37°C before fixation in formaldehyde. After CTX A5 was applied and then washed with 1% BSA in HBSS (10), 3 integrin was detected using anti-3 integrin polyclonal antibody dissolved in HBSS with 1% BSA and washed twice with HBSS with 1% BSA and once with HBSS. The fluorescein isothiocyanate-conjugated secondary antibody was then applied. Co-localization is depicted in yellow. Images were recorded using a Zeiss LSM 510 confocal microscope with a full-width half-maximum of 0.6 m to set a pinhole size.
Scanning Electron Microscopy-After removing cells with NaOCl and NaCl as described above, a BD BioCoat TM Osteologic TM multitest slide was treated by ultrasonication in water for 0.5 h. It was then allowed to dry in vacuum before being coated with gold and examined by scanning electron microscopy (Hitachi S-4700) at an accelerating voltage of 5 kV and an emission current of 9500 nA (50).
RESULTS AND DISCUSSION
We tested whether recombinant soluble ␣v3 integrin binds to immobilized CTXs from Taiwan cobra venom in an enzyme-linked immunosorbent assay-type assay. As shown in Fig. 1A, all CTXs, i.e. CTX A1 to A6, exhibited detectable binding compared with the positive control flavoridin and the negative control BSA. Soluble ␣v3 bound to immobilized CTX A5 most efficiently. We determined how strongly the soluble CTXs bound to immobilized ␣v3 in SPR studies. We immobilized soluble ␣v3 by amine coupling on a biosensor chip and measured the CTX binding to the chip surface. SPR measurement (Fig. 1B) confirmed the high affinity of CTX A5 for ␣v3 with an apparent dissociation constant of ϳ0.3 M. As shown in the biosensor response and related Scatchard plots (Fig. 1, B and C), CTXs bound to ␣v3 with different affinities (CTX A5 Ͼ A3 Ͼ A2 and A4 Ͼ A1 and A6) in a dose-dependent manner. For CTX A6, which exhibited the weakest response, the Scatchard plot at low concentration range indicated a binding affinity of ϳ15 M. Therefore, there is a difference of ϳ2 orders of magnitude in terms of CTX affinity for ␣v3. Although other CTXs such as CTX A3 target sulfatides and induce CTX pore formation in the plasma membrane of cardiomyocytes, CTX A5 lacks observable cytotoxicity (40 -41) and is thus known as a cardiotoxin-like basic polypeptide. We examined whether CTX A5 acts as an integrin ligand in more biological systems.
We first studied whether CHO cells expressing recombinant ␣v3 integrin (designated 3-CHO cells) adhere to the CTX A5-coated surface. We found that 3-CHO cells adhered to the CTX A5-coated surface in a dose-dependent manner at higher levels compared with control CHO cells transfected with vector only (designated mock-transfected CHO cells) (Fig. 2). These results suggest that ␣v3 is involved in the binding of CTX A5 to the cell surface. Because mock-transfected CHO cells showed low level adhesion, it is possible that other integrins or non-integrin receptors may also be involved in the binding of CTX A5 to the cell surface. Soluble ␣v3 integrin bound to immobilized CTX A5, but not significantly to CTX A6 (Fig. 3B; for a sequence comparison between CTXs A5 and A6, see Fig. 3A). Saturated binding was observed for CTX A5 with an apparent affinity of ϳ1 M. Mn 2ϩ (10 mM) or Mg 2ϩ (10 mM) enhanced its binding to CTX A5 by ϳ5and 2-fold, respectively, and EDTA significantly reduced the binding of ␣v3 integrins to CTX (Fig. 3C). Mn 2ϩ and Mg 2ϩ enhanced the binding of soluble ␣v3 to CTX A6 and the flavoridin disintegrin as well. These results suggest that the cation dependence of CTX A5 is similar to that of other known integrin ligands.
We tested whether CTXs compete for binding to soluble ␣v3 integrin with ␥C. As shown in Fig. 4, CTX A5 competed with ␥C for binding to ␣v3, but CTX A6 did not, consistent with the observation that CTX A6 shows weak binding to ␣v3. These results suggest that CTXs share a common binding site in ␣v3 with ␥C.
We investigated whether the difference in integrin binding among CTXs is related to their cytotoxicity. CPAE cells constitutively express ␣v3 integrin on their surface. As shown in Fig. 5A, CPAE cells bound to a CTX-coated plate in a manner similar to 3-CHO cells. This suggests that ␣v3 is indeed a predominant receptor for CTX binding to CPAE cells. In this experiment, we also included CTX A3 for comparison because we have shown recently that CTX A3-induced cell death of cardiomyocytes depends on the binding of CTX A3 to sulfatides, a glycosphingolipid located at the outer leaflet of plasma membranes (10). Interestingly, CTX A3 and flavoridin blocked CPAE cell proliferation in a dose-dependent manner, but CTXs A5 and A6 did not (Fig. 5, B and D). Under low serum conditions, however, CTX A6, but not CTX A5, exhibited observable cytotoxicity (Fig. 5B, inset). Although CTX A5 induced the strongest cell attachment, it did not show detectable cytotoxicity, consistent with the non-cytotoxic property of CTX A5 despite its binding to integrin.
We should emphasize that CTX A3-induced cytotoxicity is cell-dependent and that its targets might also vary depending on the cell system studied. For instance, anti-sulfatide antibody did not block the cytotoxicity of CTX A3 in CHO cells (data not shown) even though the same antibody works well with the effect of CTX A3 in cardiomyocytes and H9C2 myoblasts (10). Surprisingly, despite the binding of CTX A3 to ␣v3 integrin in CPAE cells, anti-␣v3 integrin antibody LM609 exhibited only a slight protection effect on CPAE cell viability (Fig. 5C). Recent progress in understanding the function of integrins has revealed a complicated regulation mechanism among integrins, the glycosphingolipid domain, and endocytosis (51,52). Considering that CTX A3 binds to integrin and the glycosphingolipid domain in the plasma membrane and that it can also be internalized into mitochondria via a still unknown mechanism, future investigations on CTX A3-induced CHO cell death should shed light on the cell signaling process involving integrins and the lipid domain.
Although CTX A5 does not have an RGD motif, we investigated whether CTX A5 affects bone resorption via its binding to ␣v3 integrins. Using RANKL to stimulate murine osteoclast differentiation, we show in Fig. 6 that CTX A5 inhibited osteoclast formation by perturbing the polykaryon fusion required in the late differential stage of osteoclast formation. CTX A5 (10 M) reduced the formation of TRAP-positive MNCs to the basal level (Fig. 6A), but did not show significant cytotoxicity (Fig. 6B). The blocking effect could be observed only when CTX A5 was present in the late stage of osteoclast formation, i.e. when CTX A5 was present either during the entire period of osteoclast formation or during days 4 -6 ( Fig. 6C, lower panel), when the expression of significant ␣v3 integrin starts for MNC formation. There was no significant difference in TRAP intensity in the early stage of osteoclast formation (Fig. 6C, upper panel). Because there was no inhibitory effect when CTX A5 was present during the initial 1-3 days, the result is consistent with the idea that CTX A5 binds to ␣v3 integrin to block differentiation.
␣v3 integrin plays a role not only in osteoclast differentiation, but also in osteoclast adhesion during bone resorption. We studied the effect of CTX A5 binding to osteoclasts on their bone resorption activity. Fig. 7 shows the activity as measured by the resorption area determined by light microscopy and scanning electron microscopy. Significant inhibition (ϳ50%) was detected at a CTX A5 concentration of ϳ2.5 M. Finally, a confocal microscopic study was performed to determine whether CTX A5 can indeed co-localize with ␣v3 as visualized using anti-3 integrin antibody. We found that the integrin overlapped with rhodamine-labeled CTX A5 (Fig. 8B), suggesting that CTX A5 indeed binds to ␣v3 in osteoclasts. We propose that CTX A5 inhibits murine osteoclast differentiation and resorption by binding to ␣v3.
These results also predict that CTX-integrin interaction may be a potential therapeutic target for tissue degeneration induced by CTXs even though the role that integrin binding plays in the CTX-perturbed wound healing process is not clear at this time. CTXs constitute ϳ50% of the weight of the Taiwan cobra venom toxin. The local concentration of CTXs easily reaches a micromolar range at the bitten area because each bite will inject ϳ20 -570 mg of venom (53). Thus, the binding of CTXs to ␣v3 integrin at K d values of ϳ0.3 to 15 M should be biologically relevant in the wound tissue. However, CTX A5 induced cell adhesion of CPAE cells (which expressing ␣v3) without detectable cytotoxic effects. Thus, unlike many other disintegrins that are known to perturb cell proliferation by binding to integrin on the membrane surface, the binding of CTXs to cells through integrins is not sufficient to induce their cytotoxic effect.
We detected only weak binding of CTX A6 to ␣v3 integrin. It is interesting that CTX A6, a CTX homolog identified only cobras caught in the eastern part of Taiwan (54), exhibited weak ␣v3 binding activity. CTX A6 was not cytotoxic in cardiomyocytes, CHO-K1 cells, and H9C2 myoblasts. 4 We tested the effect of CTXs on CPAE cell proliferation as a measure of their cytotoxicity. CTX A6 did not have detectable inhibitory effects on CPAE cell proliferation. However, under low serum conditions, the cytotoxicity of CTX A6 could be detected (Fig. 5). Sequence comparison between CTXs A6 and A3 (Fig. 9A) showed that the amino acid residues located at loop I (Lys-5, Val-7, and Leu-9), loop II (Thr-29), or the tight turn (Pro-15) of CTX A3 might be involved in CTX-integrin interaction, explaining their difference in integrin binding affinity.
It is interesting to point out that the loop I conformation of CTX A6 adopts a type VI turn with a cis-peptide bond between the two prolines of the conserved Leu-Ile-Pro-Pro-Phe sequence for group I CTXs (54). In contrast, CTX A5 was previously classified as a group II CTX based on the absence of a proline residue at position 9. As a consequence of the conformational difference at position 9, the tip of loop I protrudes to the FIGURE 9. Structure and activity relationship of CTX-integrin interaction. A, role of loop I in the differential binding of CTXs A3 and A6 to immobilized integrin as detected by SPR binding measurements. The concentration ranges used were 0.25-10 and 1-30 M for CTXs A3 and A6, respectively. The shaded amino acid residues in the sequence and the side chain in the three-dimensional structure represent the residues possibly involved in the interaction. RU, resonance units. B, role of loop II in the differential binding of T␥ (CTX Tr) and CTX M2 to immobilized integrin as detected by SPR binding measurements. The concentration ranges used were 0.5-10 and 0.2-10 M for T␥ and CTX M2, respectively. C, stereo view of the three-dimensional structure of CTX A5 emphasizing the locations of amino acid residues suggested to affect CTX-integrin interaction. The backbone of CTX A6 is also shown overlaid on CTX A5 to indicate the effect of the cis-Pro peptide bond on the loop I conformation. The boxed amino acid residues in the sequence represent residues of CTX A5 overlapped by residues of CTX A3 and T␥ proposed to be involved in CTX-integrin interaction. Three additional residues, i.e. Glu-17, Asp-59, and Arg-38, are also highlighted in the three-dimensional structure to show possible sites responsible for the metal dependence. The three-dimensional structures of CTX A3 (monomer A; code 1XT3), T␥ (monomer A; code 1TGX), and CTX A5 (monomer A; code 1KXI) are from the Protein Data Bank, and the sequences of CTX A3, CTX A6 (code 1UG4), T␥, CTX M2, and CTX A5 are aligned based on their structures. opposite side of the slightly concave flat molecules between group I and II CTXs (Fig. 9C). We predict that the structure of loop I of CTXs may be responsible for integrin binding.
Group I CTXs with a cis-Pro peptide bond in the loop I region are usually purified from African cobras. To test whether the loop I region is mainly responsible for CTX binding to ␣v3 integrin, we performed a SPR binding study on two group I CTXs, i.e. T␥ (CTX ␥ purified from N. nigricollis) and CTX M2 (Fig. 9B). We select these CTXs because their amino acid sequences are different only in the specific region at the tip of loop II. In contrast to the SPR binding exhibited by other group II CTXs such as CTXs A5 and A3, group I CTXs bound to ␣v3 with two-phase behavior. Such binding behavior could be fitted by a twostage reaction with a conformational change model (BIAevaluation Version 4.1, Biacore International AB) (data not shown). It is therefore tempting to speculate that the binding of integrin to group I CTXs at loop I might induce a conformational change in the CTX molecules, e.g. cis,trans-isomerization of Pro-9, to allow additional binding at the loop II region. The interpretation is also consistent with the fact that CTX M2 bound to integrin in a significantly different manner compared with T␥. The CTX M2 and T␥ molecules show a structural difference only in the loop II region. To reconcile the weak binding of CTX A6 (group I), we observed that those group I CTXs with detectable two-phase binding behavior have a positively charged residue (Arg-28) near the loop II region, whereas CTX A6 has a hydrophobic residue (Val-28). The apparent low affinity of CTX A1 can also be explained by the presence of a negatively charged residue (Asp-30) in this region. Interestingly, the strongest integrin-binding CTX, CTX A5, also has positively charged residues (Lys-28 and Lys-29) in this region. Based on these observations, we suggest that the hydrophobic domain near the tip of loop I and the charged residues flanking the hydrophobic loop II region may be involved in CTX-integrin interaction.
It should be noted that we did not find any acidic residues near the two loop regions of CTX A5. Given the observed metal dependence and inhibition by fibrinogen, it is possible that another acidic residue of CTX A5 such as Glu-17 or Asp-59, which is ϳ15 Å away from Arg-38 (Fig. 9C), might form a three-dimensional motif to substitute for the linear RGD motif to account for the effect (30). Although such a model can be tested in future studies by site-directed mutagenesis, our result indicates that there is indeed a correlation between the structure and integrin binding functions.
We have shown that CTX A5 has a potential biomedical application as an integrin antagonist in the bone resorption model in this study. ␣v3 integrin is expressed in tumor cells, wounds, and inflammatory tissues and in angiogenic endothelial cells. Therefore, it will be interesting to see whether CTX A5 affects inflammation, tumor growth, and angiogenesis in future studies (55). | 6,763.4 | 2006-03-24T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Some aspects of interaction amplitudes of D branes carrying worldvolume fluxes
We report a systematic study of the stringy interaction between two sets of Dp branes placed parallel at a separation in the presence of two worldvolume fluxes for each set. We focus in this paper on that the two fluxes on one set have the same structure as those on the other set but they in general differ in values, which can be both electric or both magnetic or one electric and one magnetic. We compute the respective stringy interaction amplitude and find that the presence of electric fluxes gives rise to the open string pair production while that of magnetic ones to the open string tachyon mode. The interplay of these two leads to the open string pair production enhancement in certain cases when one flux is electric and the other is magnetic. In particular, we find that this enhancement occurs even when the electric flux and the magnetic one share one common field strength index which is impossible in the one-flux case studied previously by the present author and his collaborator in [17]. This type of enhancement may have realistic physical applications, say, as a means to explore the existence of extra dimensions.
Introduction
D brances are one type of non-perturbative stable Bogomol'ny-Prasad-Sommerfield (BPS) solitonic extended objects in superstring theories (for example, see [1]), preserving one half of the spacetime supersymmetries. These object are important or useful mainly because though they are non-perturbative, their dynamics can still be described, when the string coupling is small, by the perturbative open string with its two ends satisfying the usual Neumann Boundary conditions along the brane directions and the so-called Dirichlet boundary conditions along directions transverse to the branes [2]. When two such Dp-branes 1 are placed parallel to each other at a separation and are at rest, there is no net interaction acting between the two and this system, just like either of the Dp branes, is also a stable BPS one, preserving one half of the spacetime supersymmetries.
A Dp brane has its tension and carries also the so-called RR charge. One therefore expects in general an attractive force due to their tensions and a repulsive one due to their RR charges between such two Dp branes. The BPS nature of the Dp brane relates the charge and the tension and as such the sum of these two contributions gives a vanishing net interaction. We can check this by computing the lowest order stringy interaction amplitude from either a closed string tree-level cylinder diagram or equivalently an open string one-loop annulus diagram. In either computation, we have two contributions. The so-called NS-NS contribution, due to the brane tension, is as expected attractive, while the so-called R-R contribution, due to the RR charges, is repulsive. The sum of the two gives an expected zero net interaction by making use of the usual 'abstruse identity' [2].
When each Dp-brane carries fluxes, which can be electric and/or magnetic ones 2 , the interaction is in general non-vanishing. For large brane separation, this interaction, if non-vanishing, has to be attractive since it is due to different branes and the only contributions are from their tensions (different brane charges do not interact). For small brane separation, the story is a bit complicated as we will see. The best description is in terms of the open string one. If there is a non-vanishing interaction, the underlying system breaks all supersymmetries and we expect to have some interesting physics process to occur, especially when the brane separation is small.
From the open string perspective, the open string one-loop annulus diagram can be 1 For having a distance between the two, we need to have p ≤ 8. 2 The electric flux on a Dp-brane stands for the presence of F-strings, forming the so-called (F, Dp) non-threshold bound state [3,4,5,6,7,8,9,10], while a magnetic flux stands for that of co-dimension 2 D-branes inside the original Dp brane, forming the so-called (D(p-2), Dp) non-threshold bound state [11,12,13], from the spacetime perspective. These fluxes are in general quantized. We will not discuss their quantizations in the text for simplicity due to their irrelevance for the purpose of this paper. If the added fluxes on each Dp contain an electric one, this electric flux can provide a force acting on the virtual charge and anti-charge pair to pull them apart, which can also be understood as providing the energy needed to the virtual pair, to make them become real, i.e., the analog of the Schwinger pair production. So we expect the interaction amplitude, in the presence of electric flux(es), not only to be non-vanishing but also to have an imaginary part, giving rise to the open string pair production. In general, the pair production rate is vanishing small and suppressed exponentially by the brane separation. However, when magnetic fluxes are also present in a certain way 3 , this open string pair production rate is greatly enhanced and becomes significant to have potential physical applications. We would like to stress that the present pair production in Type II superstring case is different from that in the Type I superstring case as given in [14,15] . For a single Type II Dp brane, we have a U(1) gauge group and computations give a vanishing open string annulus amplitude as well as a vanishing open string pair production rate even if the brane carries a constant worldvolume electric flux. These vanishing results are due to that the open string is oriented and is therefore charge-neutral in the sense that its two ends carry the respective U(1) charge +1 and −1 with zero-net charge. This is also consistent with the fact that a Type II Dp brane carrying a constant electric flux is a 1/2 BPS non-threshold bound state (F, Dp) as discussed in footnote 2. So this system is stable rather than unstable and the pair production cannot occur. In order to have the open string pair production in Type II, the simplest possible choice is to consider two Dp branes placed parallel at a separation with each carrying a different electric flux. This is the rational for considering such a system of two Dp branes in this paper and as mentioned above already the open strings produced are directly related to the dimensions transverse to the branes. So a detection of the pair production by an observer living on the brane will signal the existence of extra dimensions, for exmple, for p = 3 case.
Given the above rational for the open string pair production in Type II string theories discussed in this paper, the open string pair production discussed in [14,15] is for the charged unoriented open string in Type I superstring with its two ends carrying their respective charge e 1 and e 2 coupled to a constant spacetime background electric field. This background field picks up some U(1) direction inside the non-abelian SO (32) in Type I. Since there are different choices of this U(1) embedding inside SO (32), this may give different values of the charge e 1 and e 2 . So for this case e 1 + e 2 can be nonzero and as such, as discussed in [14,15], there can be non-zero open string pair production rate. In terms of modern D-brane language, we know that the gauge group SO(32) is from the 16 D9 (spacetime filling) branes in type I theory and the open string considered describes certain overall dynamics, characterized by the U(1), of these 16 D9 branes. Consistent with this, the two ends of Type I unoriented open string, apart from the background field directions, obey the usual Neumann boundary conditions. There is no sense of brane separation here. The direct relevance to this case is a D9 in Type IIB. As stressed above, the corresponding Type IIB open string pair production rate always vanishes even if a constant worldvolume electric flux is applied. So the physics is quite different in these two cases.
For other Type II Dp-branes with p ≤ 8, the two ends of open string now obey Dirichlet boundary conditions along directions transverse to the brane in addition to the Neumann boundary conditions along the brane directions. For a system of two Dp branes placed parallel at a separation with each carrying fluxes considered in this paper, the exponential brane separation suppression factor appearing in the interaction amplitudes and the possible open string pair production rates comes from the zero-mode contribution from the Dirichlet directions.
The operator structure of the boundary state for a Dp brane holds true even with the presence of general external fluxes on the worldvolume [10] and using this closed string boundary state approach we can compute the closed string cylinder amplitude between the two Dp branes considered for any constant worldvolume flux. The corresponding open string annulus amplitude can be obtained, for the purpose of obtaining the possible open string pair production rate, simply by using a Jacobi transformation. This gives an advantage over the open string approach adopted in [14,15] for which only pure electric or magnetic field was considered in Type I superstring case. It appears difficult there if both electric and magnetic fields are present.
Without further ado, in this paper we will compute the interaction amplitude for a system of two sets of Dp branes, placed parallel at a separation, with each set carrying two fluxes with the same structure but different in values in the sense specified later on. We will also compute the corresponding open string pair production rate if any and discuss the relevant analytic structures of the amplitude. We will give a complete account of the aforementioned two-flux cases in this paper. Depending on the structure of the two fluxes on each set of the Dp branes, we have three cases to consider: 1) 8 ≥ p ≥ 2, 2) 8 ≥ p ≥ 3 and 3) 8 ≥ p ≥ 4. We will explore the nature (attractive or repulsive) of the interaction at large brane separation and small brane separation, respectively, and study various instabilities such as the onset of tachyonic one at small brane separation. We will determine at which conditions there exists open string pair production and its possible enhancement. We will also speculate possible applications of the enhanced open string pair production.
This paper is organized as follows. In section 2, we provide the basic setup for the computations of various interaction amplitudes for systems considered. In section 3, we compute the interaction amplitude and give a complete analytical analysis of this amplitude for the system of two sets of Dp branes placed parallel at a separation when the two fluxes on each set share one common field strength index. This case requires 8 ≥ p ≥ 2. Here we find a new possibility, when one flux is electric and the other is magnetic, that gives rise to a new pair production enhancement. This possibility will not occur when each set of Dp branes carry only one flux as studied previously in [17]. In section 4, we repeat the amplitude computation and its analysis for the system of two sets of Dp branes in a similar fashion but with the two fluxes on each set sharing no common field strength index when one is electric and the other magnetic or sharing one common field strength index when both are magnetic. This case corresponds to 8 ≥ p ≥ 3. This is the most interesting case for having the great enhancement of open string pair production when there are one electric flux and one magnetic one present on each set of the Dp branes. In particular, when the two electric fluxes are almost identical and the two magnetic fluxes are opposite in direction, the open string pair production rate has a great enhancement which is quite unexpected and this rate is the largest when p = 3. As we will discuss in section 6, this is the case that has potentially realistic applications, for example, one can use this as a means to explore the existence of extra dimension(s) among other things. In section 5, we repeat the same process for the system of two sets of Dp branes again in a similar fashion but now with the two magnetic fluxes on each set sharing no common field strength index. This corresponds to 8 ≥ p ≥ 4 case. We will discuss and conclude this paper in section 6.
The basic setup
In this section, we will provide the basis for computing the lowest order stringy interaction amplitude for a system of two sets of Dp branes placed parallel at a separation with each set carrying certain fluxes. For this, we consider first the closed string cylinder diagram with Dp branes represented by their respective boundary state |B [18,19]. For such a description, there are two sectors, namely NS-NS and R-R sectors. In each sector, we have two implementations for the boundary conditions of a Dp brane, giving two boundary states |B, η , with η = ±. However, only the combinations are selected by the Gliozzi-Scherk-Olive (GSO) projection in the NS-NS and R-R sectors, respectively. The boundary state |B, η for a Dp-brane can be expressed as the product of a matter part and a ghost part [20,21], i.e. where and the overall normalization c p = √ π 2π √ α ′ 3−p . As discussed in [10], the operator structure of the boundary state holds true even with the presence of external fluxes on the worldvolume and is always of the form and for the NS-NS sector and for the R-R sector. The ghost boundary states are the standard ones as given in [20], independent of the fluxes, which we will not present here. The M-matrix 4 , the zeromodes |B X 0 and |B, η 0R encode all information about the overlap equations that the string coordinates have to satisfy. They can be determined respectively [18,10] as 4 We have changed the previously often used symbol S to the current M to avoid a possible confusion with the S-matrix in scattering amplitude.
for the bosonic sector, and for the R-R sector. In the above, the Greek indices α, β, · · · label the world-volume directions 0, 1, · · · , p along which the Dp brane extends, while the Latin indices i, j, · · · label the directions transverse to the brane, i.e., p + 1, · · · , 9. We defineF = 2πα ′ F with F the external worldvolume field. We also have denoted by y i the positions of the D-brane along the transverse directions, by C the charge conjugation matrix and by U the matrix with the symbol ; ; denoting the indices of the Γ-matrices completely anti-symmetrized in each term of the exponential expansion. |A |B stands for the spinor vacuum of the R-R sector. Note that the η in the above denotes either sign ± or the worldvolume Minkowski flat metric and should be clear from the content. The vacuum amplitude can be calculated via where D is the closed string propagator defined as Here L 0 andL 0 are the respective left and right mover total zero-mode Virasoro generators of matter fields, ghosts and superghosts. For example, represent contributions from matter fields X µ , matter fields ψ µ , ghosts b and c, and superghosts β and γ, respectively, and their explicit expressions can be found in any standard discussion of superstring theories, for example in [22], therefore will not be presented here. The above total vacuum amplitude has contributions from both NS-NS and R-R sectors, respectively, and can be written as Γ = Γ NSNS + Γ RR . In calculating either Γ NSNS or Γ RR , we need to keep in mind that the boundary state used should be the GSO projected one as given earlier. For this purpose, we need to calculate first the amplitude Γ(η ′ , η) = B ′ , η ′ |D|B, η in each sector with η ′ η = + or−, B ′ = B(F ′ ) and B = B(F ). In doing so, we can setL 0 = L 0 in the above propagator due to the fact thatL 0 |B = L 0 |B , which can be used to simplify the calculations. Actually, Γ(η ′ , η) depends only on the product of η ′ and η, i.e., Γ(η ′ , η) = Γ(η ′ η). In the NS-NS sector, this gives Γ NSNS (±) ≡ Γ(η ′ , η) when η ′ η = ±, respectively. Similarly we have Γ RR (±) ≡ Γ(η ′ , η) when η ′ η = ± in the R-R sector. We then have Given the structure of the boundary state, the amplitude Γ(η ′ η) can be factorized as where we have replaced the c p in the boundary state by nc p with n an integer to count the multiplicity of Dp branes. In the above, we have The above ghost and superghost matrix elements A bc and A βγ (η ′ η), both independent of the fluxes, can be calculated to give, and in the NS-NS sector while in the R-R sector where R0 B sgh , η ′ |B sgh , η 0R denotes the superghost zero-mode contribution which requires a regularization along with the zero-mode contribution of matter field ψ in this sector. We will discuss this regularization later on.
For the matrix elements of matter part, i.e. A X and A ψ (η ′ η) given in (15), we can also calculate them with the matrix M given in (7). Their computations can be greatly simplified if the following property of matrix M is used, where T denotes the transpose of matrix. For a system of two sets of Dp branes, placed parallel at a separation y, with one carrying fluxF ′ and the other carrying fluxF , we can then have, (20) and in the NS-NS sector while in the R-R sector where R0 B ′ ψ , η ′ |B ψ , η 0R denotes the zero-mode contribution in this sector mentioned earlier. In the above, |z| = e −πt , V p+1 denotes the volume of the Dp brane worldvolume, λ α are the eigenvalues of the matrix w (1+p)×(1+p) defined as where matrix M ′ and M are the one given in (7) when the corresponding fluxes areF ′ and F , respectively, and I stands for the unit matrix. The orthogonal matrix W , satisfying results from an unitary transformation of the certain oscillator modes which is a trick used in simplifying the evaluation of the matrix elements of matter part from the contribution of oscillator modes. Let us take the following as a simple illustration for obtaining the matrix W . In obtaining A X , we need to evaluate, for given n > 0, the following matrix element, 0|e − 1 where |0 stands for the vacuum. We first defineα ′ n = M ′α n and this givesα ′T −n = α ′+ n =α + n M ′+ =α T −n M ′T with + denoting the hermitian conjugate. Since M ′ is real and satisfies (7), we have M ′+ = M ′T andα ′ has the same property asα. So we can havẽ α −n = (M ′T )α ′ −n . Substituting this into (25) and also dropping the prime onα ′ , we have (25) as where W is precisely the one given in (23). Since W is orthogonal, it can be diagonalized using a unitarity matrix V of the following form such that In the above, and v is a (1 + p) × (1 + p) unitary matrix. Now we further defineα ′ n = V +α n and α ′ n = V + α n , the evaluation of (26) becomes then as easy as the case without the presence of fluxes, giving the results of (20) to (22), respectively.
We would like to point out that a similar approach to the above in simplifying the computations can also be adapted for a system of Dp ′ and Dp, placed parallel at a separation, with each carrying fluxes for p − p ′ = 2k with k = 0, 1, 2, 3. What we have discussed above corresponds to k = 0 case.
Given the general fluxesF ′ andF , therefore W from (23) and (7), what can we say about the eigenvalues λ α of W for α = 0, 1, · · · p? Since the matrix W µ ν satisfies (24) and it is an identical matrix when there is no flux present, so we must have det W = 1. Then from (28), we have λ 0 λ 1 · · · λ p = 1, the product of the eigenvalues being unity. If we take trace of matrix W , we end up p α=0 λ α = trw from TrW = p α=0 λ α +(9−p). Here we use the big trace symbol Tr denoting the trace of W and the small trace symbol tr denoting the trace of w defined in (23). Further we can have p α=0 λ n α = trw n with n = 1, 2, · · · . We also have W −1 = V W −1 0 V + and W −1 = W T . This gives p α=0 1/λ n α = p α=0 λ n α . In general, not all these equations are independent, especially those with n > 9 from those with n ≤ p. We can determine the eigenvalues using certain number of the following equations, λ 0 λ 1 · · · λ p = 1, where we can limit 1 ≤ n ≤ p. We will use these equations to determine the eigenvalues λ α for the specific systems considered in the following sections 5 .
With the above preparation, we are ready to give the general structure of Γ NSNS (η ′ η) in the NS-NS sector and that of Γ RR (η ′ η) in the R-R sector, respectively, for the system of two sets of Dp branes, placed parallel at a separation, carrying the general respective fluxesF ′ andF . For the NS-NS sector, using (14), (16), (17), (20) and (21), we have then while in the R-R sector, we have from (14), (16), (18), (20) and (22), where the zero-mode contribution whose respective explicit relation with the fluxes will be given in the specific cases considered in the following sections. In obtaining the above, we have used the following relations where the explicit expression for c p as given right after (3) has been used and |z| = e −πt as given earlier.
So with the ready form of the amplitude (31) and (32), the amplitude computations are just boiled down to the determination of the eigenvalue λ α and the evaluation of the zero-mode matrix (33) once the worldvolume fluxes are given. In the following three sections, we will compute the explicit interaction amplitude and analyze its analytical structure for each of the three cases given in the Introduction.
The 8 ≥ p ≥ 2 case
In this section, we will consider the following two subcases with the corresponding two non-vanishing field strength components on each set of Dp branes sharing a common index. Without loss of generality, it can be cast in either of the following two structureŝ In the first subcase, we have two electric fluxesF 01 = −F 10 = −f 1 andF 02 = −F 20 = −f 2 , both of which share a common time index '0 ′ while in the second subcase, we have one electric fluxF 01 = −F 10 = −f and a magnetic oneF 12 = −F 21 = −g, both of which share a spatial index '1 ′ . In what follows, let us consider each in order.
The electric-electric case
In this subsection, we first consider theF ′ andF with the structure given in (35) to compute the interaction amplitude and subsequently to determine the open string pair production rate.
The interaction amplitude
Let us begin with computing the interaction amplitude. For this, we need to compute the corresponding M ′ and M via (7), respectively, and then use (23) to determine w. From this w, we have the eigenvalues from (30) as λ 0 λ 1 λ 2 = 1, and the rest λ 3 = · · · = λ p = 1. In obtaining the last equality in the second line above, we have used the equation in the first line. We don't actually need to solve the eigenvalues λ 0 , λ 1 , λ 2 from (37) and the relations satisfied by them are just needed to give the amplitude which we will compute now. From (31), we have the NS-NS amplitude where we can simplify the last product factor involving the eigenvalues λ 0 , λ 1 and λ 2 , using the eigenvalue relations given in (37), as where .
With this, we can re-express the above amplitude as Then the total amplitude from the NS-NS sector is where By the same token, we can have the R-R sector amplitude from (32) as Following the regularization scheme given in [23,20], we can have in the R-R sector, using the flux (35) and the expression for the R-R sector zero-mode (9) along with (10), So the total amplitude from the R-R sector is where The total amplitude is then where A n (±) are defined in (43) and B n in (47). Now we try to express this amplitude in terms of the Dedekind η-function and various θ-functions with their standard definitions as given, for example, in [24] . For this, we set the parameter λ = e 2πiν . We have then from (40) .
With this, the total amplitude (48) can be expressed as where in the last equality the following identity has been used which is a special case of more general identity given in [25]. Note that one can show , given that f 2 1 + f 2 2 < 1 and f ′2 1 + f ′2 2 < 1, which implies that the parameter ν is actually imaginary. So we can set ν = iν 0 with 0 < ν 0 < ∞ and (49) becomes , From the last equality in (50), we have For large y, the major contribution to the amplitude is from the large t-integration and we have, for p < 7, which gives indeed an attractive force 6 as anticipated in the Introduction. Here Ω q denotes the volume of unit q-sphere. For small y, the small t integration becomes important. The only factor which can become negative at small t is the one (1 − 2|z| 2n cosh 2πν 0 + |z| 4n ) in the denominator of the infinite product in the integrand in (53) since now |z| ∼ 1 and cosh 2πν 0 > 1. When this factor becomes negative, the sign for the infinite product remains unclear. So it is unclear about the nature of the interaction at small brane separation in terms of the closed string cylinder variable t.
The open string pair production
For small y, the open string description is more suitable and the underlying physics becomes more clear. So let us now pass from the above closed string cylinder amplitude to the open string annulus one via the Jacobi transformation t → t ′ = 1/t. For this, we need the following relations for the Dedekind η-function and the θ 1 -function, Using these two relations with τ = it and t ′ = 1/t, we have from the last equality of (48) where in the first equality we have set ν = iν 0 and in the second equality we have dropped the prime on t and expressed out the Dedekind η-function and the θ 1 -functions. Here |z| is the same as before, i.e. |z| = e −πt < 1. Note that from either (53) or (56), we can check that Γ = 0 only if ν 0 = 0, which actually implies In other words, the underlying system is still a 1/2 BPS one just like each set of the Dp branes carrying the same two electric fluxes 7 . When Γ = 0, the integrand in (56) has no exponential growing factor for large t, therefore there is no open string tachyon mode to appear for the present case of only having electric fluxes present. For small y, we see that all the other factors in the integrand in the second equality of (56) are positive except for the factor sin πν 0 t in the denominator which oscillates between +1 and −1 as the variable t increases. So this makes the small separation interaction nature obscure but also interesting. It is precisely due to the presence of this factor, which gives rise to an infinity number of simple poles along the positive t-axis in the integrand, that signals a new physics process to occur. These simple poles happen when sin πν 0 t vanishes while the factor sin πν 0 t/2 does not. We therefore have them at Each of these simple poles actually tells the production of a pair of open strings as described in the Introduction under the action of the applied electric fluxes [14,26], whose masses are proportional to the brane separation. When the brane separation is large, the probability in producing this kind of open string pairs is small since the mass for each pair is also large and therefore it is difficult to produce them. In this sense, the underlying system has almost no decay process to occur and the only thing left is the interaction between the two set of Dp branes. We are therefore certain about the nature of the interaction. However, for small brane separation, the pair production can become significant and the system decays. In other words, the amplitude has actually an imaginary part reflecting this decay and also giving the pair production rate. The infinite number of simple poles appearing in the integrand in (56) indicates the occurrence of the open string pair production. This process will continue and the energy of the system will be carried away by the pair production until f 1 = f ′ 1 and f 2 = f ′ 2 for which the system reaches its stable 1/2 BPS one.
The rate of open string pair production per unit worldvolume is the imaginary part of the amplitude (56) in the second equality, which can be obtained as the sum of the residues of the poles of the integrand times π following [14,26] and is given as We here try to understand this open string pair production rate. As anticipated, for given k, the larger the brane separation is, the smaller the rate. For given y, the larger the k is, the smaller the rate, too. This can be understood as for large k, the open string is produced with a tension (2k + 1) times the fundamental string tension, having also a large mass, therefore more difficult to be produced. For given k and y, the larger the ν 0 is, which can also imply a larger factor (52), the larger the rate, too. In particular, when |f a | and |f ′ a | with a = 1, 2 are given, f a f ′ a < 0, i.e., when the two electric fluxes on one set of Dp branes are opposite in directions to their correspondences on the other set, will give the largest ν 0 . When either f 2 1 + f 2 2 → 1 or f ′2 1 + f ′2 2 → 1 or both, i.e., to their respective critical values, ν 0 → ∞ and the rate W blows up, the onset of pair production instability.
For very small ν 0 ≪ 1, the rate (58) can be approximated by the first k = 0 term as where we have used (52) for ν 0 ≪ 1. This rate is vanishing small and has no practical physical significance. Note that when f a = f ′ a (now ν 0 = 0), the rate W vanishes and this is consistent with the underlying system being a 1/2 BPS stable one.
Before we close this subsection, we would like to point out that the present open string production (58) results from the virtual open string pair connecting the two sets of Dp branes placed parallel at a separation under the action of the applied two electric fluxes on each set. This is different from the open string pair production discussed in [14,26] for which the virtual open string pair with their ends attaching on the same branes, either on D26 branes in the bosonic case or on the D9 branes in Type I case. If we really want to draw the analog, their rate is the one on each isolated set of Dp branes carrying electric flux(es). For the present case, this rate actually vanishes since the open strings are neutral ones. This is also consist with the fact that each set of Dp branes carrying the electric fluxes is actually 1/2 BPS stable non-threshold bound state (see footnote 7) and therefore there should be no open string pair production. Note that the open string pair production W (58) also vanishes when f a = f ′ a (now ν 0 = 0) and this is consistent with the underlying system being a 1/2 BPS stable one, too. Further, unlike the amplitude and the open string pair production given in [14,26] or in [32] for which the electric flux(es) is (are) along the same or opposite direction, the present ones are for the electric fluxes in different direction, for example, the total electric flux in one set of Dp branes has in general different magnitude and direction from that in the other set of Dp branes. So the results in [32] is just a special case of the present ones when we take, say,
The electric-magnetic case
We now repeat the same process as in the previous subsection but having theF ′ and F with the structure given in (36). As we will see, this case 8 is more richer in physics and has actually three subcases to consider. In particular, we find an open string pair production enhancement which does not appear in the one-flux case considered previously by the present author and his colloaborator in [17].
The interaction amplitude
So we have the fluxF ′ on one set of the Dp branes and the fluxF on the other set for the present subcase, respectively, aŝ .
(60) 8 Note that each such Dp is also 1/2 BPS non-threshold bound state which can be obtained from, say, 1/2 BPS non-threshold bound state ((F, D0), D2) given in [8], by T-dualities along directions transverse to this bound state. Here the F-string is along one of D2 directions.
Following the same steps as in the previous subsection, we have the eigenvalues and λ 3 = · · · = λ p = 1. The zero-mode contribution (33) in the R-R sector to the amplitude (32) in the present case can be evaluated as Using (31) and (32) as well as (13), we can have the present total tree-level closed string cylinder interaction amplitude as where A n (±) and B n are also given by (43) and (47), respectively, but for now By setting λ = e 2πiν , we then have ( With this, the above amplitude (63) can also be cast in terms of θ-functions and Dedekind η-function as where in the second equality we have also used the identity (51) for various θ-functions, in the last equality we have used the explicit expressions for the Dedekind η-function and the θ 1 -function and again |z| = e −πt < 1.
For large y, the main contribution to the amplitude comes again from the large t integration for which the infinite product can be approximated by 1. Then the integration can be carried out and is finite for p < 7 as in the previous case. This gives Γ ∝ 1/y 7−p > 0, an attractive interaction as expected. For further analysis, we need to consider the following three subcases: 2 and 1 − f f ′ + gg ′ < 0. We now consider each of these subcases in order. Subcase 1): For this, we have from (65) that ν = ν ′ 0 is real and falls in the range of 0 < ν ′ 0 < 1. Also from this equation and from the last equality in (66), we have that every factor in the integrand in the last equality of (66) is positive, therefore the interaction is attractive, resembling a pure magnetic case. For this reason, we expect to see an open string tachyon to appear at small brane separation. For this, we need to re-express the amplitude as the open string annulus one via the Jacobi transformation t → t ′ = 1/t. Using (55), the open string annulus amplitude can be obtained from the second equality of (66) as where in the second equality we have dropped the prime on t and again |z| = e −πt < 1. Each factor in the above integrand is also positive for t > 0, noting that This again gives Γ > 0. The integrand has no simple poles along the positive t-axis as expected. For large t (corresponding to small y), we have implying the appearance of an open string tachyon mode as expected [27,28]. The tachyonic instability will be onset and the tachyon condensation will occur when y ≤ π √ 2να ′ [29,30]. Subcase 2): For this case, we have ν = iν 0 with 0 < ν 0 < ∞. Now (65) becomes For this case, the effect of electric-fluxes dominates over that of magnetic ones. The amplitude from (66) is now For large y, once again only large t-integration is important and this gives the finite amplitude Γ ∝ 1/y 7−p > 0 for p < 7, which is attractive. For small y, the small t-integration can be important. Then the factor (1 − 2|z| 2n cosh 2πν 0 + |z| 4n ) in the denominator of the infinite product in the integrand can become negative and this makes the sign of the amplitude indefinte. Our experience tells that this signals new physics, i.e., the open string pair production, to occur. For this to be manifest, we need to pass the above tree-level closed string cylinder amplitude to the open string annulus one via the Jacobi transformation t → t ′ = 1/t. For this, we also need to use the relations for the Dedekind η-function and the θ 1 -functions (55). We then have where in the second equality we have dropped the prime on t and here |z| = e −πt < 1.
, the integrand in the second equality of (71) looks identical to that in (56). So the physics is the same. For example, we have also an infinite number of simple poles of the integrand occurring at t k = (2k + 1)/ν 0 with k = 0, 1, · · · and the open string pair production rate is For small ν 0 ≪ 1, the above rate can be approximated by the leading k = 0 term as where we have used (69) for small ν 0 . The only difference here from its counterpart in (59) is that the magnetic fluxes appear to give some enhancement of this rate. The discussion about the production rate goes also the same as in the electric-electric case and will not repeat it here.
Subcase 3)
: This is the case which gives the open string pair production a significant enhancement and has not been seen previously, for example, as in the one-flux case [17].
For this case, ν = 1 − iν 0 with 0 < ν 0 < ∞. We then have from (65) , The amplitude (66) in the second and third equalities, respectively, becomes now where in obtaining the first equality from the second equality of (66) we have used the identities θ 1 (1 + ν|τ ) = −θ 1 (ν|τ ) = θ 1 (−ν|τ ) and θ 1 ( 1+ν 2 |τ ) = θ 2 ( ν 2 |τ ). As in the previous cases, the large y amplitude gives an attractive interaction for p < 7. For small y, we need to pass this amplitude to the open string annulus one via the Jacobi transformation t → t ′ = 1/t. Here in addition to the relations given in (55) for the Dedekind η-function and the θ 1 -function, we need also the following for θ 2 -function as The open string annulus amplitude is then where in the second equality we have dropped the prime on t and again |z| = e −πt < 1.
There are two dramatic differences from the previous subcase (71): 1) The integrand has a factor sin πν 0 t in its denominator but without the presence of sin 4 πν 0 t/2 in the numerator.
This sin πν 0 t factor gives then an infinite number of simple poles of the integrand along the positive t-axis at 2) There is an extra exponential factor e πt in the integrand which indicates an open string tachyon mode and the onset of tachyonic instability will occur when y ≤ π √ 2α ′ . These two features are precisely the ones for which one will see when each set of Dp branes carries an electric flux and a magnetic one but the two do not share a common field strength index. We will discuss this in the following section.
Our previous examples already show that the electric flux(es) are responsible for the open string pair production while the magnetic one(s) are for the open string tachyon mode. Then the question is how to understand the appearance of the open string tachyon mode in the present subcase. The simplest is to note that our original ν-parameter is given as ν = 1 − iν 0 , which is complex. The real part '1 ′ , which is due to magnetic fluxes, actually gives rise to the factor e πt , therefore the open string tachyon mode. Let us trace this. Note that 1 gives ν = 1 − iν 0 . However, the real part '1 ′ is precisely due to 1 − f f ′ + gg ′ < 0. In the one-flux case considered in [17], we have either f = 0, g = 0, f ′ = 0, g ′ = 0 or the other way around, then 1 − f f ′ + gg ′ = 1 which can never be less than zero. Therefore this is consistent with what had been found there. That 1 − f f ′ + gg ′ < 0 can hold is precisely due to the presence of the magnetic flux(es). If both g = g ′ = 0, then 1 − f f ′ + gg ′ < 0 would imply 1 < f f ′ which cannot be true since from 1 − f 2 > 0 and 1 − f ′2 > 0 we can have f 2 f ′2 < 1. Let us now assume one of them being zero, say, g ′ = 0. We then still need f f ′ > 1 from 1 − f f ′ + gg ′ < 0. From 1 − f 2 + g 2 > 0 and 1 − f ′2 > 0, we have |f f ′ | < 1 + g 2 which can be consistent with f f ′ > 1. In this case, all we need is to have 1 < f f ′ < 1 + g 2 . If both g and g ′ are non-zero, we have In other words, so long there is a magnetic flux present, 1 − f f ′ + gg ′ < 0 can hold, which gives rise to the real part '1 ′ in ν = 1 − iν 0 , therefore the open string tachyon mode.
As before, the simple poles (78) give rise to the open string pair production at each of them and the pair production rate can be calculated, by the same token, to be For large ν 0 , the discussion goes the same as before and we will not repeat it here. Our focus here is on ν 0 ≪ 1 and we will see the first example of rate enhancement discussed in the present paper. When ν 0 ≪ 1, the rate can be approximated by the leading k = 1 term as We now compare the present pair production rate with the one given in (72). For this, we assume the same ν 0 in both cases and also the same Then the present rate over the previous one gives a factor e π/ν 0 /8, which can be very large for ν 0 ≪ 1, a great enhancement. Note that the smallest p = 2 gives the largest rate when the fluxes are the same. For separation y = π √ 2α ′ + ∆ √ α ′ with ∆ ≪ √ 2 ν 0 , the rate (80) is where we have expressed f, g and f ′ , g ′ in terms of their respective a, θ and a ′ , θ ′ as given in footnote (9). For small ν 0 , since both |a| and |a ′ | with aa ′ < 0 can still take large values, so this rate can still be large.
In summary, we have learned for various systems considered in this section that the electric flux gives rise to the open string pair production while the magnetic one gives rise to a tachyon mode, which can have the onset of tachyon instability when the brane 9 There is no problem with this assumption. For checking this easily, we set f = a sinh θ, g = a cosh θ and f ′ = a ′ sinh θ ′ , g ′ = a ′ cosh θ ′ from the conditions 1 − f 2 + g 2 > 0 and 1 − f ′2 + g ′2 > 0, respectively, for the former subcase. For the present subcase, we use a bar on each of them to make a distinction in notations. Then the assumption gives two conditions: a 2 + a ′2 + a 2 a ′2 =ā 2 +ā ′2 +ā 2ā′2 and 2 = −aa ′ cosh(θ − θ ′ )−āā ′ cosh(θ −θ ′ ). For the former case, cosh πν 0 > 1 from (69) gives 1 + aa ′ cosh(θ − θ ′ ) > (1 + a 2 )(1 + a ′2 ) which implies aa ′ > 0. By the same token, the present subcase from (74) gives −āā ′ cosh(θ −θ ′ ) > 1 + (1 +ā 2 )(1 +ā ′2 ) > 2 which impliesāā ′ < 0. If we setā = a,ā ′ = −a ′ , for example, the above first equation is satisfied. The second one can also be satisfied by choosinḡ separation is small, in terms of the open string annulus diagram description. In the case of one electric flux and one magnetic one carrying by each set of Dp-branes considered in this section, when the two fluxes satisfy certain relations specified in the subcase 3) in subsection 3.2, we find that the open string pair production can be significantly enhanced and this may have a potential realistic application which we will discuss in the discussion and conclusion section.
The rate enhancement found in subcase 3 is due to the sign change of 1 − f f ′ + gg ′ from positive to negative in comparison to the subcase 2 in subsection 3.2. So we have the sign change of the R-R contribution to the amplitude and this appears to be like the brane/anti-brane system against brane/brane system in spirit. The following discussion indicates that this is not the case. This sign change of 1 − f f ′ + gg ′ is entirely due to the added electric and magnetic fluxes and the original two sets of Dp branes are not changed at all (just two sets of Dp branes, not one set of Dp and one set of anti-Dp). As stressed in subcase 3, in addition to the electric flux f on one set of Dp and the electric flux f ′ on the other set of Dp, we have to have at least one magnetic flux present on one of the two sets of Dp branes to have the enhancement to occur. As discussed in subcase 3, we can set g ′ = 0 but keep g = 0. Given |f | < 1 + g 2 and |f ′ | < 1, we can have 1 < f f ′ < 1 + g 2 for which the two electric fluxes point to the same direction and now 1 − f f ′ + gg ′ < 0. As mentioned in footnote 2 of this paper, it is well-known that a constant worldvolume electric flux stands for the fundamental string while a constant magnetic flux stands for a D(p − 2) brane inside the Dp brane. Given these, the Dp carrying the constant electric flux f ′ stands for a 1/2 BPS non-threshold bound state (F, Dp) while the one carrying the constant electric flux f and the magnetic flux g stands for a 1/2 BPS non-threshold bound state ((F, D(p -2)), Dp) as mentioned in footnote 8. The Dp in (F, Dp) is identical to the Dp in ((F, D(p -2)), Dp) and if we restrict f f ′ > 0, the F in (F, Dp) points to the same direction as the F in ((F, D(p -2)), Dp). If we keep fixed both f ′ and g, we have the subcase 2 if 0 < f f ′ < 1 for which 1 − f f ′ > 0 and the subcase 3 if 1 < f f ′ < 1 + g 2 for which 1 − f f ′ < 0. The two subcases differ only by the change of the magnitude of the electric flux f and in either case the ((F, D(p −2)), Dp) bound state is not the anti-system of (F, Dp) in the usual sense. The sign change of R-R amplitude is due to the combined result of interactions of constituent branes in the two bound states. The present system has the advantage over the brane/anti-brane one in that it has a minor instability rather than highly unstable and as such the enhanced open string pair production can have the potential to be detected by an observer living on set of the branes.
We would like to stress that the open string pair production as well as its enhancement and the tachyon mode are due to the open strings connecting the two sets of the Dp branes, not the ones with their both ends on the same set of Dp branes. Since each set of the Dp branes with fluxes by themselves are still 1/2 BPS, we don't expect each set as an isolated system to have the open string pair production to occur even when the flux(es) they carry are electric. This is also consistent with the fact that there is no pair production for neutral open string [14,15]. In the following section, we will provide more evidence to support what has been found in this section when each set of Dp branes carry two fluxes with structures different from what has been considered in this section.
The 8 ≥ p ≥ 3 case
In this section, we will address the same issues as in the previous one but with the following flux structures,F So we will also have two subcases to consider. In the first subcase, we have the electric fluxF 01 = −F 10 = −f and the magnetic fluxF 23 = −F 32 = −g with the rest vanishing. The two non-vanishing fluxes do not share a common field strength index. While for the second subcase, we have two magnetic fluxes: the magnetic fluxF 12 = −F 21 = −g 1 and the other magnetic oneF 23 = −F 32 = −g 2 . These two magnetic fluxes share a common field strength index '2 ′ . In what follows, we will consider each in order.
The electric-magnetic case
We consider the first subcase with two fluxes: one electric and the other magnetic. The p = 3 case has already been studied in a recent paper [31] by the present author. We here discuss the general p for 3 ≤ p ≤ 8. Specifically, we have one set of Dp branes carrying the fluxF ′ and the other carrying the fluxF aŝ where both of them are (1 + p) × (1 + p) matrices. With them, as before, the eigenvalues can be determined to be where we have set λ 0 = λ, λ 1 = λ −1 , λ 2 = λ ′ , λ 3 = λ ′−1 and λ 4 = · · · = λ p = 1. The matrix element for zero-mode in the R-R sector can also be determined to be With these, we have, from (31), in the NS-NS sector while in the R-R sector, we have, from (32), where we have used (86) for the zero-mode matrix element and So the GSO projected amplitude in the NS-NS sector is while the GSO projected amplitude in the R-R sector is We have then the total amplitude where A n (±) and B n are defined in (88) and (90), respectively. Let us try to express this amplitude in terms of various θ-functions and the Dedekind η-function. For this, let us define, Using (85), we have , where we have defined ν = iν 0 with 0 < ν 0 < ∞ and ν ′ = ν ′ 0 with 0 < ν ′ 0 < 1. Note that when either f or f ′ reaches its critical value 10 of unity, ν 0 → ∞. With these, the total amplitude (93) can be expressed as where in the last equality we have used the following identity which is again a special case of more general identity given in [25]. We can now set ν = iν 0 and ν ′ = ν ′ 0 in the last equality of (96) and use the explicitt expressions for the θ 1 function and the Dedekind η function to have For large brane separation y, this amplitude gives a finite positive one Γ ∝ 1/y 7−p > 0 for p < 7, implying an attractive interaction, as expected. As can be seen, Γ = 0 only if cosh πν 0 − cos πν ′ = 0 whose only solution is ν 0 = ν ′ 0 = 0. This gives f = f ′ , g = g ′ and the underlying system is still a 1/2 BPS state.
For small brane separation y, we expect also the open string pair production to occur since we have an electric flux present and the best description is in terms of the open string annulus variable which can be obtained via the Jacobi transformation t → t ′ = 1/t. Using the relations (55) for the θ 1 -function and the Dedekind η-function, we have the open string annulus amplitude from the second equality in (96) as where we have set ν = iν 0 , ν ′ = ν ′ 0 and in the second equality we have dropped the prime on t and |z| = e −πt < 1. Let us examine the behavior of the integrand in the second equality above. The integrand has the following divergent behavior if y < π 2ν ′ 0 α ′ , signaling the onset of tachyonic instability [30,29]. The appearance of the exponential growing factor e πν ′ 0 t for large t in the integrand indicates the existence of an open string tachyon mode which is due to the magnetic fluxes. This integrand blows up also when the factor sin πν 0 t in the denominator vanishes along the positive t-axis at This blowing-up behavior actually gives rise to new physics at each of the infinite number of simple poles, indicating the production of an open string pair under the action of electric fluxes applied. This implies that the amplitude has an imaginary part. The pair production rate per unit Dp-brane worldvolume is the imaginary part of the amplitude, which can be obtained as the sum of the residues of the poles of the integrand in (99) times π following [14,26] and is given as We now discuss certain properties of this rate. First the odd k gives a positive contribution to the rate while the even k gives a negative one. The k = 1 gives the leading positive contribution to the rate. For given fluxes (therefore also ν 0 and ν ′ 0 ), the larger the brane separation y is, the larger the mass of the created open string (the string tension times the brane separation) is. Moreover for given y, the larger the k is, the larger the string tension (k times the fundamental string tension) is and so also the larger the mass is. Either case implies more difficulty to produce the open string pair. This is reflected by the exponentially suppressed factor exp[−k y 2 /(2πα ′ ν 0 )] in the rate (102). Note that this rate appears valid only for y > π 2ν ′ 0 α ′ since the term for large k would diverge, due to the open string tachyon mode mentioned in (100). Now for fixed k and y, the parameters ν 0 and ν ′ 0 can be re-expressed from (95) as Note that 0 < ν 0 < ∞ and |f |, |f ′ | < 1. From the above, we can see that the larger |f | and |f ′ | with f = f ′ are, the larger ν 0 is. This is particularly true if f f ′ < 0. Moreover, when either |f | or |f ′ | reaches its critical value of unity, ν 0 → ∞. When both reach their critical values but with f f ′ < 0, ν 0 → ∞. When both reach their critical values but with f f ′ > 0, we can set f = ±1 ∓ ǫ and f ′ = ±1 ∓ ǫ ′ with both ǫ → 0 + , ǫ ′ → 0 + . For this case, ν 0 → ∞ only if ǫ/ǫ ′ → 0 or ∞. For magnetic fluxes, we have 0 < ν ′ 0 < 1 and |g|, |g ′ | < ∞. When both |g| and |g ′ | are very small or very large with gg ′ > 0, ν ′ 0 → 0. When both are very large but with gg ′ < 0, ν ′ 0 → 1. So we have 0 < ν ′ 0 < 1/2 for −1 < gg ′ < ∞, ν ′ 0 = 1/2 for gg ′ = −1 and 1/2 < ν ′ 0 < 1 for −∞ < gg ′ < −1. The rate will be larger if we have a larger ν ′ 0 and a larger |g − g ′ |. When gg ′ < 0, the larger both |g| and |g ′ | are, the larger ν ′ 0 (1/2 < ν ′ 0 < 1) and |g − g ′ | are. With respect to the above fluxes, we have three cases to consider, We here discuss each of them in order. Case I: Unless both |g| and |g ′ | are 11 very small or very large but with gg ′ > 0, we have in general ν ′ 0 ∼ O(1). This case requires a large ν 0 , therefore large electric fluxes |f | and |f ′ | with f = f ′ . From (102), it is clear that the larger the ν 0 is, the larger each term in the sum and so the larger the rate is. Here each odd k term gives a larger positive contribution while each even k term gives also a larger but almost vanishing contribution to the rate (note that the even k term is in general negative). This is expected. In particular, for any of the cases discussed above with the critical electric flux or fluxes and ν 0 → ∞, the rate blows up, giving the onset of pair production instability.
Case II: This case says ν 0 ∼ O(ν ′ 0 ) ∼ O(1) and the only possible enhancement of the rate is due to the factor |g − g ′ |. For a small |g − g ′ |, the rate is small too. For a large |g − g ′ |, the rate can be significant for a small brane separation but is still small for a large brane separation.
Case III: This case must imply that ν 0 ≪ 1 since 12 0 < ν ′ 0 < 1. One in general would expect a vanishing small rate as the pure electric flux case given in [32] by the present author and his collaborators. It turns out that the story here is quite different and the added magnetic fluxes give an exponential enhancement of the rate via the tachyon mode discussed earlier. A special p = 3 case has been reported recently by the present author in [31]. The simplified one-flux case was also given a while ago by the present author and his collaborator in [17]. We here give a discussion for a general p and with fixed two fluxes, one electric and one magnetic, on each set of Dp branes, with the given requirement. In other words, we have here fixed ν ′ 0 = 0 and ν 0 = 0 with ν ′ 0 /ν 0 ≫ 1. With a very small ν 0 , the infinite product for each k in the sum in (102) can be approximated as unity. Moreover with ν ′ 0 /ν 0 ≫ 1, we can approximate the rate (102) as where the exponentially large factor exp(kπν ′ 0 /ν 0 ) is due to the open string tachyon mode discussed in (100). Let us compare this rate, for the same small ν 0 , with the one without the presence of magnetic fluxes (i.e. g, g ′ = 0 and ν ′ 0 = 0) as given in [32] 13 as where we have set k = 2l − 1 and the even k doesn't contribute to this rate. So it is clear for each odd k = 2l − 1, there is a greatly enhanced factor 12 We here assume that ν ′ 0 is fixed in the range of 0 < ν ′ 0 < 1, not considering the case of ν ′ 0 → 0. 13 This can also be obtained from (102) by setting g, g ′ , ν ′ 0 → 0.
where the superscript 'l' denotes the l-th term in the corresponding rate summation. For small enough ν 0 and reasonable large ν ′ 0 , this enhancement can be very significant. Now the corresponding rate can be approximated by the leading k = 1 or l = 1 term and the enhancement is Let us make the same sample numerical estimation of this enhancement as in [31] for p = 3 to demonstrate its significance. It has a value of 3.2 × 10 35 , a very significant enhancement, for ν 0 = 0.02, ν ′ 0 = 0.5. This can be achieved using (95) via a moderate choice of g 1 = −g 2 = 1 (noting |g a | < ∞) and In spite of this, in order to be physically significant, the rate itself in string units needs to be large enough, not merely the enhancement factor. The rate in string units for the above sample case can be estimated to be with a typical choice of n 1 = n 2 = 10. As discussed in [31], the rate for p = 3 is the largest and the rate for p > 3 is at least smaller by a factor of (ν 0 /4π) 1/2 ≈ 0.04, i.e. two orders of magnitude smaller, for the sample case considered. For p = 3, this rate (2πα ′ ) 2 W(ν ′ 0 = 0.5) = 0.61, quite significant, at y = π √ α ′ + 0 + ≈ π √ α ′ , a few times of string scale and before the onset of tachyon condensation, but decreases exponentially with the separation square y 2 for y > π √ α ′ . For example, the rate becomes half of its maximal value at y − π √ α ′ ≈ 0.01 √ α ′ , just 1% of the string scale. We will come back to discuss the significance, implications and potential applications of this enhanced pair production rate later in section 6. For now, we move to the second subcase in this section.
The magnetic-magnetic case
We consider the second subcase as given in (83) with two fluxes, both magnetic, for 3 ≤ p ≤ 8. Specifically, we have one set of Dp branes carrying the fluxF ′ and the other set carrying the fluxF aŝ where both matrices are (1 + p) × (1 + p). With them, following the same steps as before, we have the eigenvalues λ 1 λ 2 λ 3 = 1, , and λ 0 = λ 4 = · · · = λ p = 1. The zero-mode matrix element (33) for the present case in the R-R sector can be determined to be We have then the amplitude, from (31), in the NS-NS sector as where each term involving the eigenvalues λ 1 , λ 2 and λ 3 in the infinite product can be simplified, using the relations given in (111), as where λ + λ −1 = λ 1 + λ 2 + λ 3 − 1, .
The above NS-NS amplitude can now be expressed as where A n (ηη ′ ) is defined in (43) but for now with the λ given by (115). We have then the GSO-projected NS-NS amplitude By the same token, using (112), we have the amplitude, from (32), in the R-R sector as where B n is defined in (47) but again for now with the λ given in (115). Then the GSO-projected amplitude in the R-R sector is We have then the total amplitude As before, we define λ = e 2πiν for the purpose of expressing this amplitude in a useful form which can facilitate its analysis. We then have from (115) , where we have set ν = ν ′ 0 with 0 < ν ′ 0 < 1, denoting its magnetic nature as before. With this, the amplitude (120) can now be expressed in terms of various θ-functions and the Dedekind η-function as where in the second equality we have used the θ-function identity (51) and once again |z| = e −πt < 1. Note that each factor in the integrand of the last equality is positive, therefore this interaction is attractive as expected. One expects to see an open string tachyon mode to appear and for this, we need to pass the above tree-level closed string cylinder to the open string one-loop annulus amplitude via the Jacobi transformation t → t ′ = 1/t. Using the identities for the Dedekind η-function and θ 1 -function given in (55), we have the open string annulus amplitude, from the second equality in (122), as where in the second equality we have dropped the prime on t and once again |z| = e −πt < 1. From the second equality above, we see the factor in the integrand which blows up if y < π 2ν ′ 0 α ′ , signaling the onset of tachyonic instability. Once again we see that the magnetic fluxes give rise to the open string tachyon mode.
In summary, we have further confirmed what has been learned in the previous section using systems with different structures of fluxes as discussed in this section. In other words, the electric flux(es) give rise to the non-perturbative open string pair production while the magnetic one(s) give rise to the tachyon mode. When both of these are present, the interplay of these two gives to the enhancement of the pair production rate. The pair production enhancement revealed in this section for small ν 0 parameter becomes more suitable in realistic application since it does not necessarily need large fluxes. For this reason, it is more useful as we will discuss in section 6. In the following, we will discuss the only remaining case for 8 ≥ p ≥ 4 which involves only two magnetic fluxes sharing no common field strength index. As expected, this gives only attractive interaction and a tachyonic instability at small brane separation if we use the open string annulus diagram.
The 8 ≥ p ≥ 4 case
We have only one case to discuss in this section for which we have two magnetic fluxes sharing no common field strength index. The structure of the fluxF on one set of Dp branes can be cast without loss of generality aŝ while on the other set we have the same structure for the fluxF ′ but with a prime to distinguish from the former. With them, again following the same steps as before, we can determine the corresponding eigenvalues λ α with α = 0, 1, · · · , p as λ 0 = λ 5 = · · · = λ p = 1, where λ and λ ′ satisfy, respectively, , .
Since both λ and λ ′ are magnetic nature, for the purpose of expressing the interaction amplitude in terms of various θ-functions and the Dedekind η-functions as before, we set λ = e 2πiν ′ 10 and λ ′ = e 2πiν ′ 20 . Then from (127), we have where 0 < |g 1 |, |g 2 |, |g ′ 1 |, |g ′ 2 | < ∞ but with 0 < ν ′ 10 < 1 and 0 < ν ′ 20 < 1, respectively. In particular, 1/2 > ν ′ a0 > 0 for −1 < g a g ′ a < ∞, ν ′ a0 = 1/2 for g a g ′ a = −1 and 1 > ν ′ a0 > 1/2 for −∞ < g a g ′ a < −1. Here a = 1, 2, respectively. The zero-mode matrix element (33) for the present case in the R-R sector can be determined to be With the above preparation, we can obtain the amplitude in the NS-NS sector as where in the first equality we have used (31) for Γ NSNS (±) and in the second equality A n (±) are defined in (88) but here with λ and λ ′ given in (127). Similarly, the amplitude in the R-R sector can be obtained as where in the first equality we have used (32) for Γ RR (±) and also (129) for the zero-mode matrix element, and in the second equality B n is defined in (90) but again with the present λ and λ ′ given in (127). So the total amplitude Γ = Γ NSNS + Γ RR , where in obtaining the third equality we have used the identity (97) for θ-functions and once again |z| = e −πt < 1. Note that every factor in the integrand in the last equality is non-negative, so Γ ≥ 0, which vanishes only if ν ′ 10 = ν ′ 20 and otherwise gives an attractive interaction as expected.
The small brane separation behavior of the amplitude can be best seen in terms of the open string one-loop annulus amplitude which can be obtained from the third equality of (132) via the Jacobi transformation t → t ′ = 1/t. Using the relations for the Dedekind η-function and the θ 1 -function in (55), we have the annulus amplitude as where in the second equality we have dropped the prime on t and again |z| = e −πt < 1. From this amplitude, it is also clear that Γ = 0 only if ν ′ 10 = ν ′ 20 and otherwise it is greater than zero, therefore giving an attractive interaction. For large t, we have an exponentially growing factor if ν ′ 10 = ν ′ 20 in the integrand which indicates the existence of an open string tachyon mode as expected. There is a tachyonic instability to occur when y < π 2|ν ′ 10 − ν ′ 20 |α ′ . Given what we have learned in the previous sections, the nature of the interaction as well as the onset of tachyonic instability is expected.
Conclusion and discussion
In this paper, we consider a system of two sets of Dp branes placed parallel at separation with each carrying two worldvolume fluxes. We focus here on that the two fluxes on one set of Dp branes are the same in structure but different in values as those on the other set. We give a systematic account of computing the stringy amplitude for each allowed such system and analyzing the analytical behavior of this amplitude. We have learned that when the fluxes are electric in nature, they in general give rise to the nonperturbative Schwinger-type open string pair production. On the other hand, when the fluxes are magnetic in nature, they give rise to an open string tachyon mode and there will be the onset of tachyonic instability and its subsequent tachyon condensation when the brane separation is smaller than a certain value determined by the fluxes. The interplay of the non-perturbative open string pair production and the tachyon mode leads to the open string pair production enhancement in certain cases when one flux is electric and the other magnetic. In particular, we find this enhancement even when the electric flux and the magnetic one share one common field strength as reported in the subcase 3 in subsection 3.2, which is quite unexpected since there is no such enhancement in the oneflux case studied previously by the present author and his collaborator in [17] . This pair production enhancement can have potential realistic applications which we will discuss later in this section.
When the two fluxes share one common field strength index, one can examine all the corresponding closed string tree-level cylinder amplitudes computed in the previous sections and find that they can be cast in general as where the ν parameter is determined in the previous sections, i.e. (49), (65) and (121), respectively. When ν is real, it can be set ν = ν ′ 0 with 0 < ν ′ 0 < 1. The corresponding fluxes are magnetic in nature. The interaction amplitude gives an attractive interaction between the two sets of Dp branes until y = π 2ν ′ 0 α ′ when the corresponding open string tachyon condensation occurs. So when the amplitude is expressed in terms of the open string annulus one via the Jacobi transformation t → t ′ = 1/t, we can see the onset of tachyonic instability by noticing an exponential divergent factor in the integrand of the amplitude by setting t ′ → ∞ when y < π 2ν ′ 0 α ′ . For this case, the only way the system can give off its excess energy due to the applied fluxes is via the tachyon condensation and when this is done, the system becomes 1/2 BPS just like each set of the system. The necessary condition for this is to have Γ = 0, which determines the allowed fluxes.
When ν is purely imaginary, it can be set ν = iν 0 with 0 < ν 0 < ∞. The corresponding fluxes are electric in nature. The large brane separation interaction is still attractive but the small brane separation one is rich in physics. In analog of the Schwinger pair production in QED, we know that there will be open string pair production for this case beforehand. This manifests itself again when we express the interaction amplitude in terms of the open string annulus one, implying an imaginary part of the amplitude. When the brane separation is large, the mass of the open string connecting the two sets of Dp brane, which equals to the string tension times the brane separation, is large, therefore the open string pairs are difficult to be produced from the vacuum. So for large brane separation, the energy loss due to the pair production can be ignored and the amplitude has almost no imaginary part. However, when the brane separation is small, the pair production becomes important and the imaginary part of the amplitude, giving the pair production rate, can no longer be ignored which can be computed following [14] as we did in the previous sections. The larger the ν 0 is, the larger the pair production rate. In particular, the rate diverges when ν 0 → ∞, corresponding to the critical electric flux(es). If the parameter ν 0 is not large, for example, ν 0 < 1, the pair production rate is in general small even at brane separation y = 0 and we may treat the pair production as an adiabatic process until the system becomes again 1/2 BPS one for which the pair production stops. This can be determined by Γ = 0 which gives a condition for which the fluxes need to satisfy. Note that the pair production is the process to give off the excess energy of the system before it becomes 1/2 BPS. For this case, there is no open string tachyon mode which appears a bit unexpected since the system itself is not supersymmetric before it becomes 1/2 BPS. One possible explanation to this puzzle is that, unlike the previous magnetic case for which the tachyon condensation serves as an only means to give off the system excess energy at 14 y < π 2ν ′ 0 α ′ , the pair production gives off the system excess 14 If we extrapolate this to ν ′ 0 = 0, it would imply that the tachyon condensation occurs at y < 0 which is impossible and this may also serve to explain the absence of tachyon mode in the pure electric case. energy at any brane separation to relax the system back to 1/2 BPS one and for this the tachyon mode can hardly manifest itself in the annulus amplitude.
When ν is complex as discussed in subcase 3 in subsection 3.2, we have ν = 1 − iν 0 with 0 < ν 0 < ∞. This case is impossible when each set of branes carries only one-flux as addressed previously in [17] by the present author and his collaborator. As discussed in subcase 3 in subsection 3.2, the real part '1' of ν is actually due to the magnetic fluxes applied while the imaginary part ν 0 is due to both the electric and magnetic fluxes. The large brane interaction is still attractive while the small brane separation behavior of the amplitude can be best seen as usual in terms of the open string annulus amplitude. One expects the amplitude to have an imaginary part, resulting from an infinite number of simple poles of its integrand, to give rise to the open string pair production. Moreover the real part unity of ν gives an enhancement of the pair production and this is the first pair production enhancement reported in this paper which is similar in spirit to the enhancement of pair production discussed in subsection 4.1. For large ν 0 , this enhancement plays less important role and the behavior of the pair production rate is more or less the same as the pure electric case discussed above. The most interesting and useful case is for small but fixed ν 0 for which the enhancement is important. The pair production rate (79) can now be approximated as (2πα ′ ) where we have set g = a cosh θ, f = a sinh θ, g ′ = a ′ cosh θ ′ , f ′ = a ′ sinh θ ′ . For small ν 0 , this rate can be possibly significant if y = π √ 2α ′ + 0 + and both |a| and |a| ′ are large. Further the smallest allowed p = 2 gives the largest rate when the fluxes are taken the same for all 2 ≤ p ≤ 8. This can be interesting academically but in potentially realistic applications, we cannot have large |g| or |g ′ | or both since large |a| or |a ′ | or both imply them. Further if ν 0 ≪ 1, the above rate cannot be significant even with large |a| and |a ′ |. However, there is an exception if we are allowed to have the brane separation y < π √ 2α ′ . If so, large magnetic fluxes are not needed to have a significant pair production rate. The rational for the present case is the same as for the case when the electric flux and the magnetic one do not share a common field strength spatial index which we will turn next. So we will leave this discussion in appropriate place there.
When the two fluxes share no common field strength index, the closed string cylinder amplitude can also be cast in general as Γ = 2 2 in 1 n 2 V p+1 [det(η +F ′ ) det(η +F )] 1 2 sin πν sin πν ′ (8π 2 α ′ ) it η 6 (it)θ 1 (ν|it)θ 1 (ν ′ |it) , where the ν and ν ′ parameters are determined in the previous sections, i.e. (95) and (128), respectively. For large brane separation, all the systems considered have a well-defined finite attractive interaction for p < 7. When both ν and ν ′ are real with 0 < ν < 1 and 0 < ν ′ < 1, the corresponding fluxes are all magnetic. For this case, the interaction amplitude is positive, implying attractive interaction, until the brane separation y = π 2|ν − ν ′ |α ′ for which the open string tachyon condensation occurs. As before, the onset of tachyonic instability can be best seen in terms of the open string annulus amplitude and the tachyon condensation once again serves as the only means to give off the excess energy of the system to finally settle it down to its 1/2 BPS state.
When one flux is electric and the other is magnetic, we have, say, ν = iν 0 with 0 < ν 0 < ∞ and ν ′ = ν ′ 0 with 0 < ν ′ 0 < 1. Once again we expect a significant open string pair production at small brane separation and the pair production rate is given in general by (102). The large ν 0 case is not different from the previous cases and once again the magnetic flux plays a minor role. We focus here on the realistic useful case for which we have small but fixed ν ′ 0 and ν 0 with ν ′ 0 /ν 0 ≫ 1. For this, the dimensionless pair production rate can be approximated from (102) as Small ν 0 implies small |f − f ′ |. In other words, the electric flux on one set of Dp branes is almost identical to that on the other set of Dp branes. Since 0 < ν ′ 0 < 1, so small but fixed ν ′ 0 does not necessarily imply small magnetic fluxes |g| and |g ′ |. As our sample estimation demonstrates in Case III in subsection 4.1, the largest rate is for p = 3 and can be significant for a reasonable choice of ν 0 , ν ′ 0 when y = π 2ν ′ 0 α ′ + 0 + . This appears that we can have a real experimental possibility for exploring the existence of extra dimension(s) and as such for testing string theories if we assume to live in a (1 + 3)-dimensional world which are D3 branes. As discussed and stressed in Introduction and at various points in the previous sections, the open string pair production gives rise to the open string pairs connecting the two sets of Dp branes and therefore they are directly related to the existence of extra dimension(s). Since this is based on string theories, a detection of open string pair production indicates not only the existence of extra dimension(s) but also the correctness of string theories. For an observer living on one set of D3 branes, she/he can only detect the ends of each produced open string pair as a particle/anti-particle pair. However, there is a sharp distinction between the pair production here and the Schwinger pair production, say, in QED. In string theories, if the set of D3 branes carrying the same fluxes is an isolated one, the observer living on this set will not detect pair production since a charged neutral string with its both ends on the D3 branes will not give rise to the pair production. While this is not the case for Schwinger pair production, say, in QED. Further the Schwinger pair production on the magnetic flux, even the non-linear effect is considered, is different.
If we indeed want to put the above into test in real-life experiment, for example, in real-life laboratory, the electric flux and the magnetic flux are both very small compared to the string scale even with the consideration of some D-brane phenomenological one, say, around 10 TeV. In what follows, we still want to keep ν ′ 0 /ν 0 ≫ 1. One of the good things for such small fluxes is that we do not expect the pair production along with possible tachyon condensation to perturb the original brane system much. Their roles are to release the tiny excess energy, due to the applied fluxes, in comparing with the rest energy of the original system. So we may expect that the pair production rate can be valid even to zero brane separation. Let us demonstrate this in the most useful case of p = 3 and for this case the rate (138) is Even though the large k term in (139) appears divergent for y < π 2ν ′ 0 α ′ , indicating the tachyonic instability, the rate itself is actually finite if we perform the summation and the result is For large brane separation, this rate gives the leading k = 1 term approximation of (139), as expected, which remains as a reasonably good approximation until y = π 2ν ′ 0 α ′ . There is no divergence in this rate even for y < π 2ν ′ 0 α ′ down to y = 0, giving a fair justification of our above assertion. For y = 0, we have the rate as (2πα ′ ) 2 W(ν ′ 0 = 0) = n 1 n 2 |g − g ′ | 2 /(2π), which can be significant in terms of laboratory scale. As discussed in [31], the rate for p > 3 is at least smaller by an order of (ν 0 /4π) 1/2 , which can be a few order of magnitude smaller for ν 0 using real-life laboratory electric flux(es). So the detection of open string pair production can single out D3 branes as the most preferable to its observer, if he/she just like us knows about string theory. The produced large number of open string pairs can in turn annihilate to give, for example, highly concentrated high energy photons if the fluxes are localized on the branes and this may have observational consequence such as the Gamma-ray burst. This same type of pair production and its subsequent annihilation, if happens at our early Universe, may also be useful in providing a new mechanism for reheating process after cosmic inflation. | 20,797 | 2018-01-10T00:00:00.000 | [
"Physics"
] |
Coherent optical coupling to surface acoustic wave devices
Surface acoustic waves (SAW) and associated devices are ideal for sensing, metrology, and hybrid quantum devices. While the advances demonstrated to date are largely based on electromechanical coupling, a robust and customizable coherent optical coupling would unlock mature and powerful cavity optomechanical control techniques and an efficient optical pathway for long-distance quantum links. Here we demonstrate direct and robust coherent optical coupling to Gaussian surface acoustic wave cavities with small mode volumes and high quality factors (>105 measured here) through a Brillouin-like optomechanical interaction. High-frequency SAW cavities designed with curved metallic acoustic reflectors deposited on crystalline substrates are efficiently optically accessed along piezo-active directions, as well as non-piezo-active (electromechanically inaccessible) directions. The precise optical technique uniquely enables controlled analysis of dissipation mechanisms as well as detailed transverse spatial mode spectroscopy. These advantages combined with simple fabrication, large power handling, and strong coupling to quantum systems make SAW optomechanical platforms particularly attractive for sensing, material science, and hybrid quantum systems.
Non-Collinear Brillouin-like Optical Coupling to Surface Acoustic Waves
The SAW device consists of a Fabry-Perot Gaussian surface acoustic wave cavity on a single-crystalline substrate formed by two acoustic mirrors composed of regularly spaced curved metallic reflectors. Two non-collinear optical beams, a pump field, and a Stokes field, are incident in the region enclosed by the acoustic mirrors (Fig. 1a). The confined Gaussian surface acoustic mode can mediate energy transfer between the two optical fields provided phase-matching (momentum conservation) and energy conservation relations are satisfied, as is the case with Brillouin scattering from bulk acoustic waves 56 . For pump and Stokes fields with wavevector (frequency) ⃗⃗⃗⃗ ( ) and ⃗⃗⃗⃗ ( ), respectively, that subtend equal but opposite angles, , with respect to the surface normal (z-axis), the optical wavevector difference can be approximated as Δ ⃗⃗⃗⃗ ≈ 2 0 sin ̂, assuming ≈ = 0 and for ̂ a unit vector parallel to the surface; the corresponding optical frequency difference is Δω = − (Fig. 1b). Note that the magnitude of the optical wavevector difference is tunable by the optical angle of incidence. For the case of freely propagating surface acoustic waves, the acoustic dispersion relation is linear and can be Figure 1. Parametric optomechanical interactions mediated by Gaussian SAW resonators. a) Two noncollinear traveling optical fields are incident on a Fabry-Perot type Gaussian SAW resonator; interaction between the two optical fields is mediated by a Gaussian SAW cavity mode confined to the surface of the substrate. b) Phase-matching diagram of the parametric process. The vectorial optical wavevector difference, ⃗ = ⃗⃗⃗⃗ − ⃗⃗⃗⃗ = 2 0 ̂, is angle-dependent and points along the direction of SAW cavity axis. c) The acoustic dispersion relation ( ) is discretized in the presence of a SAW cavity. The final optomechanical response is determined by the modes, which are both within the phase-matching and the acoustic mirror bandwidth (blue dots), while radiating longitudinal modes excluded by the acoustic mirror and the optical phase matching (grey dots) do not yield an optomechanical response. d) Finite element calculation of the acoustic displacement magnitude, | |, in a SAW cavity along [100] direction on [100] GaAs illustrating the Gaussian mode (upper panel) with the designed acoustic waist of = 3 and an approximate penetration depth of ~ (lower panel). Panels e) and f) display YX cross-sections of acoustic displacement for e) anti-symmetric and f) symmetric higher-order transverse modes of the SAW cavity. expressed as Ω = qv R , where Ω, , and v R are the phonon frequency, phonon wavevector, and Rayleigh SAW velocity, respectively. The phase-matched phonon wavevector ( 0 ) and frequency (Ω 0 ) are then given by the relations: 0 = Δ = 2 0 sin and Ω 0 = 0 v R . A propagating SAW, therefore, yields a single-frequency optomechanical response, similar to the standard Brillouin response in bulk materials from propagating longitudinal waves. However, the accessible phonon spectrum is significantly modified in the presence of a surface acoustic cavity and optical beams with finite beam sizes, as illustrated by the modified acoustic dispersion plot in Fig. 1c. First, because standing SAW cavity modes are formed, the phonon wavevectors and frequencies become discretized to specific values = eff and Ω = v R , respectively, characterized by mode number , where the free spectral range of the cavity is ΔΩ = v R eff , and eff is the effective cavity length. Second, unlike ideal mirrors, acoustic Bragg mirrors only efficiently confine a finite number of longitudinal modes (blue circles in Fig. 1c) determined by the reflectance and periodicity of the metallic reflectors 1 . Finally, because the optical fields are Gaussian beams with finite spatial extents, appreciable optomechanical coupling exists over a range of optical wavevectors values centered around the phase-matched configuration, Δ = . The effective optomechanical coupling rate to the cavity mode , 0 , varies as a function of optical wavevector mismatch as 0 (Δk) ∝ exp (−(Δk − q m ) 2 /δk 2 ), where = 2√2/ 0 and 0 is the radius of incident optical fields. Equivalently, the coupling rate can be expressed as a function of the angle of incidence as 0 ( ) ∝ exp (−(θ − θ m ) 2 /δθ 2 ), for small angles such that sin ≈ and where is the phase-matching angle of the acoustic cavity mode given by = 2 0 . The corresponding angular bandwidth is given as = 0 = √2 0 0 (see Section S2 of supplementary information). The resultant optomechanical spectrum consists of several discrete resonances from SAW cavity modes which lie both within the acoustic mirror and optical phase-matching bandwidths (unconfined, radiative, longitudinal modes are indicated by grey circles in Fig. 1c).
Gaussian SAW cavities are designed to achieve small acoustic mode volume and appreciable coupling strengths (see Methods and section S3 of supplementary information). Diffraction losses are mitigated by accounting for the anisotropy of the acoustic group velocity on the underlying crystalline substrate 57 . GaAs is chosen because of its large photoelastic response, ease of fabrication, and integration with other quantum systems such as qubits. 3-dimensional numerical finite element simulations are performed of a SAW cavity on [100]-cut GaAs oriented along [100]-direction. The acoustic wavelength of = 5.7 and Gaussian waist, = 3 , are near-identical to the experimental devices described below, while the number of reflectors and the mirror spacing are reduced to maintain computational feasibility (see section S3 of supplementary information). The simulated cavities display a series of stable SAW cavity modes with Hermite-Gaussian-like transverse profiles (Fig. 1d-f) separated by the free spectral range of the cavity. As expected, the modes are confined to the surface and steeply decay into the bulk of the substrate (e.g. lower panel Fig. 1d). The observed beam waist of the fundamental Gaussian mode agrees well with the designed full-waist (2 ) of 6 . Higher-order anti-symmetric (Fig. 1e) and symmetric (Fig. 1f) mode solutions are also observed.
Optomechanical spectroscopy of SAW cavities
To demonstrate coherent optical coupling to SAW devices, Gaussian SAW cavities are fabricated (see Methods and section S4 of supplementary information) on a single crystal GaAs substrate (inset Fig. 2a).
Optomechanical measurements are made for two sets of cavities, one oriented along the crystalline [110] direction, which is piezo-active, and one along the [100] direction, which is piezo-inactive. The cavities are designed for an acoustic wavelength of ≈ 5.7 ( ≈ 7.8 ∘ ), acoustic waist of ≈ 4 and mirror spacing of ∼ 500 . The cavity parameters are chosen to optimize for practical constraints including finite optical apertures, electronics bandwidths, and the optical beam sizes. The effective cavity length ( eff ) is calculated to be ∼ 610 by accounting for the penetration depth into the mirrors 1,58 . The large mirror separation relative to the optical beam size minimizes absorptive effects arising from spatial overlap of optical fields with acoustic metallic reflectors (See Section S9 of supplementary information). , which is consistent with the designed acoustic mirror bandwidth. High-resolution spectral analysis of one of the observed SAW cavity resonances of the piezoinactive (active) cavity ( Fig. 2b(d)) reveals a spectral width, Γ/2 , of 4 kHz (72 kHz), corresponding to an acoustic quality factor of 120,000 (7000). The measured traveling-wave zero-point coupling rate 59,60 , 0, for the piezo-inactive(active) cavity of 2 × 1.4 (2 × 1.7 ) is consistent with predicted values of 2 × 1.9 (2 × 1.8 ) obtained using known material parameters in conjunction with the device geometry (see section S1 and section S7 of supplementary information). As expected, no measurable acoustic response is observed when either of the optical drive tones is turned off. Additionally, as predicted by theoretical coupling calculations (see section S1 of supplementary information), no resonance is observed when the acoustic drives are orthogonally polarized to each other, or when the LO is orthogonally polarized to the incident probe (see section S11 of supplementary information). The demonstrated acoustic quality factors of the SAW devices are among the highest Figure 3. Measured spatial mode spectrum and angular dependence. a) The optomechanical response from higher-order acoustic modes is observed in a [100]-oriented cavity with a mirror spacing of ≈ 350 by laterally displacing the optical fields as illustrated in the inset figure. The higher-order frequency spacing of 1.4 MHz, is consistent the theoretical estimate. b) Finite element simulations of the corresponding acoustic mode profiles. c) Optomechanical coupling strength as a function of angle of incidence with a Gaussian fit overlayed. d) A graphical representation of the positions of the phase-matching envelope relative to the acoustic mirror response (not to scale) for illustrative angles of incidence indicated with the same color outline as for the respective points in c). Left: The phase-matching envelope coincides with the mirror response for maximal optomechanical coupling strength. Right: The phase matching envelope is detuned from the mirror-defined SAW mode resulting in a weaker optomechanical response. measured for focused SAW cavities on any substrate, corresponding to an product of 6 × 10 13 , which is also comparable to that of the best electromechanical SAW devices 15,58,61 . Moreover, the accessed SAW cavity modes are along electromechanically inaccessible directions, demonstrating a key merit of the coherent optical coupling in enabling access to long-lived SAW modes regardless of their piezoelectric properties. The larger relative loss in the [110]-oriented cavities is consistent with excess ohmic loss from the metallic reflectors owing to non-uniform strain from the Gaussian modes and the resulting piezoelectric potential on the reflectors 62-64 .
While the high-order spatial modes of a Gaussian SAW resonator are challenging to probe electromechanically, the coherent optomechanical technique allows for precise and direct excitation of spatial modes through fine control of the optical spatial overlap with specific acoustic mode profiles. By laterally displacing the optical beams away from the cavity axis, optomechanical coupling to a specific higher-order SAW cavity mode is observed (green in Fig. 3a-b) in addition to the response from the fundamental mode (red in Fig. 3a-b). The frequency separation between the fundamental and corresponding higher-order mode of 1.4 is consistent with the predicted difference of 1.4 MHz. The exquisite spatial control available through optical techniques could form the basis of novel SAW-based spatially resolved sensing and metrology.
Finally, the accessible phonon-mode bandwidth determined by phase matching is characterized through measurements of the Brillouin coupling coefficient ( ∝ | 0 | 2 ) as a function of the angle of incidence of the optical fields (see section S2 of supplementary information). The coupling strength exhibits a Gaussian dependence on the angle (Fig. 3c) for peak coupling centered at 0 = 7.8 ∘ , with an angular bandwidth of 0.9 ∘ , which agrees with the predicted bandwidth of 0.96 ∘ . The peak coupling at 0 = 7.8 ∘ is determined by the angle of incidence of the optical fields and the resultant wavevector difference, but the acoustic mode frequencies are independently fixed by the cavity geometry and the acoustic mirror response. The effective optomechanical coupling rate is maximized when the center of the optical phase matching envelope coincides with the peak reflection frequency of the acoustic mirrors (point outlined with purple circle in Fig. 3c and illustrated in the left panel of Fig. 3d) and decreases as they are mismatched (cyan circle in Fig. 3c and illustrated in right panel of Fig. 3d). Because the optomechanical gain bandwidth and associated driven acoustic modes result from the optical spatial profiles, this technique presents the unique capability of tailoring the optomechanical gain profile for specific applications, from multi-mode optomechanics to tunable single frequency applications.
Non-contact Probing of SAW Cavity Dissipation Mechanisms
Acoustic dissipation is typically measured using electromechanical techniques, which also include the dissipation from external device structures such as the electrodes, electrical ports, and impendence matching circuits, limiting insights into material and structural dissipation mechanisms. In contrast, the coherent optical interaction is contact-free and not limited by these extrinsic effects. A direct probe into phonon loss mechanisms will be valuable for basic material science as well as for optimizing novel SAW device technologies. Here the coherent optical technique is used to determine the dominant loss mechanisms between SAW propagation and mirror losses for Gaussian resonators and to extract the temperature dependence of the dissipation. The acoustic quality factor is measured as a function of mirror separation (i.e. cavity length) and temperature for cavities in both the [100]-oriented (piezo- inactive) and [110]-oriented (piezo-active) cavities. The measured cavities are all designed to have identical parameters except for the mirror separation, which varies from 150 to 500 . The cavity lengths are chosen to minimize effects resulting from optical absorption in the metallic reflectors (See Section S9 of supplementary information). For both cavity orientations, the acoustic quality factor displays a linear dependence on cavity length (Fig. 4a, Fig. 4b), with the quality factor increasing for larger cavity lengths. A linear dependence on cavity length suggests that the losses in SAW cavities primarily occur within the acoustic mirrors through mechanisms such as scattering into the bulk, ohmic losses, and acoustic losses within the reflectors (see section S8 of supplementary information).
The dependence of quality factor on temperature is also investigated from = 4 to 160 K (Fig.4c-d) for a fixed mirror separations ( = 500 for the [100]-oriented cavities and = 350 for the [110]oriented cavities). The [100]-oriented cavities exhibit a sharp fall and a subsequent plateau at ~ 20,000 within the measured temperature range. In contrast, the [110] cavities exhibit a linear decrease of with temperature. These measurements suggest that while both cavity types are limited by losses occurring within the acoustic mirrors, the specific mirror loss mechanisms likely differ. Previous measurements of Gaussian SAW cavities on GaAs without the potential for ohmic losses in the mirrors (through superconducting reflectors 65 as well as non-metallic reflectors 61 ) demonstrating large acoustic quality factors (2 × 10 4 ), suggest that the losses observed in the [110]-oriented, piezo-active, cavities primarily result from ohmic losses within the metallic reflectors. This is also consistent with the observations that the [100]-oriented cavities without piezoelectricity support much higher quality factors and have a distinct temperature dependent behavior. Additional insights could be derived from temperature dependent quality factor measurements at additional cavity lengths including longer lengths where the effects of mirror loss are reduced, from cavities where ohmic losses are reduced such as superconducting-mirror cavities, as well as from alternative cuts and material types. Importantly, because of the non-contact nature of coherent optical coupling, these measurements directly reflect intrinsic device properties, as opposed to details of the probe, providing a rich source of information across a wide range of relevant SAW device parameters. This is illustrated in Section S9 of supplementary information as well where the effects of optical absorption are clearly delineated from those of electrostriction through controlled measurements.
Discussion and Conclusion
This report introduces a powerful new coherent optomechanical platform in which two non-collinear optical fields parametrically couple through surface acoustic modes of Gaussian SAW cavities. The platform offers high power handling capabilities, requires minimal fabrication, and enables a contact-free piezo-electricity independent coupling to SAW devices enabling record-high quality factor devices in GaAs crystalline substrates. From the results presented here there are several directions in which specific metrics of interest for applications can be improved. For example, the principles outlined here can be readily applied for the coherent optical coupling of SAW cavity devices with frequencies of several GHz by changing the optical angle of incidence (see section S10 of supplementary information). The optomechanical coupling rate of devices can also be improved significantly through reduced acoustic mode volumes in cavities with smaller acoustic waists (see section S10 of supplementary information). Moreover, because the acoustic mode volume of the Gaussian SAW cavities scale inversely with the acoustic frequency, GHz SAW cavities naturally offer increased coupling strengths. Acoustic cavity losses can also be further improved by adopting etched groove reflectors in favor of metallic strips to eliminate both ohmic losses within the reflectors on piezoelectric substrates as well as additional acoustic losses within the reflectors.
A natural extension of the technique presented here would be to enclose the system within optical cavities. A SAW-mediated cavity optomechanical systems with an operation frequency of ~4 , Qfactors well exceeding 10 5 , and coupling rates comparable to nanomechanical systems( 0 2~1 0 kHz) could be achieved through straightforward improvements detailed in supplementary information section S10. The power-handling capability of this system, limited only by material damage, allows for large intracavity photon numbers ( > 10 9 ) which can consequently enable large optomechanical cooperativities ( om > 1000) (see section S10 of supplementary information). This platform therefore yields the high-power handling capability of bulk optomechanical systems 59,66 while also offering large coupling rates, small sizes, and ideal integrability to quantum systems and sensing devices.
A SAW cavity-optomechanical platform may have several straightforward applications. Strain fields of surface acoustic phonon modes can be readily coupled to a range of qubit systems, including spin qubits, quantum dots, and superconducting qubits, enabling novel quantum transduction strategies. Optical coupling to several other strain-sensitive quantum systems, including superfluids and 2D materials, can also be realized, which could yield new fundamental insights into novel condensed matter phenomena. The SAW-based cavity optomechanical system could also serve as an alternate platform for microwaveto-optical transduction schemes circumventing conventional challenges such as poor phonon-injection efficiencies, low-power handling capabilities, and fabrication challenges 7,49,50 . Beyond novel devices for quantum systems, the demonstrated techniques and devices also represent an attractive strategy for realizing a new class of non-contact all-optical SAW-based sensors with targets ranging from small molecules to large biological entities including viruses and bacteria, without electrical contacts or constraints. Moreover, in contrast to prior electromechanical techniques, the material versatility available to the optomechanical coupling presented enables broadly applicable material spectroscopy for basic studies of phonons and material science.
In summary, here we demonstrate coherent optical coupling to surface acoustic cavities on crystalline substrates. A novel non-collinear Brillouin-like parametric interaction accesses high-frequency Gaussian SAW cavity modes without the need for piezo-electric coupling, enabling record cavity quality factors. Optomechanical coupling in SAW cavities could be enabling for hybrid quantum systems, condensed matter physics, SAW-based sensing, and material spectroscopy. For hybrid quantum systems, this interaction, in conjunction with demonstrated techniques of strong coupling of SAWs to quantum systems (e.g. qubits, 2D materials, and superfluids), could form the basis for the next generation of hybrid quantum platforms. For sensing, this platform could enable a new class of SAW sensors agnostic to piezoelectric properties and free of electrical constraints and resulting parasitic effects. Finally, the coherent coupling technique enables detailed phonon spectroscopy of intrinsic mechanical loss mechanisms for a wide array of materials without the limitations of extrinsic probing devices.
Methods
Device Fabrication: To fabricate the GaAs devices, a single crystal [100]-cut GaAs is coated with a PMMA polymer layer and the required reflector profiles are drawn on the polymer with an e-beam lithography tool. Subsequently, the required thickness of metal, in this case 200nm Aluminum, is deposited using an ultra-high vacuum e-beam evaporation tool system. Finally, the excess polymer is removed using an acetone bath to obtain the experimental devices. A more detailed description of the device fabrication is provided in section S4 of supplementary information.
Numerical Methods: Determining the exact acoustic reflector profiles requires the SAW group velocity as a function of angle from the chosen SAW cavity axis, i.e., the anisotropy of the substrate. This is calculated by numerically solving acoustic wave equations with appropriate boundary conditions. To efficiently confine SAW fields, the shape of the reflector must match the radius of curvature of the confined gaussian mode. The calculated group velocity can then be used to determine the radius of curvature of the reflectors as a function of the axial location and angle from the cavity axis ( ( , )). These reflector profiles are imported into finite element software to validate the cavity designs by verifying the stability of high-Q Gaussian-like SAW modes ( Fig. 1d-1f). A detailed description of the FEM simulation procedure is provided in section S3 of supplementary information.
Phonon Spectroscopy: A sensitive phonon-mediated four-wave mixing measurement technique is developed, building off of related techniques for measuring conventional Brillouin interactions. The SAW cavity mode is driven with two optical tones, which are incident at angles designed to target specific phonon frequencies. A probe beam at a disparate wavelength incident collinear to one of the drive tones scatters off the optically driven SAW cavity mode to generate the measured response. The angle of incidence of the optical fields is controlled through off-axis incidence on a well-calibrated aspheric focusing lens (see section S6 of supplementary information). The optomechanically scattered signal is collected on a single-mode collimator and spectrally filtered using a fiber-Bragg grating to reject excess drive light. The resulting signal is combined with a local oscillator (LO) and measured with a balanced detector (see section S5 of supplementary information). The measured signal is a coherent sum of frequency-independent Kerr four-wave mixing in the bulk of the crystalline substrate and the optomechanical response, giving rise to Fano-like resonances. This spectroscopy technique can resolve optomechanical responses with < fW optical powers. A detailed description of the experimental apparatus and the angle tuning technique is provided in sections S5 and S6 of supplementary information, respectively.
S1. Theoretical Estimation of Optomechanical Coupling Strength
In this section, we derive an analytic expression for the coupling rate for parametric coupling between traveling-wave non-collinear optical fields and a standing wave SAW cavity mode. The assumptions made during the derivation are minimal and provide a useful alternative to computationally intensive FEM calculations. A simple analytical expression allows the extraction of dependencies between coupling rate and material parameters.
The system is modeled as follows (Fig. S1)-a semi-infinite crystalline medium occupies the region
A) TE-polarized optical fields
In the case of TE-polarized optical fields, the electric field for the pump (E p ) and Stokes (E s ) fields outside the medium close to the surface (z → 0 + ) are given as- Where A p (A s ) and k p (k s ) refer to the amplitude and wavevector of the pump (Stokes) field, respectively. Since the acoustic frequencies are much smaller than optical frequencies, we assume k p ≈ k s = k 0 . r ox and r oy refer to the effective optical beam radius along the x and y-axis, respectively. Since the fields are incident at an angle, the resultant distribution on the surface is not symmetric along the x and y-axes. The effective beam waist along the x-axis (r ox ) can be expressed as r ox = r 0 / cos θ, while the beam waist along the y-axis remains unchanged, r oy = r 0 , where r 0 is the incident beam radius. The electric fields on the other side (z → 0 − ) of the interface can be readily derived from Eqn.1 and Eqn. 2 by multiplying the appropriate Fresnel transmission coefficients (τ(θ) ) as follows.
Accounting for the Gaussian mode profile of the m th SAW cavity mode, the acoustic field can be expressed as 1,2 Where, u x , u z are the x and z-component of the acoustic displacement, respectively.
U 0 , q m , r ax , r ay refer to the amplitude, wavevector of the m th cavity mode, waist of the cavity mode along the x-axis given by r ax = L eff /2 , and waist along the y-axis, respectively. η, ϕ, γ are material-dependent parameters obtained by solving the acoustic wave equation with appropriate boundary conditions 1,2 . 'c.c' refers to the complex conjugate of the term preceding it.
Given the acoustic and electric fields, we define the traveling wave coupling rate (g 0 ) based off conventional definitions as 3 - Where ω, f, u refer to the optical frequency, optical force distribution, and acoustic field distribution. The acousto-optic overlap is defined through an overlap integral given by-⟨f ⋅ u⟩ = ∫ f ⋅ u * dV . Note that the traveling-wave coupling rate, as defined in this work, is similar in form to the coupling rate defined in conventional cavity optomechanics 3,4 , except that the integrals are only performed over the interaction volume and not over the entire optical cavity, as would be the case in standard cavity-optomechanical calculations 4,5 . An equivalent cavity optomechanical ( g 0 c ) coupling rate can be derived from g 0 as follows- Where l a is the effective length of the interaction volume as used when determining g 0 , and l opt is the length of the optical cavity. Alternatively, the traveling-wave coupling rate, g 0 , can be understood as being the largest possible cavity optomechanical coupling rate possible in an SAWbased cavity optomechanical system, achieved when the optical cavity mode and acoustic mode perfectly overlap, i.e. l opt = l a . Where P p , P s are incident pump and Stokes powers, and Q m is the mechanical quality factor. The coupling rate and the Brillouin gain coefficient are related as Next, we derive the acousto-optic contributions from optical forces, namely, radiation pressure and electrostriction on the surface and the bulk of the medium.
Radiation Pressure
Radiation pressure force denoted as P rp is given by 7 Where E p(s)t(n) , D p(s)t(n) refer to tangential (normal) pump (Stokes) electric and displacement fields. ϵ 0 , ϵ refer to the dielectric permittivities of the vacuum and the material, respectively.
Radiation pressure force points along the surface normal, i.e., along the positive z-axis. For TE fields E pn = E sn = 0, and the resultant expression for P rp can be simplified as follows-P rp (x, y) = 1 2 ϵ 0 (ϵ − 1)|τ| 2 A p A s * e −2x 2 /r ox 2 e −2y 2 /r 0 2 e i2k 0 sin θ x ẑ ( 12 ) The Since the integrals concerning the spatial variables x and y are independent, we separately calculate the two integrals The acousto-optic overlap resulting from radiation pressure forces can now be expressed as-
Photoelastic forces
Time-varying electric fields within a dielectric material can generate time-varying photoelastic optical forces. Photoelastic stresses resulting in optical forces are derived from the photoelastic tensor. For a material with a cubic crystalline lattice whose principal axes are oriented along the assumed cartesian axis, the stress tensor in the Voigt notation is now given 7,8 - Here we have invoked the crystal symmetry to assume 12 = 13 = 32 , this may not be true if σ xy = σ yz = σ zx = 0 ( 25 ) In a system comprising homogeneous materials, photoelastic forces can exist inside each material, resulting in body forces in the bulk of the medium and at material interfaces where discontinuous stresses are present, resulting in surface pressure (analogous to radiation pressure). We separately calculate the contribution to the acousto-optic overlap of photoelastic forces on the surface and within the bulk of the medium.
B) TE-polarized pump and TM-polarized Stokes optical fields
For the case of cross-polarized optical fields, the pump field is assumed to be TE-polarized, as in
Radiation Pressure
Since the pump and Stokes optical fields do not have overlapping non-zero electric field components (since they are perpendicularly polarized), the net radiation pressure is zero, and by extension, the acoustic overlap is zero.
Photoelastic forces
Photoelastic stresses for cross-polarized optical fields, as in Case A is, given by- Analogous to Eqn. 28, the photoelastic surface force is now given by- Similarly, the z-component of the photoelastic body force also yields no overlap ⟨f z ⋅ u z ⟩ = 0 As a consequence, the total bulk electrostriction overlap is-⟨f ⋅ u⟩ eb = 0 As a result, the total overlap and, consequently, the optomechanical coupling rate for this configuration is The absence of optomechanical coupling for the TE-TM scattering is primarily a result of the assumed crystal symmetry (cubic) and the resulting symmetry in the photoelastic tensor. In crystal structures with reduced symmetry, such as crystalline quartz and LiNbO3, SAW mediated optomechanical processes can couple orthogonal polarizations.
C) TM-polarized optical fields
For the case where both pump and Stokes fields are TM-polarized, the electric fields inside the medium close to the surface (z → 0 − ) are given as-
Radiation Pressure
Radiation pressure force, similar to cases A and B can be expressed as- 2 ) e i2k 0 sin θ x (cos 2 θ − ϵ sin 2 θ) ẑ The corresponding acousto-optic overlap is given as-
Photoelastic forces
The components of the photoelastic stress tensor are- The optomechanical coupling rate for the TM-TM scattering process is then be given by- Note that strength of the TM-TM scattering process strongly depends on the optical angle of incidence and at larger angles can be significantly stronger than TE-TE scattering process.
S2. Phase matching envelope
When calculating the acousto-optic overlaps in Section 2, as in Eqn. 19, optical fields were assumed to be perfectly phase-matched to acoustic modes Δk = q m . In the absence of the above assumption, the dependence of the optomechanical coupling rate on phase mismatch can be expressed as- where Δq = q m − Δk quantifies the phase mismatch, and parameter is defined in Eqn. 15 . For small θ , cos θ ≈ 1 and δk ≈ 2√2/r 0 . As detailed in the main text, for small angles, the coupling rate has a Gaussian dependence on phase mismatch, with a characteristic width given by the inverse of the optical beam size.
The dependence of the coupling rate on the angle of incidence can be derived by expanding the phase mismatch as Δq = q m − 2k 0 sin θ Δq = 2k 0 (sin θ m − sin θ) Where the phase matching angle θ m is defined as For small angles of incidence θ, sin θ ≈ θ, the phase mismatch can be approximated as The dependence of the coupling rate can now be expressed as- where δθ = Gaussian SAW cavities in this work are based on a Fabry-Perot cavity design where two acoustic Bragg mirrors, each consisting of numerous metallic strips, confine surface acoustic modes in the region enclosed between them. Each metallic strip reflects a small portion of the incident acoustic field, and the cumulative interference by all the reflectors achieves the desired acoustic confinement. The geometry of the cavity is specified by four independent parameters-the direction of the cavity axis relative to a principal crystal axis (shown along the x-axis in Fig. S2a), acoustic wavelength ( ), the acoustic beam waist at the center of the cavity ( ) and mirror separation ( ) (Fig. S2a). For the chosen cavity axis, the acoustic group velocity ( (Θ)) is ~ 100 and 0 = 3 . The thickenss of the metallic reflectors is specified as a fraction of the acosutic wavelength and is set to be = 0.035. This thickness found to be a good balance between achieving tight confinement and achieve small acoustic mode volumes (thick electrodes) and mitigate acoustic scattering into the bulk of the substrate (thin electrodes). This value of reflector thickness is in agreement with other similar works 12,13 . To minimize computational resources required, we leverage the symmetry in device and simulate one-fourth of the entire device. The FEM geometry of the device (Fig. S2b) consists of a substrate with a thickness of 3 is surrounded by phase matched layers with thickness of 2 . In all areas of the device, the mesh size is ensured to be less than or equal to /4. Simulated devices only have 50 as opposed to 200 in the fabricated devices to limit computational resources required for the simulation. Given the large number of nodes in the simulation, the simulations are run on a super computing cluster with 56 nodes and 500-1000 GB RAM.
S4. Device Fabrication
SAW resonator designs are translated on GaAs substrate via a standard e-beam lithography process (Fig. S3). First, we coat a double-side-polished GaAs chip with ∼ 500 nm thick PMMA polymer. The design is drawn onto the polymer with a beam of electrons using an electron-beam lithography tool (Fig. S3a). In the subsequent step, the polymer broken by e-beam exposure is washed away, resulting in a negative image of the pattern (Fig. S3b). Next, 200 nm thick Al film is deposited on the chip in an ultra-high vacuum e-beam evaporation system (Fig. S3c). Finally, the chip is removed from the chamber and submerged in a hot acetone bath, which removes PMMA and metal film from unwanted areas, leaving behind Al electrodes (Fig. S3d).
S5. Optomechanical Spectroscopy Setup
This section presents additional details for the experimental spectroscopy apparatus used for measuring the Optomechanical response from SAWs (Fig. S4a). A continuous-wave (CW) laser at 1550 nm (the carrier, ω C ) is divided into two fiber paths. Along one path, the optical field is modulated by a null-biased intensity modulator with a fixed frequency of ω 1 = 2π × 11 GHz, followed by a narrow fiber Bragg grating (FBG) to filter out the upshifted optical frequency sideband. The remaining lower frequency optical sideband serves as one of the acoustic drive tones, drive1, with frequency ω d1 = ω C − ω 1 . Similarly, along the second path, the carrier is modulated with a frequency of ω 2 = 2π × (11 + Ω) GHz and subsequently filtered with an FBG to generate the second acoustic drive, drive2, with frequency ω d2 = ω c + ω 2 . ω d1 and ω d2 are chosen such that the difference between the two (Ω = ω d2 − ω d1 ) can be continuously varied through targeted acoustic resonance frequencies (Fig.S4b).
S6. Calibrating Optical Angle of Incidence
In the paraxial limit, the off-axis optical rays axis passing through an ideal lens with a focal length of f intersects the optical axis at the focal point with an angle of incidence given by θ = aberrations) could result in significant deviations from the paraxial relation. To accurately determine the angle of incidence as a function of off-axis displacement, we develop an apparatus to image the focus of two intersecting beams and calculate the angle of incidence by observing the resulting interference pattern (Fig. S5b). An optical beam, named beam1, is first incident along the optical axis of the lens under test, which subsequently focuses on a partially reflective sample placed at the focal plane of the lens. The focused beam is aligned to the surface normal of the sample by maximizing the back-reflected beam. This configuration is assumed to represent θ = 0 ∘ . Next, beam1 is laterally displaced by a known distance away from the optical axis. A second beam, beam2, is aligned such that the partially reflected beam1 maximally couples into the beam2 collimator. This alignment ensures the two beams are focused on the same spot on the sample with equal but opposite angles of incidence. A 90:10 beam splitter samples a small portion of back reflected beams which are focused using an imaging lens on a near-infrared camera. The image observed on the camera consists of spatial fringes resulting from interference of beam1 and beam2 (inset Fig.S5c). The spatial periodicity (Λ) of the observed fringes can then be used to infer the angle of incidence on the sample through the relation Λ = λ 0 2 sin θ . The angle of incidence measured as a function of the off-axis displacement of beam1 shows excellent agreement with predictions from paraxial theory (Fig. S4c) for an aspheric lens with a focal length of f = 75 mm. Observed results confirm that geometric aberrations within the lens and other optical components within the system are small, and paraxial analysis is warranted.
S7. Estimating Optomechanical Coupling Rate
This section discusses the theory of estimating the optomechanical coupling rate ( g 0 ) from experimentally measured spectra. The system under consideration is as follows (Fig. S4b)-two optical drive fields with amplitudes a d1 and a d2 resonantly drive a SAW cavity mode with amplitude b. A third optical probe with an amplitude a pr scatters of the driven phonon mode and scatters into optomechanically scattered Stokes (a S ) and anti-Stokes sidebands (a AS ). Here two simplifying assumptions are made-first, the system is operated in the small gain limit, in which pump depletion does not occur and, as a result, incident optical fields a d1 , a d2 and a pr do not evolve in space. Second, the weak phonon drive generated from the scattered signals (a S and a AS ) and the incident probe is neglected. In the strong gain limit, for instance, within a cavity optomechanical cavity, both of these assumptions would break down and require a more general analysis. Additionally, we assume that for small angles of incidence (near normal incidence), which is the case in this work, the optical beams approximately travel along the z-axis. In the limit of large angle this analysis can be suitably modified. The equations of motions for the driven cavity phonon amplitude (b) and scattered signals are given by (a S , a AS ) 3,14,15 - The optical power in terms of field amplitude is given by- Using Eqn. 48,49 and 50, the optomechanically scattered sideband powers can be expressed as-P AS = P S = β 2 ℏ 2 ω d1 ω d2 v g 2 P d1 P d2 P pr ( 92 ) Where β = 2|g 0 | 2 l a 2 Γv g . These optomechanically scattered sidebands are spectrally separated by Ω 0 on either side of the incident probe. Assuming a local oscillator with an optical power P LO , the resulting heterodyne beat note oscillating at a frequency Ω 0 is given by-
S8. Quality factor vs. length
The quality factor of an acoustic cavity (Q) can be expressed as a function of roundtrip loss ( ), resonant frequency ( 0 ), acoustic velocity (v R ), cavity length ( ), and linewidth (Δ ) as 12 - The acoustic round trip loss can be expressed as a sum of the propagating loss and losses occurring in the acoustic mirror. Propagation losses which scale with propagation length, can be characterized through an attenuation coefficient ( ) while mirror losses ( ) are independent of length. The total loss can now be expressed as- The factor of two in Eqn. 98 is a result of acoustic fields propagating for a total round trip length of 2 and encounter the acoustic mirrors twice, one on each side of the cavity. Inserting Eqn. 98 into Eqn. 97 we get For small cavity lengths, assuming ≪ , that is, losses are dominated by mirror losses, we get- quality factor as observed in the main text, depends linearly on the cavity length.
S9. Absorption-mediated optomechanical effects
In addition to optomechanical interactions enabled by nonlinear optical forces (photoelastic and radiation pressure), devices investigated in this work, by virtue of having metallic reflectors, also display optomechanical interactions mediated by absorption within the metallic reflectors. The mechanism of this interaction is as follows-the two drive fields separated by the resonant cavity frequency act as an intensity-modulated source which, when absorbed by the acoustic reflectors, excites SAWs due to thermo-elastic expansion. Subsequently, the optical probe field can scatter off the excited SAW cavity mode to produce an analogous optomechanical response 16,17 . Note that Figure S6. Absorption-mediated optomechanical processes. a) Optomechanical response as a function of incident drive power when the optical fields have minimal overlap with the metallic reflectors with illustration inset. b) Optomechanical response as a function of incident optical power when the optical fields overlap with the metallic reflectors (inset illustration). c) Change in the resonant frequency (Δ ) of the observed cavity mode as a function of incident optical drive power. We define Δ = 0 at an incident drive power of 150 mW. d) Acoustic quality factor as a function of incident optical power with and without optical overlap with the metallic reflectors.
such processes do not require phase-matching of the optical drives which generate the acoustic fields since the acoustic fields are driven solely by time-modulated absorptive effects.
Since absorptive effects are typically accompanied by thermal effects, such residual thermal effects are used to discriminate the two optomechanical effects (parametric and absorptive).
Optomechanical response in the [100]-oriented devices is measured as a function of incident optical power when the optical beams are in the center of the cavity (parametric) (Fig. S6a), and when the optical fields have significant overlap with the surrounding metallic reflectors (absorption-mediated) (Fig.S6b). For the case where parametric interactions dominate the optomechanical response (Fig. S6a), negligible power-dependent effects are observed (Fig. S6c-S6d). In stark contrast, the resonant frequency and the quality factor vary significantly as a function of incident power for the absorption-mediated response ( Fig. S6b-d). The observed differences suggest that excess optical absorption in the metal strips modifies the characteristics of the resonant mode, consistent with spurious heating of the substrate and associated changes in local elastic properties of the SAW cavity. As demonstrated in previous works, such absorptive effects could be employed for various classical applications, including optical signal processing 16,17 . However, given their incoherent nature, absorptively mediated effects would generally be undesirable for applications requiring coherent interactions, including quantum control, transduction, and sensing.
Additionally, spurious heating resulting from absorption could prevent the robust ground-state operation of quantum systems such as qubits. These parasitic thermal effects are minimized for devices investigated in this work by ensuring g the mirror separation is much larger than the incident optical beam waist. For example, for devices employed in this work with an approximate mirror separation of 500 and an optical beam of waist diameter of 60 , the fraction of optical power spatially overlapping with the acoustic mirrors can be reduced to the level of ~10 −15 . Alternatively, any residual thermal effects can be eliminated by replacing the metallic stripe reflectors with etched grooves to confine SAWs 13,18,19 .
S10. SAW mediated cavity optomechanical devices
Here, we propose an possible iteration of a SAW-mediated cavity optomechanical system. For the SAW cavity system we assume a SAW cavity on [100]-cut GaAs optimized for an acoustic wavelength (frequency) of = 700 nm (Ω 0~ 4 GHz ) with a Gaussian waist size of 0 = 2 and a cavity length of eff~ 30 . This SAW cavity can be phase matched to = 1 optical fields incident at = 45 ∘ . Assuming the fields are TM polarized Eqn. 78 can be used to estimate traveling-wave coupling rate of 0 ~ 2 × 400 . This estimated coupling rate is approximately 250x of experimentally measured cavities which had a frequency of 500 MHz. This large enhancement is a result of the acoustic mode volume scaling with the wavelength.
Next we propose a possible optical cavity compatible with SAW cavities. Consider a coated DBR fiber-optic optical cavity enclosing a 4-GHz SAW cavity. Optical cavities like these have been commonly used membrane-type cavity optomechanical systems and CQED systems 20,21 . The ability to miniaturize these optical cavities is ideal to obtain small optical mode volumes and consequently larger cavity optomechanical coupling strengths. Assuming optical cavity lengths of tens of micron (~10 − 100 ) and a conservative optical finesse of ℱ = 10 4 (these could be as large as 10 6 ). Using Eqn. 8 we estimate the cavity optomechanical coupling rate as (for Where Γ and refer to acosutic and optical cavity decay rates. Assuming experimentally observed quality factor ≈ 10 5 and 0 ≈ 2 × 5.5 kHz the cavity optomechanical cooperativity is estimated to be om ≈ 2500 ( 103) This platform retains the high-power handling capability of bulk optomechanical systems 5,23 while offering much larger coupling rates (50-500x), devices with a smaller footprint, and simpler integrability to other quantum systems and sensing devices.
S11. Additional experimental data
Here additional experimental data is presented for the [100]-oriented device on [100]-cut GaAs ( Fig. S7) for the cases where one of the acoustic drives is turned off (green trace Fig. S7a), when the two acoustic drives are orthogonally polarized with respect to each other (TE-TM scattering, purple trace Fig. S7a), and when the optical LO is polarized orthogonally to the probe field (yellow trace Fig. S7a). These results are in excellent agreement with theoretical predictions and strongly suggest that the observed resonance results from optomechanical processes. The TM-TM scattering trace is also shown (Fig. S7b), displaying a resonance with an estimated quality factor of 120,000.
1.
Schuetz, M. J. A. et al. Universal quantum transducers based on surface acoustic waves. Figure S7: Additional optomechanical measurement data. a) Additional TE-TE scattering data including when the acoustic drives are turned off (green), the two acoustic drives are orthogonally polarized with respect to each other (purple) and when LO is orthogonally polarized (yellow) with respect to incident probe. TE-TE scattering response is also shown for reference (blue). b) Optomechanical response when the all the fields are TM polarized. | 10,795.4 | 2023-07-02T00:00:00.000 | [
"Physics"
] |
The 2014 Beatson International Cancer Conference: Powering the Cancer Machine
Here, we present a report of the 2014 annual Beatson International Cancer Conference, Glasgow, July 6–9, 2014. The theme was “Powering the Cancer Machine”, focusing on oncogenic signals that regulate metabolic rewiring and the adaptability of the metabolic network in response to stress.
Isocitrate dehydrogenase (IDH) mutations are found in a variety of cancers including acute myeloid leukaemia. IDH1 normally interconverts isocitrate and α-ketoglutarate (αKG). The mutant isoform, expressed in a subset of gliomas and acute myeloid leukaemias, instead converts isocitrate to the non-degradable 2-hydroxyglutarate (2HG). William Kaelin (Dana-Farber Cancer Institute) showed that expression of the R132H mutant IDH1 drives growth factor-independent proliferation of TF-1 leukaemia cells, which are otherwise dependent on GM-CSF. He further demonstrated that (R)-2HG, the enantiomer produced by the R132H IDH mutant, could also support GM-CSFindependent growth. It was found that (R)-2HG and (S)-2HG differentially interact with a class of enzymes that use αKG as a co-factor, including JmjC, Tet, PHD and EglN family proteins, through which they elicit epigenetic alterations and affect hypoxia-inducible factor (HIF) levels.
The mTOR pathway represents a central cellular nutrient-sensing hub, which controls many aspects of cellular metabolism. The tuberous sclerosis (TSC) tumour suppressor proteins (TSC1 and TSC2) are inhibitors of mTOR activation. Michael Hall (University of Basel) showed that deletion of TSC1 in the liver drives mTORC1 hyperactivation, resulting in elevated secretion of FGF21, a reduction in body temperature and reduced nocturnal activity. Treatment of mice with rapamycin lowered FGF21 levels and restored normal body temperature and activity levels. These studies represent a novel observation that mTOR activation could reduce body temperature, while previous studies have shown that mTOR inhibition by rapamycin could affect body size and prolong lifespan.
David Sabatini (Whitehead Institute, MIT), who discovered mTOR while a graduate student with Solomon Snyder at Johns Hopkins, discussed cell-autonomous regulation of TORC1 and focused specifically on amino acid sensing by the TORC1 pathway. In the presence of amino acids, TORC1 is recruited to lysosomes via Rag GTPase heterodimers. However, neither the Rag proteins themselves nor their guanine nucleotide exchange factor, Ragulator, directly bind to amino acids. The amino acid-sensing component of the pathway likely resides within the lysosome, where amino acids are concentrated up to millimolar levels.
Metabolic signalling
It has been documented that MYC drives the Warburg effect, which describes the propensity for cancer cells to import large amounts of glucose and excrete most of it as lactate. Lactate is transported through monocarboxylate transporters (MCTs). John Cleveland (Moffitt Cancer Center) documented that MYC could directly induce the expression of MCT1 (SLC16A1) and discussed the therapeutic potential of MCT1 inhibition in the context of MYC-driven breast cancer and Burkitt's lymphoma. MCT1 inhibition suppresses activity of the lower half of the glycolytic pathway and glutathione production, resulting in the accumulation of reactive oxygen species (ROS). MCT1 inhibition moreover suppresses the Cori cycle, through which lactate is removed from the circulation by the liver and used to drive gluconeogenesis. Treatment with the MCT1 inhibitor thus lowers circulating glucose and lactate, suggesting an additional use for such agents to treat type II diabetes.
The cell's energetic state is continuously monitored by the AMP-activated protein kinase AMPK, which responds dynamically to changes in the ATP:AMP ratio in order to suppress anabolism and promote catabolic activity, thereby maintaining energetic homeostasis. Daniel Murphy (University of Glasgow) spoke of exploiting this feedback mechanism as a strategy for targeting MYC in cancer. MYC accelerates ATP consumption, resulting in progressive activation of AMPK. Surprisingly, a related kinase ARK5 (aka NUAK1) is required for efficient activation of AMPK by MYC, and depletion of either is synthetically lethal with MYC overexpression. Inhibition of ARK5 acutely perturbs mitochondrial respiration, and genetic experiments suggest that targeting ARK5 may be effective against some forms of colorectal cancer. Reuben Shaw (Salk Institute) discussed an affinity trap strategy for isolation of physiological targets of AMPK using a SILAC approach adapted for use in vivo and noted that AMPKα1/α2 double-knockout cells also exhibit mitochondrial defects. Consistent with this observation, D. Grahame Hardie (University of Dundee) stressed that promotion of glycolysis by AMPK is an acute phenomenon whereas sustained activation promotes oxidative phosphorylation. As a negative regulator of TORC1-driven cell growth, AMPK is implicated in tumour suppression and may thus serve different roles at different stages of tumour development. Recent reports that AMPK can be activated by binding to ADP, as well as to AMP, were discussed as were arguments against a major contribution by ADP levels to AMPK regulation under physiological conditions.
Owen Sansom (Beatson Institute) discussed perturbations to mTORC1 signal transduction in APC-deleted intestinal epithelium. Loss of APC leads to deregulated β-catenin activity and consequently to elevation of MYC expression. In intestinal epithelium, this correlates with increased mTORC1 activity, as evidenced by elevated levels of phospho-S6K and phospho-4E-BP1, and MYC deletion coincident with APC loss restores mTORC1 activity to normal. Treatment of floxed APC mice with rapamycin reversibly prevents tumour progression in the intestine whereas mutation of KRas in combination with APC loss is associated with resistance to rapamycin.
Increased protein translation in tumour cells generates its own limitations as cells struggle to maintain the required supply of amino acids. Brendan Manning (Harvard University) presented evidence that TORC1 deregulation simultaneously drives increased protein turnover as well as increased protein production and both effects are sensitive to rapamycin. Accordingly, cells with deregulated mTORC1 are selectively sensitive to bortezomib-mediated inhibition of the 40S proteasome. SREBP1 activation downstream of TORC1 drives NRF1-mediated coordinated transcription of proteasome subunits in a pre-programmed adaptive response to anabolic stress.
Macropinocytosis, a process associated with Ras mutation through which cells absorb nutrients in bulk, may reflect another mechanism to replenish amino acid levels. Dafna Bar-Sagi (New York Langone Medical Center) described recent work on the role of macropinocytosis in tumour metabolism. Using a combination of microscopy and isotope tracing, it was found that macropinocytosis enables the consumption and degradation of extracellular protein to support metabolism. Various aspects of the significance of this novel mode of eating were discussed.
In addition to its roles in driving increased protein translation and elongation, John Blenis (Weill Cornell Medical College) presented evidence that TORC1 also regulates splicing efficiency. Downstream of TORC1, recruitment of S6K1, but not S6K2, occurs to the exon junction complex via specific interaction with SKAR (polymerase delta-interacting protein 3). Recruitment of both factors is required for the efficient expression of intron-containing pre-mRNAs.
Reactive Oxygen species (ROS) play a complex role in tumourigenesis being at once mutagenic and cytotoxic, depending on levels. Boudewijn Burgering (UMC, Utrecht) discussed direct sensing of ROS by Forkhead transcription factors via oxidation of cysteine residues, allowing the formation of stable intermolecular disulfide bridges between FOXO and other proteins, such as p300. Treatment of cells with hydrogen peroxide drives FOXO into the nucleus, activating a transcription programme to counteract ROS.
Metabolic pathways and stress
As reported earlier, HIFs play an important role in tumour progression. While in gliomas and leukaemias HIF levels are affected by 2HG, in clear cell renal cell carcinoma (ccRCC), HIF1α is stabilised by loss-of-function of the Von Hippel-Lindau (VHL) protein. A newly discovered mechanism of HIF activation in ccRCC was presented by Celeste Simon (University of Pennsylvania), who discussed the role of the gluconeogenic enzyme fructose-1,6-bisphosphatase 1 (FBP1). This enzyme is almost uniformly depleted in 600 ccRCC tumours examined. FBP1 opposes aerobic glycolysis in renal tubular epithelial cells. Unexpectedly, FBP1 was also found to inhibit nuclear HIF function in a non-canonical fashion by interacting with its inhibitory domain. Loss of FBP1 therefore may collaborate with the prominent VHL mutations in ccRCC to promote HIF activation and thus tumour growth.
The metabolism of cancer cells is likely affected by the stressful conditions of the tumour microenvironment. Ralph DeBerardinis (University of Texas, Southwestern) therefore emphasised the need for protocols for in vivo isotope tracing. A case study was shown where human glioblastoma (GBM) patients were infused with 13 C-glucose prior to excision of their tumour and analysis. Using this approach, it was found that in vivo, GBM tumours consistently had surprisingly high pyruvate dehydrogenase and hence citric acid cycle activity, driving net production of glutamate.
The enzyme phosphoglycerate dehydrogenase (PHGDH) catalyses the first step in the conversion of the glycolytic intermediate 3-phosphoglycerate to serine and glycine. PHGDH has been found to be amplified or overexpressed in a variety of cancers, indicating the importance of these amino acids in cancer cell proliferation. While serine and glycine can be interconverted by serine hydromethyl transferases (SHMT 1 and 2), the relative importance of these amino acids in tumour growth remains unclear. Oliver Maddocks (Beatson Institute) showed that only serine rescued reduced proliferation of cancer cells in medium lacking both serine and glycine. This corroborated the finding that most cell lines avidly consume exogenous serine and only uptake glycine when serine depletes. Serine plays an important role in folate metabolism, supplying one-carbon units for purine nucleotide synthesis.
Anne Brunet (Stanford University) discussed the metabolic regulation of aging using the C. elegans model. She focused on histone methylation as deficiencies in trimethylation of histone 3 at lysine 4 (H3K4me3) were found to increase lifespan. Regulation of H3K27me3 was also involved in lifespan determination. Brunet further discussed the interesting observation that H3K4me3deficient worms have high levels of fat and are long-lived, in apparent contrast with the also long-lived but lean worms subjected to dietary restriction. The H3K4me3deficient worms were found to have particularly high levels of the fatty acids palmitoleic and cis-vaccenic acid. This was found to be due to the high activity of the desaturase FAT-7/SCD1, and hence, it was postulated that this enzyme plays a role in longevity.
Autophagy is a cellular degradation pathway that can, depending on the context, have a tumour-suppressing or tumour-promoting role. Alec Kimmelman (Dana-Farber Cancer Institute) explained how this role of autophagy in cancer depends on both the type and timing of genetic alterations that occur. Recent experiments in a genetically engineered mouse model of pancreatic cancer (PDAC) with oncogenic KRas and homozygous loss of Trp53 showed that loss of autophagy accelerated tumour progression. Kimmelman showed that in a similar mouse model with instead sporadic LOH of Trp53, tumour growth was dependent on autophagy. In this setting, autophagy may promote tumour progression by maintaining metabolic homeostasis during nutrient starvation. Kimmelman further described how specific cargo is targeted for selective autophagy. Nuclear receptor coactivator 4 (NCOA4) was found to be highly enriched in lysosomes where it acts as a selective cargo receptor for turnover of ferritin to sustain iron homeostasis.
Daniel Peeper (Netherlands Cancer Institute) discussed a surprising role for pyruvate dehydrogenase kinase 1 (PDK1) in suppressing oncogene-induced senescence (OIS) in BRaf V600E -driven melanoma. Pyruvate dehydrogenase is rate limiting for entry of pyruvate into the TCA cycle and contributes to maintenance of senescence in pre-neoplastic nevi. Forced expression of PDK1 inhibits PDH, bypasses OIS and thereby facilitates progression to melanoma. Importantly, suppression of PDK combines with paclitaxel to drive regression of established melanomas.
Therapeutic opportunities
Susan Critchlow (AstraZeneca) described efforts to target MCTs. As reported above, MYC induces MCT1 while HIF1 could activate the expression of MCT4. Given that the tumour microenvironment permits the commensal existence of hypoxic cells that export lactate and respiring cells that could import lactate for oxidation, inhibition of MCTs would be of therapeutic interest. Highly glycolytic tumour cells depend on these transporters to export rapidly produced lactate. Inhibition of MCT1 with a novel small molecule (AZD3965) decreased the proliferation rate of Raji Burkitt lymphoma cells, both in vitro and in vivo. These observations are consistent with the work of John Cleveland.
Chi Van Dang (University of Pennsylvania) provided a background on the MYC oncogene and briefly discussed two recent publications from the Amati and Eilers groups in support of the case that MYC does indeed have specific transcriptional targets. He provided a conceptual framework for oncogene-dependent nutrient addiction, reasoning that constitutive activation of growth factor-independent cell growth and proliferation would render cancer cells addicted to nutrients to support deregulated growth. He also showed that MYC-dependent transformation systems are dependent on both glucose and glutamine, hence causing MYC-dependent cancers to be sensitive to inhibition of glycolytic and glutaminolytic enzymes. This was demonstrated by using lactate dehydrogenase A (LDHA) as an example. He further reported that survival in a transgenic model of MYCdependent liver cancer could be prolonged by treatment with BPTES, an inhibitor of glutaminase.
Pharmaceutical efforts to target mutant IDH already provide clinical proof of concept that acute myelogenous leukaemia could be treated in humans in phase I studies.
Katharine Yen (Agios) showed that targeting mutant IDH can provide clinical benefit. Mutant IDH1/2 drives 2HG accumulation, leading to histone and DNA hypermethylation, suppressing hematopoietic differentiation. Inhibitors of mutant IDH were able to reverse this hypermethylation and to induce differentiation in leukaemia models, resulting in significant survival benefit, in vivo.
Although cancer cells have been shown to use glucose and glutamine, alternative nutrient sources are less well understood. Eyal Gottlieb (Beatson Institute) discussed the role of acetate metabolism in hypoxic cancer cells. Hypoxia limits production of acetyl-CoA from glucose, which is largely converted to lactate through anaerobic glycolysis. By using siRNA screens, it was found that acetyl-CoA synthetase 2 (ACSS2), which catalyses the production of acetyl-CoA from acetate, was essential for cellular growth in hypoxic and nutrient-stressed conditions. ACSS2 is highly amplified in breast cancer, and further investigation confirmed the role of ACSS2 in driving acetate consumption for fatty acid synthesis. Silencing of ACSS2 suppresses cancer cell growth, both in vitro and in vivo. Thus, ACSS2 could be an attractive therapeutic target.
Conclusion
The conference provided the audience a broad sampling of up-to-date knowledge and state-of-the-art technological developments. Next year's Beatson Conference will be between the 5 th and 8 th of July, and its theme will be Control of Cell Polarity and Movement in Cancer. | 3,221.4 | 2014-11-28T00:00:00.000 | [
"Physics"
] |
Selective Excitation of Subwavelength Atomic Clouds
A dense cloud of atoms with randomly changing positions exhibits coherent and incoherent scattering. We show that an atomic cloud of subwavelength dimensions can be modeled as a single scatterer where both coherent and incoherent components of the scattered photons can be fully explained based on effective multipole moments. This model allows us to arrive at a relation between the coherent and incoherent components of scattering based on the conservation of energy. Furthermore, using superposition of four plane waves, we show that one can selectively excite different multipole moments and thus tailor the scattering of the atomic cloud to control the cooperative shift, resonance linewidth, and the radiation pattern. Our approach provides a new insight into the scattering phenomena in atomic ensembles and opens a pathway towards controlling scattering for applications such as generation and manipulation of single-photon states.
I. INTRODUCTION
Since Dicke's original work in 1954, the physics of collective effects and multiple scattering of light by a dense ensemble of atoms has attracted significant attention [1][2][3][4][5]. In particular, remarkable phenomena such as Anderson localization [6,7], coherent backscattering [8], random lasing [9], superradiance [1,[10][11][12], subradiance [1,11,13], and cooperative shift [14] have been explored for cold ensembles of atoms. The physical origin of these phenomena can be understood by multiple scattering of light in a collection of atoms [15]. An ideal platform for observation of these cooperative effects is an array of cold atoms with subwavelength distances [16][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35]. However, arranging atoms in arbitrary subwavelength structures is highly demanding and cannot be achieved easily [30]. On the other hand, it has been demonstrated that a cloud of cold atoms can reach densities with atomic distances less than the resonant wavelength where a strong coherent dipole-dipole interaction couples the atoms [36,37]. Therefore, the atoms interact with light collectively [36][37][38][39][40][41]. Nonetheless, the linewidth and frequency of each collective mode depends strongly on the exact spatial arrangement of the atoms, which changes randomly even in a cold ensemble of atoms. As a consequence, the atomic cloud exhibits both coherent and incoherent scattering [39,40]. Moreover, the random motion of the atoms seems to weaken the cooperative effects significantly and causes a subwavelength cloud of atoms to scatter fewer photons on average compared to a single atom, in contrast to Dicke's work [42].
In this paper, we show that the cooperative shift and resonance linewidth of a subwavelength cloud of cold atoms can be controlled by structuring the excitation field. Structured light beams enable properties and applications in both classical and quantum optics [43][44][45]. In particular, structured light offers unique control of many phenomena including angstrom localization and detec-tion of nanoparticles [46][47][48][49], Kerker effects and directional scattering [50][51][52], counter-intuitive optical pulling and lateral forces [53,54], and nonlinear microscopy [55], among other feats [43][44][45]56]. However, the potential of structured light to manipulate cooperative effects remains unexplored.
We introduce a multipolar decomposition and demonstrate that both coherent and incoherent scattering of a subwavelength atomic cloud can be fully characterized by electric and magnetic multipole moments. Using conservation of energy and multipolar decomposition, we find analytical expressions that relate fluctuating and averaged electric and magnetic polarizabilities. Then, by employing superposition of four plane waves, we selectively excite the electric and magnetic multipole moments. As a result, we can control the cooperative shift and resonance linewidth of the atomic cloud by changing the relative phase between the plane waves.
II. WEAK EXCITATION LIMIT
We consider a subwavelength cloud of atoms uniformly distributed in a sphere with radius R [see Fig. 1(a)]. The atomic cloud is assumed to be dense, i.e., ρ/k 3 > 1, where ρ = N/V is the spatial density, N is the number of atoms, and V is the volume of the atomic cloud. We assume cold atoms without nonradiative losses and with a negligible Doppler effect compared to their radiative linewidth as in experimental realizations [36,37]. The atomic cloud is investigated in the weak excitation limit such that the atomic transition is far below the saturation limit. Thus, each atom in the cloud is modeled by an isotropic electric polarizability given by α (ω) = −(α 0 Γ 0 /2)/ (ω − ω a + iΓ 0 /2), where Γ 0 is the radiative linewidth, ω a is the atomic transition angular frequency, and ω − ω a ω a is the detuning of the illumination from the atomic resonance. k = ω/c is the wavenumber of the illumination [57][58][59][60].
III. COHERENT AND INCOHERENT MULTIPOLE EXPANSION
We assume that the atoms in the subwavelength cloud have random spatial distributions. We consider many realizations for which the position of the atoms are changed with a uniform probability distribution [see Fig. 1(a)]. The atomic cloud is illuminated by plane waves and the total scattered field can be decomposed into two parts: E sca = E sca + δE sca , where E sca and δE sca are the coherent (ensemble-averaged) and incoherent (fluctuating) fields, respectively [39,40,42]. The induced polarization current density of the atomic cloud is given by [31,32,61,62], where δ is the Dirac delta function and p (r i ) is the induced electric dipole moment of the ith atom placed at r i [see Fig. 1(a)]. Now by employing multipole decomposition of the current J (r, ω) [63,64], we can calculate the induced effective multipole moments of the atomic cloud which can be decomposed into coherent and incoherent parts (see Appendix B for details): µν are the effective electric dipole (ED), magnetic dipole (MD), electric quadrupole (EQ), and magnetic quadrupole (MQ) moments of the atomic cloud, respectively. The symbol represents an ensemble-average.
Using the induced multipole moments, we obtain the electric and magnetic dipole and quadrupole polarizabilities where α i and δα i are the coherent and incoherent polarizabilities, respectively.
IV. SINGLE PLANE WAVE EXCITATION
We consider a subwavelength atomic cloud composed of N atoms as shown in Fig. 1 (a) and illuminated by an x-polarized plane wave E inc = E 0 e ikz e x propagating in the z direction. The ensemble-averaged induced multipole moments are given by (see Appendix C for details) where, Q E and Q M are tensors of rank two, e µ e ν is the unit dyad, µ, ν ∈ x, y, z, and H 0 is the amplitude of the magnetic field of the plane wave. Having the polarizabilities, we can calculate the coherent and total scattering cross sections by (see Appendix C) where α 0 = 6π/k 3 (α 0 = 120π/k 5 ) is related to the radiation loss of a dipole (quadrupole) moment. Equation (4), which is the first main result of this paper, allows us to calculate the coherent and incoherent scattering cross sections of the atomic ensemble. Note that the incoherent scattering cross section is given by C incoh sca = C total sca −C coh sca . We consider now a spherical subwavelength atomic cloud with radius R = 0.2λ a composed of 25 atoms which can be fully characterized by dipole and quadrupole moments. Figure 1 (c) shows that the atomic cloud exhibits strong electric and magnetic responses. Figure 1(d) shows the coherent, incoherent and total scattering cross sections (normalized to λ 2 /2π) calculated from Eq. (4), and the contribution of different multipole moments as a function of frequency detuning. It can be seen that the maximum total scattering cross section of the ensemble is approximately equal to the scattering of a sin-gle atom, even though the atomic cloud consists of 25 atoms [42]. Furthermore, the maximum cross section of coherent scattering is much smaller than that of a single atom [39,40,42].
In order to establish the relation between the coherent and incoherent polarizabilities, we focus only on the dipolar response of the atomic cloud for simplicity. The supplementary material provides the relations for other multipole moments. Note that the ensemble-averaged polarizability of a spherical atomic cloud is isotropic, i.e. α i = α i I, where I is the identity matrix. Therefore, all the diagonal matrix elements of the induced electric dipole polarizability are identical and are represented by α D ED . All the off-diagonal elements, α OD ED , are also identical. Therefore, the induced electric dipole polarizability can be written as where J is the all-ones matrix. Note also that α ED = α ED I and α OD we get (see Appendix B) Equation (6) is the second main result of this paper; it shows how the fluctuations of the polarizabilities can be obtained from the ensemble-averaged values (see Appendix B) for the details of the derivation and similar expressions for MD, EQ, MQ polarizability tensors). In Fig. 2(a)-(b), we plot the coherent and incoherent electric dipole polarizabilities retrieved from multipolar decomposition. In contrast to the off-diagonal terms, the diagonal term exhibits a non-zero ensemble-averaged polarizabiltiy. Note that the components of the electric dipole moment tensor satisfy Eq. (6) [see Fig. 2(c)]. Using the duality of the electric and magnetic fields in Maxwell's equations and conservation of energy, we can obtain a similar relation for the components of the magnetic polarizability tensor (i.e., replacing MD with ED in Eq. (6), see Appendix B). Figure 2(d)-(e) shows the coherent and incoherent components of the magnetic polarizabilities. The magnetic response is smaller than the electric one. The induced multipole moments exhibit asymmetry in their resonance lineshape which explains the non-Lorentzian lineshape of the scattering cross sections in Fig. 1(d).
V. SELECTIVE EXCITATION OF ELECTRIC DIPOLE OR MAGNETIC QUADRUPOLE MOMENT
Although the constituent atoms have only electric dipole transitions, the entire atomic cloud can support higher order electric and magnetic multipole moments [see Fig. 1 (d)]. Here, we show that it is possible to selectively excite a particular multipole moment by tailoring the excitation field. To this end, we consider an excitation by four plane waves with TE polarization, i.e., . Hence, the ensemble-averaged induced multipole moments are given by (see Appendix D) Equation (7) clearly shows that by changing the relative phase φ, one can control which multipole moment to be excited. Consequently, the scattering cross sections are given by (see Appendix D) Equations (7) and (8) are the third main result of this paper which show that the induced dipole moments and the scattering cross sections can be controlled by a simple four-beam configuration and the relative phase φ between the plane waves. Figure 3(b) plots the scattering cross section as a function of the relative phase and the frequency detuning. Interestingly, as can be seen from Fig. 3(c) and (d), the cooperative resonance linewidth can also be tuned by varying the phase φ due to selective exciation of different multipole moments.
We note three different scenarios based on the relative phase φ: (i) At φ = 2mπ, where m is a non-negative integer, only the electric dipole moment of the atomic cloud is excited [see Eq. (7) and Fig. 3(c)]. In this case, the atomic cloud exhibits an omnidirectional radiation pattern.
(iii) At 2mπ < φ < (2m + 1) π, all multipoles can be excited, see for example Fig. 3(e) for φ = (2m + 1)π/2. Thus, one can selectively excite the electric dipole or magnetic quadrupole moment of the atomic cloud by just controlling the relative phase of the plane waves with TE polarizations and achieve arbitrary radiation patterns.
VI. SELECTIVE EXCITATION OF MAGNETIC DIPOLE OR ELECTRIC QUADRUPOLE MOMENT
To selectively excite magnetic dipole or electric quadrupole, we employ superposition of four plane waves with TM polarization: H inc = H 0 /4 4 n=1 e i(kn·r+φn) e y , where k n and φ n are defined similar to the TE polarization, see the previous section and Fig. 4(a). The coherent and total scattering cross sections are thus given by (see Appendix D) Figure 4(b) plots the total scattering cross section as a function of the relative phase φ and the frequency detuning. As shown in Fig. 4(c) and 4(d), by varying the phase φ, one can excite different multipole moments and thus control the cooperative shift and resonance linewidth. In particular, the MD and EQ moments can be selectively excited by the TM polarized plane waves. As a consequence, the atomic cloud will scatter light in a selective direction depending on the relative phase between the plane waves.
Arrays of cold atoms with subwavelength spacing scatter light coherently and thus have been modeled by effective electric and magnetic multipole moments [31][32][33]. In contrast, an atomic cloud composed of randomly distributed atoms exhibits not only coherent, but also incoherent scattering due to the motion of the atoms [39,40,42]. In this paper, we showed that the multipolar decomposition can model not only the coherent, but also the incoherent response of the atomic cloud accurately. We also demonstrated that the ensemble-averaged polarizabilities are adequate to model the response of the atomic cloud. Furthermore, using superposition of plane waves, we showed that one can selectively excite the induced electric and magnetic multipole moments and thus manipulate the resonant linewidth and cooperative shift of the ensemble, as well as its radiation pattern. Our study paves the way towards controlling cooperative effects in atomic systems through structured light [43,44]. Our approach to control the cooperative effects is not restricted to subwavelength cold atomic clouds and can be realized both experimentally and theoretically in different systems of interacting quantum emitters including ultracold quantum metasurfaces [30], nanoscale atomic vapor layer [65], two-dimensional semiconductors heterostructures [66], and atomic arrays in waveguides and cavities [67]. Let us consider an atomic cloud composed of neutral atoms with only electric dipole transition moments and illuminated by a plane wave [see Fig. 1 (a)]. The atoms confined in a volume smaller than the wavelength of the resonant light, i.e., D < λ. We consider the weak-excitation limit where the atomic response is isotropic and linear. The electric polarizability of each atom amounts to α where Γ 0 is the radiative linewidth of the atomic transition at frequency ω a , and ω − ω a ω a represents the frequency detuning between the illumination and the atomic resonance, α 0 = 6π/k 3 and k is the wavenumber [57,58]. We assume elastic scattering events and therefore the non-radiative decay rate is zero, i.e., Γ nr = 0. The induced dipole moment of the ith atom p (r i ) = 0 αE loc (r i ) can be obtained by using the coupled-dipole equations [57,58,60] where E inc (r i ) is the incident field at the position r i of the atom, and α is the atomic polarizability. The total field at the position of the ith atom E loc (r i ) is the sum of the incident field and the scattered field from the other atoms. The electric dipole at position r j radiates an electromagnetic field which when measured at r i can be calculated from G (r i , r j ) p (r j ), where G (r i , r j ) is Green's tensor given by [61,62] G (r i , r j )= 3 2α 0 0 e iζ g 1 (ζ)Ī + g 2 (ζ) nn , where I is the identity dyadic, n = ri−rj |ri−rj | , and ζ = |k (r i − r j )| [31,32]. Having the induced dipole moment of each atom, we can define the induced displacement current J (r, ω) = −iω N i=1 p (r i ) δ (r − r i ), where δ is the Dirac delta function, and p (r i ) is the induced electric dipole moment of the ith atom at r = r i [see Fig. 1 (a)]. Here, we assumed e −iωt as a time harmonic variation.
Appendix B: Multipole expansion and cross sections
Coherent and incoherent multipole moments
In this subsection, we present expressions for the effective induced electric and magnetic moments in Cartesian coordinates [31].
Using the multipole expansion of the induced current J (r, ω), the induced effective multipole moments of the atomic cloud (at the center r = 0) can be calculated [63,64]: where, µ, ν ∈ x, y, z. The quantities d E µ , d M µ , Q E µν , and Q M µν are the electric dipole (ED), magnetic dipole (MD), electric quadrupole (EQ), and magnetic quadrupole (MQ) multipole moments, respectively. j n are the spherical Bessel functions. Note that Q E = µ,ν Q E µν e µ e ν and Q M = µ,ν Q M µν e µ e ν are tensors of rank two and e µ e ν is unit dyad. We consider N R realizations for which the positions of the atoms are changed with a uniform probability distribution in a spherical volume. Then, the induced multipole moments of the atomic cloud can be decomposed into coherent (ensemble-averaged) and incoherent (fluctuating) parts: where µ, ν = x, y, z. The symbols · represent the ensemble-averaged multipole moments. Note that the incoherent multipole moments are related to the quasiisotropic speckle originating from the random positions of the atoms in the spherical cloud.
Coherent and incoherent cross sections
In this subsection, we derive coherent and incoherent scattering and extinction cross sections using the induced electric and magnetic multipole moments in Eqs. (B1)-(B2). The total scattering cross section can be decomposed into coherent and incoherent parts, i.e., C total sca = C coh sca + C incoh sca which are and given by [63,64] C coh sca = and the extinction cross section of the cloud is given by [63,64] where Z 0 is the impedance of free space, c is the speed of light in free space, and r = µ r µ e µ = xe x + ye y + ze z . Note that in Eq. (B5) and also in the remainder of the supplementary material, E and H show the incident fields, i.e. we omit the subscript "inc" for simplifying the notation.
Conservation of energy: coherent and incoherent cross sections
According to the conservation of energy, the extinction cross section is equal to the sum of the coherent and incoherent scattering cross sections, i.e., C ext = C total sca = C coh sca +C incoh sca . Therefore, from Eqs. (B3)-(B5), we obtain the following relations between the coherent and incoherent multipole moments: where α 0 = 6π/k 3 (α 0 = 120π/k 5 ) is related to the radiation loss of a dipole (quadrupole) moment.
Coherent and incoherent dipole polarizabilities
From Eq. (B6) the relation between the coherent and incoherent electric dipole polarizabilities is found to be: For a spherical cloud, the averaged electric polarizability tensor is isotropic and reads as α ED = α ED I, where I is the identity matrix. Now by substituting For an x-polarized plane wave excitation E = E 0 e ikz e x , we get Using the symmetry of the electric dipole polarizability tensor δα ED zx = δα ED yx = δα ED yz and α ED = α ED I, the electric dipole polarizability tensor can be written as where α ED = α ED I and α OD ED = 0. I and J are the identity and all-ones matrices, respectively. Therefore, Eq. (B9) can be simplified as and we obtain Eq. (6) of the main text. Using the duality in Maxwell's equations, a similar expression for a magnetic polarizability can be obtained: (B11)
Coherent and incoherent quadrapole polarizabilities
In this subsection, we obtain the relation between the coherent and incoherent quadrupole polarizabilities using Eq. (B6): The electric quadrupole tensor is a tensor and is given by Therefore, Q E has five independent components in Cartesian coordinates. These five independent components are represented by Q E xx , Q E xy , Q E xz , Q E yy , Q E yz . Now, for a single plane wave excitation E = E 0 e ikz e x , we have ∇E + E∇ = ikE 0 (e x e z + e z e x ), and we get Using the duality in Maxwell's equations, a similar expression can be found for coherent and incoherent magnetic polarizabilities: Appendix C: Single plane wave illumination In this section, we provide analytical expressions for coherent and incoherent scattering cross sections of an atomic cloud when illuminated by a single plane wave.
Ensemble-averaged multipole moments
Let us consider a cloud illuminated by a plane wave E = E 0 e ikz e x propagating in the z direction, where e x is the unit vector in the x direction. The ensemble-averaged induced multipole moments of the cloud at r = 0 are given by where α ED ( α MD ) and α EQ ( α MQ ) are ensembleaveraged electric (magnetic) dipole and quadrupole polarizabilities, respectively. E and H in Eq. (C1) are the incident electric and magnetic fields, respectively.
Coherent and incoherent cross sections
In this subsection, we find the scattering cross sections as a function of ensemble-averaged dipole and quadrupole polarizabilities. By substituting Eq. (C1) into Eqs. (B3) and (B5) we obtain After applying some simple algebra and using α 0 = 6π/k 3 and α 0 = 120π/k 5 , we obtain Eq. (4) of the main text: Using above equations, we can calculate incoherent scattering cross section from C incoh sca = C ext − C coh sca .
where k 1 · r = −k 2 · r = k x x + k z z, k 3 · r = −k 4 · r = k x x − k z z, and k x = ksinψ, k z = kcosψ. Thus, the total magnetic field at r = xe x + ye y + ze z can be written as and the corresponding electric field is given by Using the above electric and magnetic fields and their derivatives, we can obtain the ensemble-averaged induced multipole moments at the center of the cloud (r = 0) − H 0 k 2 α MQ cosψ (e y e z + e z e y ) sin φ 2 cos φ 2 .
Now by substituting Eq. (D3) into Eqs. (B3), we obtain And by substituting Eq. (D3) into Eqs. (B5), the total (sum of incoherent and coherent) scattering (extinction) cross section can be obtained Finally, in order to selectivity excite different multipole moments, we assume ψ = π/4 and consider two cases: i) φ = 2mπ: the induced moments read as thus, only the magnetic dipole moment is excited and the scattering cross sections read as ii) φ = (2m + 1)π: the induced moments read as thus, only the electric quadrupole moment is excited and the scattering cross sections read as
Four plane waves with TE polarization
In this subsection, we consider an atomic cloud when illuminated by four plane waves with TE polarization. The electric fields of the plane waves are defined as where k 1 · r = −k 2 · r = k x x + k z z, k 3 · r = −k 4 · r = k x x − k z z, and k x = ksinψ, k z = kcosψ. Thus, the total electric field at r = xe x + ye y + ze z can be written as and the corresponding magnetic field is given by I. Selective excitation of subwavelength atomic clouds using superposition of four plane waves. The first column shows the polarization and the relative phase of the waves. See Fig. 3 (a) of the main text for the geometry. The 2nd to 5th columns show the fields' amplitudes and their gradient at the center of the atomic cloud, based on which a particular multipole moment is excited as shown in the last column. TM: φ = (2m + 1) π 0 0 ikE0 (ezez − exex) 0 EQ: Q E = 1 2 ε0 α EQ (∇E + E∇)| r=0 Using the above electric and magnetic fields and their derivatives, we obtain the ensemble-averaged induced multipole moments for four plane waves at the center of the cloud (r = 0) And by substituting Eq. (D11) into Eq. (B5), the total scattering (or extinction) cross section can be obtained Finally, in order to selectivity excite different multipole moments, we assume ψ = π/4 and consider two cases: i) φ = 2mπ: the induced moments read as thus only the electric dipole moment is excited we excite and the scattering cross sections read as ii) φ = (2m + 1)π: the induced moments read as Table I presents a summary of selective excitation with four plane waves. It shows the fields amplitudes and their gradients at the center of the cloud for different polariza-tions and phases of four plane waves. The last column indicates which multipole moment is excited based on the field amplitudes and gradients at the center. | 5,801.6 | 2021-02-22T00:00:00.000 | [
"Physics"
] |
Mapping the Hyaluronan-binding Site on the Link Module from Human Tumor Necrosis Factor-stimulated Gene-6 by Site-directed Mutagenesis*
Link modules are hyaluronan-binding domains found in extracellular proteins involved in matrix assembly, development, and immune cell migration. Previously we have expressed the Link module from the inflammation-associated protein tumor necrosis factor-stimulated gene-6 (TSG-6) and determined its tertiary structure in solution. Here we generated 21 Link module mutants, and these were analyzed by nuclear magnetic resonance spectroscopy and a hyaluronan-binding assay. The individual mutation of five amino acids, which form a cluster on one face of the Link module, caused large reduc-tions in functional activity but did not affect the Link module fold. This ligand-binding site in TSG-6 is similar to that determined previously for the hyaluronan receptor, CD44, suggesting that the location of the interaction surfaces may also be conserved in other Link module-containing proteins. Analysis of the sequences of TSG-6 and CD44 indicates that the molecular details of their association with hyaluronan are likely to be significantly different. This comparison identifies key sequence positions that may be important in mediating hyaluronan binding, across the Link module superfamily. The use of multiple sequence alignment and molecular modeling allowed the prediction of functional residues in link protein, and this approach can be extended to all members of the superfamily.
Hyaluronan (HA) 1 is a ubiquitous high molecular weight glycosaminoglycan, composed of repeating disaccharides of Dglucuronic acid and N-acetyl-D-glucosamine, which has diverse biological roles in vertebrates. For instance, this polysaccharide, a vital structural component of extracellular matrix (e.g. cartilage, skin, and brain), is required for successful embryonic development (1), and is involved in cell migration (2,3). The wide range of functional activities derives from the large num-ber of HA-binding proteins, which can be intracellular, secreted, or on the cell surface. Many of the extracellular hyaladherins contain a common domain of ϳ100 amino acids, termed a Link module, which is involved in HA binding (4,51). This domain was first described in link protein (containing an immunoglobulin module and two contiguous Link modules (5)), which together with HA and aggrecan forms huge multimolecular complexes that provide articular cartilage with its load bearing properties. Aggrecan interacts with HA via its N-terminal G1 domain, and this has the same organization of modules as link protein (6); it also has another pair of tandem Link modules within its G2 domain, but these do not bind HA (7)(8)(9). In the aggrecan G1 domain and link protein it has been found that both Link modules participate in HA binding (9,10).
CD44 is the primary receptor for HA and has a range of functions such as anchoring the extracellular matrix to the surface of cells (e.g. in cartilage (11)) and mediating the migration of activated lymphocytes to sites of inflammation (3). CD44 has a single Link module that forms part of its HA-binding domain (12), and functionally important amino acids within this region have been identified (13,14).
The inflammation-associated protein TSG-6 (the secreted product of tumor necrosis factor-stimulated gene-6 (15)) contains a single Link module. TSG-6 has been implicated in the regulation of leukocyte migration (16,17), and its pattern of expression and ligand specificity indicates that it may be involved in extracellular matrix remodeling (18 -20). Previously, we have expressed the Link module from human TSG-6 in Escherichia coli (21,22) and shown that this material (referred to here as Link_TSG6) interacts with HA using a microtiter plate assay (18,19,23). In addition, nuclear magnetic resonance (NMR) spectroscopy on Link_TSG6 has revealed that the Link module is comprised of two ␣-helices and two triplestranded -sheets arranged around a large hydrophobic core (23).
Here, we report the production of 21 Link_TSG6 mutants and their characterization by NMR spectroscopy and a HAbinding assay. Five amino acids, which are clustered on one face of the TSG-6 Link module, were identified as having an important role in binding. Comparison of the HA interaction surfaces in TSG-6 with those determined previously for CD44 has allowed the prediction of functional residues in link protein and other members of the Link module superfamily.
Expression, Purification, and Characterization of Wild-type and Mutant Link_TSG6 -Wild-type and mutant proteins were expressed, refolded, and purified to homogeneity as described previously (21,22). Mutants (at 5-7 pmol/l in 50% (v/v) acetonitrile, 0.2% (v/v) formic acid) were analyzed by electrospray ionization mass spectrometry on a Micromass BioQ II-ZS spectrometer calibrated with horse heart myoglobin (average molecular mass of 16,591.48 Da) and scanned over the mass range 600 -1,600 Da.
NMR Spectroscopy-Lyophilized wild-type and Link_TSG6 mutants were resuspended in 600 l of 10% (v/v) D 2 O, 0.02% (w/v) NaN 3 and adjusted to pH 6.0 with NaOH to give concentrations in the range 0.4 -1.3 mM. One-dimensional NMR spectra (128 scans) were recorded at 25°C on a home-built/GE Omega spectrometer, operating at a frequency of 500 MHz. The NMR data were processed using FELIX 2.3 (Biosym Inc.), applying sine bell and Gauss-Lorentz window functions for resolution enhancement. Proton chemical shifts were referenced to H 2 O at 4.74 ppm.
Protein Concentration-The concentrations of the wild-type and mutant proteins, used in the NMR analysis, were determined by amino acid analysis (24) on an Applied Biosystems 420A derivatizer/analyzer and on-line narrow bore high performance liquid chromatography system (Applied Biosystems). These "stock solutions" were stored at 4°C and used subsequently in the HA-binding assays (see below).
Biotinylation of HA-Rooster comb HA (Sigma) was biotinylated using a modification 2 of the method of Yu and Toole (25). Briefly, 20 l of 250 mM biotin-LC-hydrazide (Pierce and Warriner, Chester, U. K.) in dimethyl sulfoxide was added to 1 ml of 5 mg/ml HA (in 0.1 M MES, pH 5.5) followed by 13 l of 25 mg/ml EDAC in 0.1 M MES, pH 5.5, and the reaction mixture was stirred at room temperature overnight. The sample was dialyzed extensively against water and particulate material removed by centrifugation (12,000 ϫ g for 1 min). The concentrations of HA samples (either biotinylated or unmodified) were determined using the metahydroxybiphenyl reaction (26) relative to standards made from rooster comb HA dried in vacuo over cobalt chloride.
Analysis of HA Binding-The HA-binding activities of wild-type and Link_TSG6 mutants were determined colorimetrically using a microtiter plate assay that measures the binding of biotinylated HA to proteincoated wells (18,19,23). All dilutions, incubations, and washes were performed in 50 mM sodium acetate, 100 mM NaCl, 0.05% (v/v) Tween 20, pH 6.0, unless otherwise stated; it has been shown previously that the interaction between Link_TSG6 and HA is maximal at pH 6.0 (19). Maxisorp F96 plates (Nunc) were coated overnight with 200 l/well protein solution (25 pmol/well; protein concentrations were determined for stock solutions as described above) in 20 mM Na 2 CO 3 , pH 9.6. Control wells were incubated with buffer alone and then treated as for sample wells. The coating solution was removed and the plates washed three times. Nonspecific binding sites were blocked by incubation with 1% (w/v) bovine serum albumin for 90 min at 37°C followed by three more washes. A 200-l solution containing 12.5 ng of biotinylated HA was added to each well, in the absence or presence of 2,500 ng of unmodified rooster comb HA and incubated at room temperature for 4 h. Plates were washed three times, and 200 l of a 1:10,000 dilution of ExtraAvidin alkaline phosphatase (Sigma) was added and incubated for 30 min. After three more washes, wells were incubated for 10 min with 200 l of a 1 mg/ml solution of disodium p-nitrophenyl phosphate (Sigma) in 100 mM Tris-HCl, 100 mM NaCl, 5 mM MgCl 2 , pH 9.3. The absorbance at 405 nm was determined on an MKII Titertek Multiscan Plus plate reader. All absorbance measurements were corrected by subtracting values from uncoated control wells. Each interaction was investigated in quadruplicate in three separate plate assays (i.e. n ϭ 12). 2 S. Banerji, personal communication.
FIG. 1. DNA and translated protein sequences of the VII-6-1mut8-5 plasmid used in site-directed mutagenesis. The amino acids of Link_TSG6 targeted for mutagenesis are indicated in bold; residues are numbered according to the sequence of the expressed Link module (21). The sequences of oligonucleotides, denoted 1-18, used in the mutagenesis reactions are shown in italics, aligned below the wild-type sequence, with the altered nucleotides in bold. Oligonucleotides 1 and 2 are selection primers that change the wild-type NcoI restriction site (CCATGG) to NdeI (CATATG). Oligonucleotide 1 is used in conjunction with the mutagenesis primers 3-18, whereas oligonucleotide 2 is a combined selection and mutagenesis primer. Oligonucleotides 5, 7, and 14 are degenerate primers, each with the potential to produce four different mutant sequences.
Isothermal Titration Calorimetry-The interactions between six Link_TSG6 mutants and an octasaccharide of HA (HA 8 ) were investigated on a MicroCal VP-ITC instrument at 25°C in 5 mM Na-MES, pH 6.0. A 335 M solution of HA 8 , prepared by digestion of human umbilical cord HA with ovine testicular hyaluronidase and purified by gel filtration and ion exchange chromatography, 3 was added in 5-l injections (28 in total) to protein (ranging from 7.3 to 25.7 M) in the 1.4-ml calorimeter cell. Data were fitted to a one-site model by nonlinear least squares regression with the Origin software package, after subtracting the heats resulting from the addition of HA 8 into buffer alone as described previously (27).
Homology Modeling-The three-dimensional structures of the two Link modules from Lp1 and Lp2 were each modeled using the program Modeller4 (38) on the basis of the coordinates of the human TSG-6 Link module (23) and the alignment in Fig. 2. In each case, 100 independent models were generated, and the model with the lowest energy (based on the value of the molecular probability density function) was chosen. XPLOR version 3.8 (39) was used to add hydrogen atoms and disulfide bonds and to carry out energy minimization and molecular dynamic simulations with the CHARMm22 force field (40). Briefly, three rounds of energy minimization were carried out with the backbone fixed. In the first, the electrostatic term was excluded, and a purely repulsive nonbonded force field was used. In the following rounds, the full CHARMm22 force field, which included a Lennnard-Jones potential, was used, but electrostatic interactions were only included in the third round. This was followed by molecular dynamics where all atoms of residues 1-71 and 76 -99 in Lp1, and amino acids 1-11, 14 -39, and 43-96 in Lp2 were fixed so that only regions corresponding to insertions or deletions, when compared with Link_TSG6 (Fig. 2), were free to move. A final energy minimization was performed using the full CHARMm22 force field as in round three above. PROCHECK (41) was used to determine that the number of sterochemical violations in the final models were similar to that of the solution structure of TSG-6 Link module.
RESULTS AND DISCUSSION
Residue Selection and Mutagenesis-15 sequence positions of Link_TSG6 were selected for mutagenesis. Eight of these residues (i.e. Lys-11, Tyr-12, Tyr-59, Lys-72, Asp-77, Tyr-78, Arg-81, and Glu-86), which form a coherent patch on the Link module surface, were chosen because they have been predicted previously to be involved in HA binding (23). Arg-8 was picked because it is adjacent to this patch, and a basic amino acid at this position is involved in HA binding in human CD44 (14). Asp-89, which is completely buried in the hydrophobic core, was chosen because it could be involved in mediating the unusual pH dependence of HA binding to Link_TSG6 (19,42). Asn-67, Phe-70, and Ile-75 are located on the 4/5 loop and have been demonstrated to be perturbed significantly on binding to HA 8 (27). Glu-6 and Lys-13 were selected because they have been implicated in TGS-6-mediated inhibition of neutrophil migration in an in vivo model of inflammation (16).
In total, 21 Link_TSG6 mutant constructs were generated and verified by DNA sequencing (listed in Table I). All of the mutants were found to express at levels similar to wild-type Link_TSG6 (21). Electrospray ionization mass spectrometry revealed that the mutant proteins had molecular masses that differed by less than 1.5 Da from their theoretical masses (data not shown).
Structural Characterization of Link_TSG6 Mutants-Onedimensional NMR spectroscopy was used to assess the effect of FIG. 2. Alignment of Link_TSG6 with Lp1 and Lp2. Lp1 and Lp2, which correspond to residues 159 -257 and 259 -354, respectively, in human link protein (32), are aligned with residues 1-97 of Link_TSG6 (23). This alignment (23) was used for molecular modeling of Lp1 and Lp2 on the basis of the Link_TSG6 coordinates.
TABLE I
HA-binding activities of Link_TSG6 mutants and the effect of mutagenesis on protein structure Each mutant was analyzed by one-dimensional NMR spectroscopy to determine the effect of mutagenesis on the Link module structure. As shown in Fig. 3, mutants can be classified into three groups (wild-type fold, perturbed fold, and unfolded). The HA binding activities of all mutants were determined and compared with wild-type protein. Only mutants that have wild-type folds (see Fig. 4) provide information on whether a particular amino acid is involved in HA binding. each of the mutations on the Link module fold. Wild-type Link_TSG6 has a characteristic one-dimensional NMR spectrum (Fig. 3), with well dispersed signals (e.g. in the amide region ϳ7.5-9.5 ppm) and the methyl resonances from Val-57 shifted to high field (-0.5 and -1.1 ppm) because of their proximity to Trp-51 and Trp-88 in the hydrophobic core (19,23). 13 of the mutants (Table I) give NMR spectra (data not shown) that are essentially identical to that of the wild-type protein (e.g. Y59F illustrated in Fig. 3). Thus, it can be concluded that these amino acid substitutions have no effect on the Link_TSG6 fold. However, other mutations give rise either to unfolded protein (e.g. E86A; Fig. 3) or a Link module that, while folded, is structurally different from that of wild-type Link_TSG6 (e.g. Y78S; Fig. 3). Therefore, all of the mutants can be classified as having a wild-type fold, a perturbed fold, or being unfolded on the basis of their NMR spectra (Table I).
Clearly, only the mutants that have wild-type folds can be used to provide information on the role of a particular amino acid sequence position in ligand binding.
HA-Binding Experiments-The HA-binding activities of wild-type and mutant Link_TSG6 were analyzed using a microtiter plate assay that we have described previously (18,19,23). For wild-type Link_TSG6, maximum binding of biotinylated HA (12.5 ng) was seen when protein was coated at 25 pmol/well (data not shown). Amino acid analysis of coating solutions, following incubation overnight in the microtiter plate, indicated that greater than 90% of the protein (wild-type and eight mutants tested) was adsorbed onto the well (data not shown). From Fig. 4, which shows the experimental data for the mutants with wild-type folds, it can be seen that the binding of biotinylated HA is highly specific because this is greatly reduced by the presence of unlabeled HA. Some mutants show a degree of nonspecific (i.e. non-competable) binding, but in the worst case (N67L) this is less than 15% of the value for wildtype protein determined in absence of competitor (Fig. 4). Table I shows the HA binding activities of all of the Link_TSG6 mutants (as a percentage of wild-type binding). The mutants that have either a perturbed fold (i.e. N67S, I75A, D77A, Y78S, and D89A) or are unfolded (i.e. R81A, E86A, and E86S) have a greatly reduced HA binding function (with between 2 and 26% of wild-type binding). Because these mutations affect the Link module structure (see above) it is impossible to tell whether the loss of activity results from the residue being involved in binding or from the perturbation of the interaction surface. Therefore, they provide no information on role of a particular amino acid in binding.
The binding data for the 13 mutants with wild-type folds are presented in Fig. 4. From this it can be seen that E6A, R8A, K13A, N67L, and K72A have functional activities similar to that of wild-type protein with 93, 108, 89, 79, and 88% of FIG. 3. One-dimensional 1 H-NMR spectra of wild-type and mutant proteins. The wild-type Link_TSG6 (WT) has a well dispersed spectrum with the methyl resonances of Val-57 (V57), which forms part of the stable hydrophobic core, being shifted to high field. Y59F has a NMR spectrum that is essentially identical to the wild-type and therefore can be classified as having a wild-type fold. E86A has poorly dispersed resonances, with no high field-shifted Val-57 methyls, and the spectrum is characteristic of that of an unfolded protein. The spectrum of Y78S, although having some features of a folded protein (i.e. with high field shifted methyls), is significantly different from that of wild-type. This mutant therefore can be classified as having a perturbed fold.
wild-type binding, respectively (Table I). Therefore, it can be concluded that Glu-6, Arg-8, Lys-13, Asn-67, and Lys-72 are unlikely to be involved in the interaction of Link_TSG6 with HA.
The mutation of Lys-11, Tyr-12, Tyr-59, Phe-70, or Tyr-78 (i.e. mutants K11Q, Y12F, Y12V, Y59F, Y59S, F70V, Y78F, and Y78V) each leads to a large reduction in activity (7-30% of wild-type binding; see Table I). Table II shows that the mutation of these amino acids also leads to a significant reduction in the affinities of HA binding in solution, whereas K72A exhibits wild-type activity. This clearly demonstrates that the results obtained with the microtiter plate assay are reliable and are not an artifact caused by immobilization of the protein on the plate. These data indicate that Lys-11, Tyr-12, Tyr-59, Phe-70, and Tyr-78 are likely to participate directly in HA binding. For example, Lys-11 could be making an ionic interaction with a carboxyl group of HA; basic amino acids have been implicated previously in protein-HA interactions (13,14,(43)(44)(45). Recent calorimetry studies indicate that the interaction of Link_TSG6 with HA 8 involves the formation of one or two salt bridges (46).
The conservative replacement of any of the three tyrosines (i.e. Tyr-12, Tyr-59, and Tyr-78) with phenylalanine leads to a large drop in functional activity (Fig. 4), indicating that the hydroxyl groups in these residues make an important contri- FIG. 4. Comparison of the HA-binding activities of Link module mutants with wild-type Link_TSG6. The binding of biotinylated HA to wild-type (WT) or mutant proteins was determined using a colorimetric assay in the absence or presence of competing unlabeled HA (200-fold molar excess). Values are plotted as the mean absorbance (n ϭ 12) at 405 nm after a 10-min development time Ϯ the S.E. The mutants shown here are those that have wild-type folds (Table I). FIG. 5. Position of the HA-binding site on Link_TSG6. The TSG-6 Link module structure (23) is shown as a spacefilling representation (generated using the program RasMol (50)) in four orientations. Mutated amino acids are colorcoded according to the effect of the amino acid substitution on HA-binding activity or the structural integrity of the Link module fold. Residues in which all of the mutations made lead to a perturbed/unfolded structure are shown in pink; no conclusions can be made about their role in ligand binding. Amino acids that are important for HA binding (i.e. the mutation leads to a large reduction in functional activity) are colored red; those that are not involved are denoted in green. bution to HA binding. Y12F and Y78F have activities that are similar to those of Y12V and Y78V, respectively, showing that in Tyr-12 and Tyr-78 the hydroxyls alone (but not the aromatic rings) are involved in the interaction. The serine mutant of Tyr-59 (Y59S) has a slightly reduced binding capacity compared with Y59F, suggesting that the aromatic ring, in this case, may also take part. Mutation of Phe-70 to Val (F70V) reduces HA-binding activity significantly, indicating that its aromatic ring makes an important contact with HA. The individual mutation of these five amino acids (i.e. Lys-11, Tyr-12, Tyr-59, Phe-70, and Tyr-78) leads to a large reduction in functional activity, indicating that there is an extensive network of interactions between the protein and polysaccharide, and loss of any one of these (such as a hydrogen bond from Tyr-12 to HA) can have a dramatic effect on HA binding.
Localization of the HA-Binding Site on Link_TSG6 -The positions of the 15 amino acids mutated here were mapped onto the structure of the TSG-6 Link module (Fig. 5). The five residues that are implicated in HA binding (colored red) form a cluster on one face of the molecule. Therefore, it is likely that this represents the position of the HA-binding surface on Link_TSG6. This is consistent with recent NMR studies (27) identifying the residues of Link_TSG6 which exhibit significant chemical shift changes (for H N , N H , C ␣ , and C  atoms) on binding to HA, which includes Lys-11, Tyr-59, Phe-70, and Tyr-78. Other amino acids, located on the 4/5 loop (residues 61-74), which were found here not to be involved in HA binding, also experienced large shift perturbations (i.e. Asn-67 and Lys-72 (27)), indicating that this region of the Link module undergoes a ligand-induced conformational change. This structural alteration may be mediated, in part, by the interaction of the aromatic ring of Phe-70 with HA.
As described above, eight of the amino acids selected for mutagenesis were predicted to be involved in the interaction with HA (23). Of these, Lys-11, Tyr-12, Tyr-59, and Tyr-78 have been found to participate in HA binding, whereas Lys-72 is not involved. Mutation of Asp-77, Arg-81, and Glu-86 compromises the structural integrity of the Link module, such that no conclusion can be made regarding their role in HA binding. However, their involvement cannot be excluded.
Glu-6 and Lys-13, which have been implicated in TGS-6mediated inhibition of neutrophil migration (16), are clearly FIG. 6. Comparison of HA-binding sites on TSG-6 and CD44. In A, the Link modules from TSG-6 and CD44 (modeled on the Link_TSG6 coordinates (4)) are shown in similar orientations on the basis of their secondary structural elements. Residues that are involved in HA binding in Link_TSG6 (determined here) are colored red; amino acids of CD44 which are critical or important for interaction with HA are shown in dark blue or light blue, respectively. The functional residues of CD44 were identified by site-directed mutagenesis as described in Bajorath et al. (14) and are numbered accordingly. All of the HA-binding residues on the CD44 Link module are visible apart from Lys-68, which is on the opposite face of the protein. B, the Link_TSG6 structure (as in A) showing the 11 sequence positions that can contribute to HA binding in TSG-6 and/or CD44 and form a coherent patch on one face of the Link module surface. These are color-coded, as described below, and numbered 1-11. Sequence positions that are involved in HA binding in either TSG-6 or CD44 alone are colored as in A (i.e. red or blue, respectively). Positions 2 and 3, which mediate HA binding in both TSG-6 and CD44, are depicted in purple.
FIG. 7. Alignment of Link module sequences.
Residues of TSG-6 and CD44 which have been demonstrated by mutagenesis to interact with HA are colored as in Fig. 6A; amino acids that are not involved in HA binding are shown in lowercase. Asterisks denote sequence positions that can contribute to HA binding in TSG-6 and/or CD44 (numbered 1-11 as in Fig. 6). These are colored (as in Fig. 6B) to indicate whether the sequence position is functionally TSG-6-specific, CD44-specific, or utilized by both proteins. This color coding is also used to indicate whether an amino acid capable of making an interaction with HA (i.e. salt bridges or hydrogen bonds) is found at these positions in the Link modules from other members of the Link module superfamily. Residues are underlined if they are identical to, or a conservative replacement of, functional amino acids in TSG-6 or CD44. Residues shown in green in Lp2 may also be involved in HA binding (see Fig. 8 and "Results and Discussion"). not involved in HA binding ( Fig. 3 and Table I). In this study we mutated both of these residues to alanine, whereas Wisniewski et al. (16) altered them to lysine and glutamic acid, respectively. It is possible, therefore, that these latter mutants (equivalent to E6K and K13E) may have reduced HA-binding capabilities (e.g. because of having perturbed structures).
HA-Binding Sites in TSG-6 and CD44 Link Modules-The residues of the CD44 Link module which mediate HA binding have been identified by site-directed mutagenesis (14). Four amino acids (shown in dark blue on Fig. 6A) are essential for high affinity binding (i.e. mutation of any one of these greatly reduces functional activity), and five other amino acids (light blue) are involved but not critical. From Fig. 6A it can be seen that the positions of the HA-binding sites on Link_TSG6 and CD44 map to the same face of the Link module. In addition, the essential HA-binding residues Arg-41 and Tyr-42 in CD44 are found at sequence positions identical to those of Lys-11 and Tyr-12, respectively, in Link_TSG6 (Fig. 7). This indicates that the location of the HA-binding surface may be conserved across the Link module superfamily. Consistent with this, the epitope recognized by a monoclonal antibody that inhibits HA binding to link protein (47) also maps to this face of the module.
Further consideration of the HA-binding amino acids in CD44 and TSG-6 suggest that, although the positions of the binding surface are similar, the molecular details of the interactions are likely to be significantly different. Apart from the amino acids described above (i.e. Lys-11 and Tyr-12 in Link_TSG6, and Arg-41 and Tyr-42 in CD44) none of the other HA-binding residues are found at equivalent sequence position in these proteins. As can be seen from Fig. 7, the critically important residues Arg-78 and Tyr-79 in CD44 are replaced in TSG-6 by alanines (Ala-48 and Ala-49), which are unable to make ionic or hydrogen bonds to the sugar. In addition, Lys-38 and Asn-101 in CD44 are both involved in the interaction with HA, whereas the corresponding residues in Link_TSG6 (i.e. Arg-8 and Lys-72, respectively) have been shown here not to participate in binding (Fig. 4).
HA-Binding Consensus-Comparison of the functional residues determined for TSG-6 and CD44 indicates that at least 12 amino acid sequence positions of the Link module can be involved in HA binding. As shown in Fig. 6B (for Link_TSG6) 11 of these form a coherent surface patch on one face of the module. Only positions 2 and 3 are utilized in both TSG-6 and CD44, whereas all of the others are either TSG-6-specific (positions 6, 7, and 11) or CD44-specific (positions 4, 5, 8, 9, and 10). These 11 sequence positions are likely to be of functional importance in other members of the Link module superfamily; it is expected that a particular protein will utilize a certain combination of these "consensus" residues to form its HA-binding surface. Given the conservation of binding residues at positions 2 and 3 in TSG-6 and CD44 it is probable that these represent key determinants in the HA interaction, across the superfamily as a whole. Fig. 7 shows 18 Link modules from 10 different human proteins aligned with TSG-6 and CD44, where the residues (at consensus positions 1-11) which have the potential to mediate carbohydrate binding (i.e. by making ionic or hydrogen bonds (48,49)) are highlighted. For example, Lp1 has potential HA-binding residues at consensus positions 1, 2, 3, 4, 6, 8, 9, 10, and 11 (Fig. 7). This is also illustrated in Fig. 8, which shows the locations of these residues on a homology model of Lp1, generated on the basis of the Link_TSG6 coordinates (see "Experimental Procedures"). It is possible that some of these residues may be more likely to participate in the interaction with HA than others because they correspond to identities or conservative replacements of functional amino acids in TSG-6 or CD44 (i.e. in Lp1 this corresponds to positions 1, 2, 3, 6, 8, 9, and 11, which are underlined on Fig. 7).
It should be noted that the Link module-containing proteins KIA0246, CAB61358, and KIA0527 (Fig. 7) have not yet been shown to have an HA-binding function. As mentioned above, there are likely to be large networks of interactions required to stabilize HA-protein complexes. Given the number of conserved residues (compared with CD44 and TSG-6; underlined in Fig. 7) at the consensus sequence positions (one in KIA0246 at position 3; three in CAB61358 at positions 3, 6, and 11; none in KIA0527) we predict that KIA0246 and KIA0527 are unlikely to be HA-binding proteins, although it is possible that CAB61358 may be functionally active.
All of the HA binding activity of aggrecan is located in its G1 domain, which contains two contiguous Link modules (aggre-can1 and aggrecan2 in Fig. 7) (7-9). Both of these Link modules are required for high affinity HA binding (9). Visual inspection of the nonfunctional Link modules of the G2 domain (i.e. ag-grecan3 and aggrecan4), show that these have sequences similar to those of aggrecan1 and aggrecan2 (Fig. 7), respectively. Therefore, it is not obvious why modules 3 and 4 are inactive. The only significant difference is that aggrecan4 does not have a basic residue at consensus position 2. However, given the importance of this sequence position in TSG-6 and CD44 this may be enough to render this module, and hence the G2 domain, inactive.
The analysis described above does not exclude the possibility that amino acids at other sequence positions are also involved in HA binding. In this regard, visual inspection of the Lp2 model indicates that Arg-66, Lys-85, and Tyr-87 (colored green on Figs. 7 and 8) may contribute to the interaction with HA because they are located in close proximity to the consensus HA-binding residues. From Fig. 8 it can be seen that Tyr-87 is at a location on the Link module surface very similar to that usually occupied by consensus position 3 (which is a leucine in Lp2 rather than a tyrosine). It is also interesting to note that all of the Link modules that have a leucine at position 3 (Lp2, BRAL1-2, brevican2, and neurocan2) have an arginine in an equivalent sequence site to Arg-66 in Lp2. Therefore, careful analysis of the Link module alignment (Fig. 7) in conjunction with individual Link module models (such as those for Lp2 in Fig. 8) allows the identification of amino acids that have a and Lp2 were modeled on the basis of the Link_TSG6 coordinates. The models are shown in the same orientation (on the basis of secondary structure elements) as for Link_TSG6 and CD44 in Fig. 6. Amino acids that could participate in HA binding are colored (as in Fig. 6B and Fig. 7) to indicate whether the sequence position at which they are found is TSG-6-like (red), CD44-like (dark or light blue), or common (purple). In Lp2, additional amino acids can be identified (green), which could contribute to HA binding and are in close proximity to the consensus residues.
reasonable probability of being involved in HA binding. Clearly, not all of the residues identified in this manner will be involved in binding, but as such these provide excellent candidates for programs of site-directed mutagenesis.
Conclusions-Site-directed mutagenesis has identified five amino acids in the Link module of human TSG-6 which contribute to HA binding. Comparison of this ligand-interaction surface with that determined previously for CD44 has led to a prediction of the HA-binding residues in other members of the Link module superfamily by using a combination of sequence alignment and molecular modeling. | 7,316.4 | 2001-06-22T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Trace elements and rat pouchitis
The procedure of restorative proctocolectomy is associ- ated with a complete removal of the colon and slight reduction of ileum length, which together can lead to sys- temic shortages of trace elements. Inflammatory changes in the pouch mucosa may also have some impact. However, there is no data on trace elements in pouchitis. Therefore, in the present study we aimed to assess the effect of acute pouchitis on the status of selected trace elements in rats. Restorative proctocolectomy with the construction of intestinal J-pouch was performed in twenty-four Wistar rats. Three weeks after the surgery, pouchitis was induced. Eight untreated rats created the control group. Liver concentrations of selected micro- nutrients (Zn, Cu, Co, Mn, Se) were measured in both groups six weeks later, using inductively coupled plasma mass spectrometry. Liver concentrations of trace elements did not differ between the study and the control groups. However, copper, cobalt and selenium concentrations [μg/g] were statistically lower ( p <0.02, p <0.05 and p <0.04, respectively) in rats with severe pouchitis (n=9) as compared with rats with mild pouchitis (n=7) [median (range): Cu — 7.05 (3.02–14.57) vs 10.47 (5.16– 14.97); Co — 0.55 (0.37–0.96) vs 0.61 (0.52–0.86); Se — 1.17 (0.69–1.54) vs 1.18 (0.29–1.91)]. In conclusion , it seems that acute pouchitis can lead to a significant defi- ciency of trace elements.
InTRoduCTIon & aIM
Trace elements are a large group of chemical elements present in living matter in concentrations below 1000 ppm (< 0.1%). This group of over 60 elements demonstrates very different biochemical properties. Body resources of trace elements depend not only on an adequate supply in the diet, but also on their proper absorption and excretion (Sandström et al., 2001;Serra-Majem L et al., 2009). Pathological changes in morphology and functioning of the gastrointestinal tract can compromise trace element absorption (Sandström et al., 2001, Berdanier, 2004. A potential example of such an effect could be inflammation of the intestinal reservoir (pouchitis) appearing in many patients who underwent restorative proctocolectomy. During the procedure, the whole colon is removed, and the length of the small intestine is slightly reduced (and thus the total surface absorp-tion). In addition, the presence of bloody, loose stools (that can be correlated with severe pouchitis) predisposes these patients to deficiencies of iron as well as other nutrients (Yu et al., 2007). There are only few studies assessing the status of selected trace elements in patients with pouch and pouchitis.
Iron and calcium deficiencies in patients with pouch and pouchitis are quite common. However, there is no comprehensive data describing the status of other elements (M'Koma et al., 1994;Kuisma et al., 2001;Pastrana et al., 2007). In the only published study, copper and selenium concentrations in patients with pouch did not differ significantly from those in healthy subjects (El Muhtaseb et al., 2007). Similarly, the daily intake of those elements in the diet was comparable to that documented in the control group. It should be emphasized that the evaluation of trace element status was based on serum levels, which do not reflect the real body resources. It seems that the content in the liver, being the storage organ, better reflects the long-term effects. However, there is no data on the trace element status in pouchitis. Therefore, in the present study we aimed to assess the effect of pouchitis on the status of selected micronutrients in rats. To increase the reliability of the assessment we measured liver concentrations of microelements.
MaTERIaL & METHodS
Restorative proctocolectomy with the construction of intestinal reservoir was performed in twenty-four Wistar rats (study group). The total proctocolectomy was performed by resecting of the colon and ligating the mesentery with 4-0 silk. The intestinal segment was excised from 0.1 cm proximal to the ileocecal junction. The rectum was resected at the level of the pelvic floor, with a 0.5 cm rectal stump. The ileal J-pouch was created by the duplication of the distal end of the small intestine by a single-layer interrupted 6-0 prolene suture. The pouch anal anastomosis was performed with a single layer interrupted 6-0 prolene suture (Babu et al., 2005). Additional, eight rats (which did not undergo any surgery) created the control group.
After the first fasting day (with exclusive supply of 8% glucose solution), the animals from the study group received fiber-free, semi-synthetic AIN-93 diet, in increasing amounts in subsequent 10 days (respectively from 5, 8, 10 and 12 g/d up to 25 g/d). The feeding of rats was maintained at the 25 g/d level for 11 following days. Subsequently, inflammation of J-pouch was induced following a procedure developed in our previous studies (Drzymała-Czyż et al., 2012). For that purpose, the animals were given fiber-enriched AIN-93 diet for seven days (with growing fiber quantities, 1% on the first day, 2% on the second, up to a maximum of 4% of the content). The control group was fed ad libitum AIN-93 diet, supplemented for a period of seven days with fiber (at the same time as in the study group). Over the next six weeks, the animals from both groups were fed ad libitum semisynthetic AIN-93 diet. After the scheduled feeding period the animals were euthanized and specimens of Jpouch mucosa for the histopathological and immunohistochemical analysis were obtained. Liver concentrations of trace elements were assessed in all animals.
Toxicological examination. After determining the dry weight of samples from liver biopsies, the mineralization was carried out following rehydration with concentrated 65% HNO 3 (Supapur, Merck, Darmstadt, Germany). The levels of trace elements in prepared samples were determined using inductively coupled plasma mass spectrometry (ICP-MS; camera: ELAN DRC, Perkin Elmer, Waltham, Massachusetts, USA) (Olivares, 1998). ICP-MS is a technique which is used to measure the intensity of an ion flux generated in the plasma. Ions generated inductively in the coupling plasma are then distributed through the mass analyzer based on the amplitude of their weight relative to their charge. Samples are analyzed in a liquid form by introducing them into a system consisting of a spray and fog chamber.
Histopathological examination. Microscopic assessment was performed according to standard histological techniques (hematoxylin and eosin staining). In addition to routine histopathological examination, the collected specimens were evaluated for the intensity of inflammation (Moskowitz scale) (Sandborn et al., 1994). Based on the results of this examination rats were divided in to two subgroups -with acute (inflammation assessed to 4-6 in Moskowitz scale; n = 9) and mild pouchitis (Moskowitz 1-3; n = 7).
Statistical analysis. For the results obtained, medians and ranges of values are given. The Mann-Whitney test was used to compare the results between the study groups. The hypotheses were verified at a 0.05 significance level. Correlations were evaluated using the Spearman test.
Ethical considerations. All surgical procedures were conducted by one qualified surgeon in accordance with the guidelines of the European Community Council directive 86/609/EEC and with the approval of the Local Ethics Committee (42/2006).
RESuLTS
The liver concentrations of trace elements in the study and control groups are summarized in Table 1. For elements tested no statistically significant differences were stated.
The liver concentrations of trace elements depending on the severity of inflammation (expressed in the Moskowitz scale) are presented in Table 2. Copper, cobalt and selenium concentrations were statistically lower in rats with severe pouchitis as compared with rats with mild pouchitis and the control group.
The liver concentrations of trace elements did not correlate with the severity of inflammation expressed in the Moskowitz scale.
dISCuSSIon & ConCLuSIonS
In the present study, liver concetrations of zinc, copper, cobalt, manganese and selenium did not differ significantly between rats with J-pouch and a control group. Bones and muscle are significant reservoirs of trace elements, but their hightest concentration is present specifically in the liver (Sandström et al., 2001;Berdanier, 2004;Serra-Majem L et al., 2009). Therefore, liver seems to be the most appropriate organ that best reflects the body resources of these micronutrients.
El Muhtaseb and coworkers (2007) documented that plasma concentrations of zinc, copper and selenium, as well as their intake in the diet of patients with pouch (n=55) did not differ significantly from those observed in healthy subjects (n = 46), although the procedure of restorative proctocolectomy involves a complete removal of the large intestine and a reduction in the length of the ileum, which can affect the efficiency of nutrient absorption. Coexisting inflammation can additionally contribute to the intensity of possible abnormalities. On the other hand, El Muhtaseb and coworkers documented that the plasma concentration of manganese was higher in patients who underwent restorative proctocolectomy than in those from the control group. The symptoms and consequences arising from excess (overload) manganese are not completely understood; however, they are seen mostly in the course of parenteral nutrition (Arjona et al., 1997; Bertinet et al., 2000). The authors attributed the excessive manganese levels to the use of antidiarrheal drugs containing this trace element and to the coexisting iron deficiency, which in this case could encourage the excessive absorption of manganese. It should be emphasized that these results cannot be directly compared with our data. None of the patients involved in the El Muhtaseb et al. (2007) study showed any clinical symptoms of pouchitis, their inflammatory markers (CRP) were negative and the use of antidiarrheal drugs was common. The assessment of microelement status (Zn, Cu, Mn, Se) was based on plasma concentrations which cannot create the basis for a definite assessment of their body resources.
In the present study, rats were suffering from significant pouchitis, while trace element assessments were based on liver tissue samples. M' Koma and coworkers (1994) noted that 2-5% of patients post ileal pouch-anal anastomosis had low zinc levels. However, Pironi et al. (1991) documented low serum zinc levels in 60% of patients with ileal pouch. Those authors attributed the lowered zinc levels to greater requirements of that trace element and secondarily to increased muscle protein synthesis. For the purpose of the study, they evaluated 18 patients witch pouch. Interestingly, although the prevalence of hypozincemia was very high, pouch enteritis according to approved criteria (the presence of diarrhoea accompanied by endoscopic features of acute inflammation and by histological evidence of a prominent polymorphonuclear cell exudate), was observed only in 2 patients.
Liver levels of copper, cobalt and selenium in the subgroup with severe pouchitis were lower than in the subgroup with mild pouchitis. There is no published data concerning the body resources of these trace elements in patients or animals with severe pouchitis. The assessment of trace element status was conducted in patients with ulcerative colitis (clinically most similar to the model of pouchitis), serum zinc levels were lower and serum copper levels higher in the active colitis group than in controls (Ringstad et al., 1993). More than 50% of patients with active form of ulcerative colitis showed zinc levels below the 15th percentile of the control group. It should be noted that in spite of the decrease in disease activity, serum zinc levels remained low also after introduction of total enteral nutrition. In contrast, Vagianos et al. (2007) documented that the prevalence of subnormal serum zinc levels in 126 adults patients with ulcerative colitis was not high -12.5%. Moreover, the median serum level of zinc was comparable in subjects with active ulcerative colitis (assessed using Powell-Tuck Index) and subjects in remission.
The obtained results suggest that not restorative proctocolectomy itself but acute pouchitis can lead to a significant deficiency of trace elements. Therefore, specific human studies should be performed. It seems that in future in selected groups of patients with pouchitis supplementation of trace elements should be considered.
All authors declare no conflict of interest.
author's contribution SDC -study design, data collection, analysis of samples, statistical analysis, data interpretation, manuscript preparation, literature search. TB -study design, data collection, data interpretation, manuscript preparation, literature search. SW & EWC -analysis of samples, data interpretation. TK & MD -study design, data interpretation. JW -study design, statistical analysis, data interpretation, manuscript preparation, literature search. | 2,777.6 | 2012-11-30T00:00:00.000 | [
"Medicine",
"Biology"
] |
The Convex Minorant of the Cauchy Process
We determine the law of the convex minorant $(M_s, s\in [0,1])$ of a real-valued Cauchy process on the unit time interval, in terms of the gamma process. In particular, this enables us to deduce that the paths of $M$ have a continuous derivative, and that the support of the Stieltjes measure $dM'$ has logarithmic dimension one.
For a given real-valued function f defined on some interval I ⊆ R, one calls the convex minorant of f the largest convex function on I which is bounded from above by f . The case when I = [0, ∞[ and f is a sample path of Brownian motion has been studied in depth by Groeneboom [11], see also Pitman [13] and Ç inlar [6]. In particular, it has been shown in these works that the convex minorant of Brownian motion is almost surely a piecewise linear function on the open interval ]0, ∞[, and that the distribution of its derivative can be characterized in terms of a certain process with independent (non-stationary) increments. The more general case when f is a sample path of a Markov process (respectively, a Lévy process) has been considered by Bass [1] (respectively, by Nagasawa and Tanaka [12]). In this note, we carry out a similar study for the Cauchy process on the unit time interval; we shall establish in particular a simple connection with the gamma process which yields several interesting consequences. More precisely, we shall see that the derivative of the convex minorant of the Cauchy process is continuous on ]0, 1[, specify its behavior near the boundary points 0 and 1, and determine the exact Hausdorff measure of the set of points on which it increases. Let (C s , s ∈ [0, 1]) be a standard one-dimensional Cauchy process and (M s , s ∈ [0, 1]) denote its convex minorant. The right-derivative (M s , s ∈ [0, 1[) is an increasing process with rightcontinuous paths, and we write for its right-continuous inverse. Recall that the standard gamma process, (γ s , s ∈ [0, 1]) is a subordinator with marginal distributions Our analysis relies on the following.
Lemma 1
The process (µ x , x ∈ R) has the same law as On the one hand, µ x is the (a.s. unique) instant in [0, 1] at which the Cauchy process with drift C (x) reaches its overall infimum on [0, 1], so On the other hand, it is immediately seen that L (x) , x ∈ R is a nested family of random sets, in the sense that L (x) ⊆ L (x ) for x ≤ x . Moreover, the strong Markov property of the Cauchy process easily entails the following regenerative property. Consider an arbitrary finite sequence x 1 ≤ x 2 ≤ . . . ≤ x n of increasing real numbers, and T a stopping time in the filtration (F s ) s≥0 such that T ∈ L (x1) a.s. Then the shifted ladder time sets are jointly independent of F T and have the same (joint) distribution as L (x1) , . . . , L (xn) . Note also that the probability P C (xi) s ≤ 0 is the same for all s > 0, and if we denote this quantity by ρ(x i ), then Hence each regenerative set L (xi) is stable with index ρ(x i ), i.e. it can be identified with the closed range of some stable subordinator with index ρ(x i ), see Lemma VIII.1 in [2]. We have checked that the general framework of [3] applies to the present setting, and Proposition 9 there entails that the law of the n-tuple , that is the same as that of It follows that the two processes in the statement have the same finite-dimensional distributions, and as both are increasing and right-continuous, they have the same law.
Remark.
Two different nested families of stable regenerative sets have been considered in [5].
In this direction, we point out that the argument in section 4 there can be adapted to give an alternative proof of Lemma 1.
Our main result derives immediately from Lemma 1 and the fact that the gamma process has strictly increasing paths with probability one.
Theorem 2
The process (M s , 0 < s < 1) is continuous and has the same law as where γ is a standard gamma process and denotes its inverse process.
Proof: It follows from Lemma 1 that the process (µ x , x ∈ R) has strictly increasing paths with probability one; as a consequence, (M s , 0 < s < 1) can be recovered from (µ x , x ∈ R) by the identity This entails the continuity of M and the stated identity in law (again by Lemma 1).
As a check, let us calculate the distribution of the variable C 1 , which coincides with M 1 = 1 0 M s ds. It follows from Theorem 2 that the latter has the same law as Next, recall that γ 1 has an exponential distribution with parameter 1 and is independent of the Dirichlet process (γ s /γ 1 , s ∈ [0, 1]), and a fortiori ofM 1 . It follows that for every λ ∈ R, the Stieltjes transform (i.e. the iterated Laplace transform) of M 1 is given by Using the identity E(exp {iλγ s }) = (1 − iλ) −s , we see that the right-hand side equals When λ > 0 (respectively, λ < 0), the function z → (1 + z 2 ) −1 log(1 + iλz) is meromorphic on the lower (respectively, upper) complex half-plane with a single pole at z = −i (respectively, at z = i). Using a contour integral, one gets Putting the pieces together, we conclude that E (1/(1 + iλM 1 )) = 1/(1 + |λ|) , and hence that M 1 follows the standard Cauchy distribution. The same method applies for instance to calculate the distribution of the length of the graph of the convex minorant. One gets after a few lines of elementary calculation Next, we deduce from known properties about the regularity of L the following results on the rate of growth of M near the boundary points 0 and 1. (the latter can be seen for instance from Theorem 1 in [4] and the fact that the Laplace exponent of the gamma process is log(1 + ·)). The proof of the lim sup result is similar, using Theorem 2 on page 321 in [10]. In particular, Supp(dM ) is a random closed set with logarithmic Hausdorff dimension 1. Proof: It follows from Theorem 2 that Supp(dM ) = {µ x , x ∈ R} cl is distributed as the normalized closed range of a gamma process on the unit time interval. It is a consequence of a general result by Fristedt and Pruitt [9] that there is some constant c > 0 such that for every t > 0 the h-Hausdorff measure of {γ s , 0 ≤ s ≤ t} cl is ct. This entails our claim.
Remark. Theorem 2 and Corollaries 3 and 4 bear the same flavor as the results of Cranston et al. [7] and Evans [8] on the convex hull of planar Brownian path on the unit time interval (recall e.g. that the convex hull is a C 1 curve and that the set of times in [0, 1] at which the Brownian path touches the convex hull has Hausdorff dimension zero). These similarities are certainly not surprising, considering the close connections between the planar Brownian motion and the linear Cauchy process. | 1,797.6 | 2000-01-20T00:00:00.000 | [
"Mathematics"
] |
Fine Balance in the Regulation of DnaB Helicase by DnaC Protein in Replication in Escherichia coli*
The DnaC protein of Escherichia coli is essential for replication in vivo and in vitro. In the initiation of replication of a minichromosome at its origin, DnaC delivers the DnaB helicase from a DnaB .DnaC complex to the future replication fork and then departs. However, if an excess of DnaC was present in subsequent steps, it severely inhibited replication by slowing the DnaB helicase at the replication fork. When DnaB was present at a level equimolar with the excess DnaC, the inhibition was relieved, implying that the ratio of DnaC to DnaB is critical for achieving optimal repli- cation activity and avoiding inhibition by DnaC. In uiuo, overproduction of DnaC slowed cell growth. This slowing was alleviated by overproducing DnaB at the same time. E. coli strains with a dnaCts gene defective in chromosomal initiation were complemented by the wild-type gene in trans. On the other hand, strains with an elongation-defective dnaCts gene were not complemented by the wild-type dnaC gene. The dominance of the mutant protein suggests that it remains tightly complexed with DnaB at the replication fork, inhibiting elongation even in the presence of the wild-type DnaC. 13-bp
and DNA polymerase I11 holoenzyme (10). The primosome is needed for the conversion of the viral single-stranded DNA of bacteriophage 4x174 to the duplex replicative form (10) and for the replication of the lagging strand of ColEl plasmids (11). In these replication systems, the role of DnaC is to form a tight complex with DnaB from which it delivers the helicase to its site of action on the DNA template. Unlike the other proteins required for the formation of the primosome or for chromosome initiation, even a slight excess of DnaC profoundly inhibits repli~ation.~ The genes encoding these various proteins required for chromosomal replication have been classified as being involved in either initiation or elongation or in both. Among these genes, dnuC has some alleles defective in initiation (12-15) and others defective in elongation (12,13,15). Whereas DnaC is required in initiation to deliver DnaB to the replication fork, its role in elongation has remained unclear. The present study was undertaken to explore the basis for the extraordinary sensitivity of the in vitro initiation system to a small excess of DnaC and to determine whether such an effect is seen in vivo. We also explored the relationship between the in vitro inhibition of replication and the mutant alleles of dnaC that affect the elongation stage of DNA replication.
Enzymes and Plasmid Constructions-Restriction enzymes were obtained from either New England Biolabs or Bethesda Research Laboratories. Restriction enzyme digests and plasmid constructions were according to standard procedures (17). DNA fragments with overhanging termini were converted to blunt ends using T4 DNA polymerase (17). Purified replication proteins were prepared as previously described (1). Plasmid pBSoriC (3) contains a 678-bp HincII-PstI fragment spanning oriC (bp -189 to +489) cloned into the pBluescript vector (Stratagene, Inc.). The PING1 plasmid was obtained through the courtesy of the INGENE Corp. (16). The plasmid, pJK169, containing the dnaC gene was as described (18). The plasmid pINGK consists of the kanamycin resistance gene from pUC4K (Pharmacia LKB Biotechnology Inc.) on a 1.2-kilobase PstI fragment cloned into the PstI site in the 8-lactamase gene of pING1. The plasmid pINC,., contains the dnaC coding region (from the EstEII site at bp 868 to the AccI site at bp 1820 (19)) cloned into the SmaI site of pINGK. The pINB plasmid is the 2.4-kilobase NdeIIEglI fragment of pKAl containing the dnaB coding region (bp 74 to 1661 plus flanking vector sequences (20)) cloned into the SmaI site of pINGK. The plasmid pINCB has the same dnuE coding region cloned A. Kornberg, unpublished observations from many studies in this laboratory.
22096
into the EcoRI site of pINC, placing it downstream of dnuC. The pINCsso plasmid contains the translation initiation sequences of bacteriophage T7 gene 10 linked to the dnaC coding region from the start codon (bp 1065) to the AccI site (bp 1820) generated based on the described procedure (21). The resulting dnaC coding region was then cloned into the SmuI site of pINGK.
When unwinding was assayed by the formation of Form I*, the reactions were as described above except that only magnesium acetate, gyrase, and additional SSB (900 ng) were added and the mixtures incubated for 6 min at 28 "C. Sample preparation and agarose gel electrophoresis were as described (8).
Isolation of Prepriming Complexes by Gel Filtration-The standard prepriming complex reaction was scaled up, and 100 pl were applied to a 1-ml Bio-Gel A-5m column equilibrated in Tricine-KOH (pH 7.6), 30 mM; glycerol, 10%; Brij-58, 0.01%; potassium glutamate, 60 mM; ATP, 5 mM; BSA, 250 pg/ml. The void fractions were collected, pooled, and then assayed by addition of the elongation components as noted above except that more SSB (450 ng) was added.
RESULTS
Excess D m C Inhibits oriC Replication-Replication of oriC plasmids can be divided into several stages. The initiation events comprise the prepriming stage which requires DnaA, DnaB, DnaC, HU, 5 mM ATP, and a high temperature (22). Subsequent stages reflect elongation during which the template is unwound, primed, and replicated. The replication of oriC plasmids absolutely depended on the addition of DnaC in the prepriming stages ( Fig. 1 and Refs. 1 and 23). Replication reaches a maximum at 1 pmol of DnaC added and then is inhibited by the addition of more DnaC. As little as 2.5 pmol of additional DnaC was sufficient to reduce the amount of replication to background levels.
That this inhibitory activity is a property of active DnaC protein is based on two criteria. First, this property was observed throughout the purification of DnaC protein: low levels of fractions containing DnaC-stimulated replication, whereas higher amounts inhibited replication (data not shown). Second, the inhibitory property of DnaC was sensitive to the sulfhydryl-specific reagent, NEM (Table I). Treatment of DnaC with 10 mM NEM at 0 "C for 15 min completely abolished its replication activity ( ( 5 % remaining). This same treatment removes more than 80% of the inhibitory activity present in these fractions (1 pmol of mock-treated DnaC inhibits by 76% whereas 1 pmol of NEM-treated DnaC inhibits by only 14%).
The DnaC:DmB Ratio Is Critical for Optimal Actiuity-
TABLE I NEM inactivates the DnaC replication and inhibitory activities
DnaC protein was treated with 10 mM NEM at 0 "C for 15 min as described or mock treated with water. NEM-and mock-treated samples were then treated with 50 mM dithiothreitol at 0 "C for 15 min to quench unreacted NEM. These DnaC samples (1 pmol) were assayed for replication activity in the prepriming complex reaction (A) and for their ability to inhibit replication when added to a prepriming reaction containing 1 pmol of untreated DnaC (B). Given that the functional form of DnaC is in a complex, inhibition of replication by DnaC may depend on the amount of DnaB present, that is on the ratio of DnaC to DnaB. In the previous experiments where 1 pmol of DnaC was optimal, DnaB was present at 1.25 pmol per reaction, a ratio of 0.8 pmol of DnaC to 1.0 pmol of DnaB. To address the importance of the ratio of DnaC to DnaB, replication was assayed in reactions containing 3.0 pmol of DnaC and increasing amounts of DnaB (Fig. 2). As expected with the normal 1.25 pmol of DnaB,3.0 pmol of DnaC completely inhibited replication (2.4 pmol of DnaC:l.O pmol of DnaB). Additional DnaB "stimulated" replication (relieved inhibition) to saturation at 3.75 pmol of DnaB at which point the ratio was nearly 1:l (0.8 pmol of DnaC to 1.0 pmol of DnaB). This relief of inhibition by additional DnaB also implies that the inhibition is a property of free DnaC protein with which it is known to interact (4-6).
A. DnaC sample
Excess DnuC Inhibits the Function of the Prepriming Compkx, Not Its Formation-DnaC protein might inhibit replication in one of several ways: by preventing the formation of prepriming complexes, by destabilizing formed complexes, Activity carried out with excess DnaC present (3.0 pmol/20-pl reaction). After the incubation a t 37 "C for 30 min, the reactions were placed on ice and more DnaB was added, as indicated. Replication was assayed by addition of the elongation components and incubation at 18 "C for 15 min.
Removal of DnuC by gel filtration relieves inhibition
The prepriming complex reaction was scaled-up to 5-fold and carried out in the presence of the normal amount of DnaC (1 pmol/ 20 pl) or with a high, inhibitory level of DnaC (4.3 pmol/20 pl). Protein.DNA complexes were isolated by gel filtration, and 2 0 4 portions were assayed for replication activity a t 18 "C for 15 min after addition of the elongation components. An additional 4.25 pmol of DnaB/2O pl was added to the reaction before the elongation components, where indicated (+ DnaB). The isolated prepriming complexes were supplemented with 2 pmol of DnaC per 20 pl prior to the addition of the eloneation comuonents ( or by inhibiting the function of formed complexes. T o distinguish among these possibilities, a prepriming reaction containing excess, inhibitory DnaC protein was performed. Protein-DNA complexes were isolated by gel filtration chromatography and assayed for replication activity (Table 11). As before, excess DnaC completely inhibited replication. Supplementing this fraction with additional DnaB revealed that active prepriming complexes were present. Moreover, simply removing the excess DnaC by gel filtration allowed recovery of all of the prepriming complex replication activity.
The full recovery of replication activity after gel filtration implies that the excess, inhibitory DnaC was not stably associated with the prepriming complex. If prepriming complexes and free DnaB .DnaC complexes were in dynamic equilibrium, addition of excess DnaC might destabilize the prepriming complexes by mass action. Yet there were as many active complexes isolated when DnaC was in excess as when it was present a t low levels. This equivalence suggests that excess DnaC does not act by destabilizing prepriming complexes already formed. Prepriming complexes isolated from either the high-level DnaC and the normal-level DnaC reactions were equally susceptible to inhibition by additional DnaC (Table 11). These experiments imply that excess DnaC affects replication by inhibiting the function but not the formation of the prepriming complex.
DnuC Inhibits the DnuB Helicme Present in Prepriming
Complexes-Prepriming complexes contain several activities. Among them is the ability of DnaB protein at the forks to unwind the template (in the presence of gyrase and SSB) to generate a highly unwound structure called Form I* (FI*) (8,9). This FI* can be resolved from supercoiled template (FI), nicked circular template (FII), and other topoisomers by agarose gel electrophoresis in the presence of ethidium bromide. By uncoupling unwinding from priming and DNA synthesis (as in the FI* assay), the susceptibility of unwinding to additional DnaC can be independently assessed. The addition of more DnaC to a reaction mixture already containing 1 pmol inhibited FI* formation (Fig. 3). At a total of 3 pmol of DnaC, FI* formation was greatly inhibited and completely so at 4 pmol of DnaC.
The topoisomer that migrates more slowly than FI is an unwinding intermediate with a superhelicity between that of FI and FI* (data not shown). At levels of DnaC that completely inhibit FI* formation and replication, this unwinding intermediate was still present. Eventually at higher levels of DnaC, its formation was inhibited. These findings suggest that small excesses of DnaC act by slowing the DnaB helicase, thereby preventing FI* formation but still allowing formation of the early intermediate. Higher amounts of DnaC inhibited the helicase sufficiently that the intermediates were not formed either.
Overproduction of DnuC Inhibits Cell Growth-Strains bearing plasmids which allow the controlled expression of dnuC exhibited a slowed growth rate when DnaC was overproduced. A plasmid derivative of PING1 (16) was constructed which contained the kanamycin resistance gene, pINGK. Open reading frames were cloned into pINGK under control of the uru promoter including the dnuC gene with its natural translation initiation signals (pINC,.J and the dnuC gene with the bacteriophage T 7 gene 10 translation initiation signals (pINCssD). Strains harboring pINCssD induced with 1% arabinose for 2 h overexpressed DnaC, producing a visible band on a Coomassie-stained polyacrylamide gel and -1000-fold more DnaC replication activity compared with control strains harboring pINGK (data not shown). Strains bearing pINCnat overexpressed less DnaC; no protein was visibly overproduced on a Coomassie-stained gel although induction of DnaC was seen on Western blots (data not shown). In the absence of induction, strains with pINGK, pINCnat, or PINCSSD all grew equally well with the same doubling time (39 min, data not shown). After induction, the pINGK-containing strains continued to grow at the same rate (Fig. 4). Strains overexpressing DnaC continued growing rapidly for a period after which cell growth slowed considerably. The length of lag before the downshift of growth rate correlated with the level of DnaC produced. The p1NCss~ strain shifted after 60 min, whereas the pINCnat strain shifted after 120 min.
Growth Inhibition by DnaC Is Suppressed by
DnuB-Inasmuch as additional DnaB protein alleviated the inhibition of replication by excess DnaC in vitro, we explored the possibility that DnaB might also relieve the growth inhibition. Two additional pINGK derivatives were constructed one that contained the dnuB gene with its natural Shine-Dalgarno sequence (pINB) and one derived from pINCnat with the dnuB reading frame downstream of that for dnuC (pINCB). Western blots confirmed that the strain harboring pINB expressed DnaB and that with pINCB expressed both DnaB and DnaC when induced (data not shown), The ability of these strains to grow after induction was determined by measuring their plating efficiencies with and without arabinose (Table 111). In agreement with the growth rate studies, induction of pINGKbearing strains had no effect on its ability to grow. In addition, both pINCnat and pINCssD strains failed to plate when induced with 1% arabinose. Strains bearing pINB grew equally well with or without induction implying that the overproduction of a replication protein is not, in general, inhibitory to cell growth. When the pINCB strains were induced with 1% arabinose, they plated 260-fold better than without DnaB overexpression (pINCnat) though still not as well as strains overexpressing only DnaB. At lower levels of induction, similar results were obtained. Consistent with the different times of downshift in growth rate between pINCnat and pINCssD strains, pINC,, strains plated more efficiently than did pINCssD ones. Also, co-
FIG. 4. Cell growth is inhibited by DnaC overexpression.
MC1061 bearing the indicated plasmid was grown in L broth with 0.5% fructose and kanamycin (50 pg/ml) at 37 "C overnight. The pINGK plasmid was the parental contruct. Both pINCnat andpINCssD plasmids overexpressed DnaC when induced. The pINC,., strain overproduced low levels of DnaC, and the pINCssD strain overproduced high levels, as indicated. Cultures were diluted 100-fold into fresh media, grown to ODm = 0.5, then diluted 20-fold into the same media also containing 1.0% arabinose. All media were prewarmed to 37 "C.
TABLE IV The d m C + gene jails to complement elongation-defective dnaCts alleles
The strains were transformed with the indicated plasmid, and transformants were selected by plating with ampicillin (50 pglml) at the permissive temperature. A single colony was grown overnight in L broth with ampicillin at the permissive temperature, diluted, plated with selection, then incubated overnight at 42 "C or at the permissive temperature, as indicated. The same plating procedure was employed with the strains lacking plasmids except that ampicillin was omitted. The strains are all temperature-sensitive (ts) mutants defective in initiation or elongation, as indicated; the wild-type gene is designated dnuC+. All plasmids are derivatives of pBR322; pBR322 was used in some instances as the parental, control plasmid.
PC7
Elongation None a The permissive temperature for CT28-3b is 25 "C rather than 30 "C. overproduction of DnaB rescued the low plating efficiency of DnaC-overproducing strains. In all instances of measurable plating, the strains were viable and continued to grow as observed by increased colony size after a second overnight incubation.
Elongation Alleles of dnaC Are Not Complemented by dnuC+-E. coli strains containing alleles of dnaC that are temperature-sensitive for initiation can be rendered temperature-resistant by providing dmC+ in trans on a plasmid (Table IV and Ref. 18). This complementation by dnaC+ was seen for both initiation alleles tested suggesting that the mutant protein does not prevent initiation but is simply unable to promote it.
Were the mutant protein produced by elongation-defective alleles of dnuC incapable of a positive action at the replication fork, then the wild-type protein would complement this deficiency. However, were the elongation-defective mutant exhibiting a negative effect at the replication fork, this inhibition would be seen even in the presence of the wild-type DnaC protein. Indeed, there was no complementation of growth at 42 "C by the d n a C plasmid for either of the two strains with dnuC alleles defective in elongation.
DISCUSSION
The initiation of chromosomal replication requires the DnaC protein both in vivo (12-15) and in vitro (1). Prior to its action at oriC, DnaC forms a complex in solution with DnaB (4-6). From this complex DnaC performs its role of delivering DnaB to a template. This action of DnaC is manifest in a general priming system, containing only DnaB, primase, and single-stranded DNA, where DnaC increases the affinity of DnaB for the single-stranded DNA (24) but is not essential (25). In contrast, for the oriC-specific initiation system, DnaC is absolutely essential and larger amounts of DnaB do not overcome the requirement for DnaC.4 These findings argue that DnaC has some other function in this specific reaction, such as a direct interaction with DnaA in loading DnaB (26). Upon delivering DnaB to form a prepriming complex, DnaC is released' and is not needed for the elongation stage in uitro. Genetic experiments, however, in-dicate that DnaC can affect elongation (12,13,15). Moreover, slight excesses of DnaC added during elongation inhibited replication in uitro (Fig. 1).
In studies of the nature of this inhibition, we observed that stable, isolable prepriming complexes are formed in the presence of inhibitory levels of DnaC (Table 11), implying that DnaC neither destroys prepriming complexes nor prevents their formation. Excess DnaC was found to slow the DnaB helicase at the replication fork. Because the rate of replication is limited in uitro by the rate of unwinding by DnaB (9), slowing or stalling of this unwinding is sufficient to account for the inhibition of replication. Additional effects of DnaC on the ability of primase to synthesize primers cannot be ruled out and might further inhibit replication.
When free in solution, DnaB exhibits an ATPase activity which DnaC completely inhibits (6, 24, 25) when they form a complex. On model helicase substrates, increasing amounts of DnaC first stimulate then inhibit the DnaB helicase (24). After delivering DnaB to the prepriming complex, DnaC leaves, allowing DnaB to display its ATPase and helicase activities. Since DnaC is not found in isolated prepriming complexes, it must have a lower affinity for DnaB in this context than in solution. The extreme sensitivity of the system to DnaC is unusual and is not well understood. Presumably, some conformation of DnaB at the replication fork is especially susceptible to binding and inhibition by additional DnaC.
Initiation of replication at the bacteriophage X origin proceeds in a manner similar to that at oriC, except that the X 0 and X P proteins act as the respective analogues of DnaA and DnaC. DnaI3 forms a complex in solution with X P protein even in the presence of DnaC (27). Like DnaC, X P protein completely inhibits the ATPase of DnaB. In directing DnaB to the replication fork, the XP protein has been shown to interact with X 0 protein (28) as proposed for DnaC interacting with DnaA in delivering DnaB (26). Furthermore, an excess of the X P protein can inhibit DNA synthesis at a rolling-circle replication fork?
Although similarities exist between the action of the DnaC and X P proteins, they are not alike in all respects. For X, several heat shock proteins including DnaK are required to activate the DnaB protein after its delivery. The DnaK protein removes the XP protein from the complex at oriX (29), forms stable complexes with the X P protein (30), and may thereby prevent its reassociation with DnaB. In the oriC system, the addition of DnaK had no effect on the inhibition by DnaC.4 The existence of elongation mutants of dnuC implies that the mutant gene product is present at the replication fork. If these mutant proteins reflect the in vitro action of DnaC at ' G. C. Allen, Jr. and A. Kornberg, unpublished results. ' K. Stephens and R. McMacken, personal communication.
the fork, they may have a higher affinity for DnaB at the nonpermissive temperature and thereby bind and stall the helicase. The dominance of these elongation alleles over the wild-type gene lends support to this proposal. If the temperature sensitivity were a consequence of temperature-induced overexpression of wild-type DnaC, then protein synthesis would be necessary to see this elongation defect. The elongation mutants used in this report display their temperaturesensitive phenotype in the presence of chloramphenicol (12, 13, 15) implying that new protein synthesis is not required for this effect. These alleles might also be expected to affect initiation by preventing early unwinding, but initiation defects cannot be seen in vivo when elongation defects are also present. Although DnaC can clearly have negative effects on elongation, there is no evidence that it has a positive role other than for initiation.
The dnaC and dnaT genes are in the same operon. Had these elongation alleles been misclassified as dnuC-defective when they were actually in dnaT, the wild-type dnaC gene would then not be able to complement them. This simple possibility was ruled out by demonstrating that a plasmid which complements a dnuTts strain fails to complement the elongation alleles of dnuC.* It remains possible, however, that these elongation-defective strains bear an additional mutation which would not be complemented by the dnaC' gene.
It seems paradoxical that DnaC inhibits the function of a complex whose formation it promotes. This dual action may reflect a feedback control mechanism. When conditions are unfavorable for rapid DNA synthesis, the cell may utilize this control to slow the replication fork and lengthen the replication (C) period. In addition, the precise ratio of DnaC and DnaB necessary for optimal replication suggests that the cell has a mechanism to regulate the coordinate expression of these two genes located in different operons. Little is known about the expression of dnaC and dnaB as a function of cell cycle or growth conditions. There may also be factors in the cell that attenuate the inhibitory actions of DnaC. | 5,304.6 | 1991-11-25T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Microbial Enzymes with Special Characteristics for Biotechnological Applications
This article overviews the enzymes produced by microorganisms, which have been extensively studied worldwide for their isolation, purification and characterization of their specific properties. Researchers have isolated specific microorganisms from extreme sources under extreme culture conditions, with the objective that such isolated microbes would possess the capability to bio-synthesize special enzymes. Various Bio-industries require enzymes possessing special characteristics for their applications in processing of substrates and raw materials. The microbial enzymes act as bio-catalysts to perform reactions in bio-processes in an economical and environmentally-friendly way as opposed to the use of chemical catalysts. The special characteristics of enzymes are exploited for their commercial interest and industrial applications, which include: thermotolerance, thermophilic nature, tolerance to a varied range of pH, stability of enzyme activity over a range of temperature and pH, and other harsh reaction conditions. Such enzymes have proven their utility in bio-industries such as food, leather, textiles, animal feed, and in bio-conversions and bio-remediations.
Enzymes from Microbial Sources
Enzymes are the bio-catalysts playing an important role in all stages of metabolism and biochemical reactions. Certain enzymes are of special interest and are utilized as organic catalysts in numerous processes on an industrial scale. Microbial enzymes are known to be superior enzymes obtained from OPEN ACCESS different microorganisms, particularly for applications in industries on commercial scales. Though the enzymes were discovered from microorganisms in the 20th century, studies on their isolation, characterization of properties, production on bench-scale to pilot-scale and their application in bio-industry have continuously progressed, and the knowledge has regularly been updated. Many enzymes from microbial sources are already being used in various commercial processes. Selected microorganisms including bacteria, fungi and yeasts have been globally studied for the bio-synthesis of economically viable preparations of various enzymes for commercial applications [1].
In conventional catalytic reactions using biocatalysts the use of enzymes, either in free or in immobilized forms, is dependent on the specificity of enzyme. In recent advances of biotechnology, according to the requirements of a process, various enzymes have been and are being designed or purposely engineered. Various established classes of enzymes are specific to perform specialized catalytic reactions and have established their uses in selected bio-processes. A large number of new enzymes have been designed with the input of protein-engineering, biochemical-reaction engineering and metagenomics. Various molecular techniques have also been applied to improve the quality and performance of microbial enzymes for their wider applications in many industries [2]. As a result, many added-value products are being synthesized in global market with the use of established bioprocess-technology employing purposely engineered biocatalyst-enzymes.
Most of the commercially applicable proteases are alkaline and are bio-synthesized mainly by bacteria such as Pseudomonas, Bacillus, and Clostridium, and some fungi are also reported to produce these enzymes [3]. The xylanases with significant applications in bio-industries are produced by the fungal species belonging to genera Trichoderma, Penicillium and Aspergillus; the xylanases produced by these microorganisms have been found to possess high activity over a wide range of temperatures (40-60 °C ) [4].
Enzymes with Special Characteristics
Special characteristics of microbial enzymes include their capability and appreciable activity under abnormal conditions, mainly of temperature and pH. Hence, certain microbial enzymes are categorized as thermophilic, acidophilic or alkalophilic. Microorganisms with systems of thermostable enzymes that can function at higher than normal reaction temperatures would decrease the possibility of microbial contamination in large scale industrial reactions of prolonged durations [5][6][7]. The quality of thermostability in enzymes promotes the breakdown and digestion of raw materials; also the higher reaction temperature enhances the penetration of enzymes [8]. The complete saccharification and hydrolysis of polysaccharides containing agricultural residues requires a longer reaction time, which is often associated with the contamination risks over a period of time. Therefore, the hydrolytic enzymes are well sought after, being active at higher temperatures as well as retaining stability over a prolonged period of processing at a range of temperatures. The high temperature enzymes also help in enhancing the mass-transfer and reduction of the substrate viscosity [9,10] during the progress of hydrolysis of substrates or raw materials in industrial processes. Thermophilic xylanase are considered to be of commercial interest in many industries particularly in the mashing process of brewing. The thermostable plant xerophytic isoforms of laccase enzyme are considered to be useful for their applications in textile, dyeing, pulping and bioremediation [1,4].
Protease
Though the hydrolytic enzymes belong to the largest group of enzymes and are the most commercially-applicable enzymes, among the enzymes within this group the microbial proteases have been extensively studied [11][12][13][14][15][16]. Proteases prepared from microbial systems are of three types: acidic, neutral and alkaline. Alkaline proteases are efficient under alkaline pH conditions and consist of a serine residue at their active site [15]. Alkaline serine proteases have the largest applications in bio-industry. Alkaline proteases are of particular interest being more suitable for a wide range of applications, since these possess high activity and stability in abnormal conditions of extreme physiological parameters. Alkaline proteases have shown their capability to work under high pH, temperature and in presence of inhibitory compounds [15][16][17][18].
Vijayalakshmi et al. [16] have optimized and characterized the cultural conditions for the production of alkalophilic as well as a thermophilic extracellular protease enzyme from Bacillus. This bacteria named Bacillus RV.B2.90 was found to be capable of producing an enzyme preparation possessing special characteristics such as being highly alkalophilic, moderately halophilic, thermophilic, and exhibiting the quality of a thermostable protease enzyme. Alkaline proteases possess the property of a great stability in their enzyme activity when used in detergents [16,18,19]. The alkaline protease produced from Bacilli and proteases from other microorganisms have found more applications overall in bio-industries such as: washing powders, tannery, food-industry, leather processing, pharmaceuticals, for studies in molecular biology and in peptide synthesis [1,3].
Keratinases
Keratin is an insoluble and fibrous structural protein that is a constituent of feathers and wool. The protein is abundantly available as a by-product from keratinous wastes, representing a valuable source of proteins and amino acids that could be useful for animal feeds or as a source of nitrogen for plants [20]. However, the keratin-containing substrates and materials have high mechanical stability and hence are difficult to be degraded by common proteases. Keratinases are specific proteolytic enzymes which are capable of degrading insoluble keratins. The importance of these enzymes is being increasingly recognized in fields as diverse as animal feed production, textile processing, detergent formulation, leather manufacture, and medicine. Proteolytic enzymes with specialized keratinase activity are required to degrade keratins and for this purpose the keratinases have been isolated and purified from certain bacteria, actinomycetes, and fungi [20,21].
Keratinases have been classified as serine-or metallo-proteases. Cloning and expression of keratinase genes in a variety of expression systems have also been reported [22]. A higher operation temperature is required in the degradation of materials like feathers and wool, which would be possible using a thermostable keratinase. This aspect is of added advantage in achieving a higher reactivity due to lower diffusional restrictions and hence a higher reaction rate would be established. The enhanced stability of keratinase would increase the overall process yield due to the increased solubility of keratin and favorable equilibrium displacement in endothermic reactions.
Baihong et al. [23] have reported the enhanced thermostability of a preparation of keratinase by computational design and empirical mutation. The quadruple mutant of Bacillus subtilis has been characterised to exhibit the synergistic and additive effects at 60 °C with an increase of 8.6-fold in the t1/2 value. The N122Y substitution also led to an approximately 5.6-fold increase in catalytic efficiency compared to that of the wild-type keratinase.
An alkalophilic strain of Streptomyces albidoflavus has been reported to produce extracellular proteases [24]. This particular type of protease was capable of hydrolyzing keratin. The biosynthesis of this specific enzyme was optimized in submerged batch cultures at highly alkaline pH 10.5 and the enzyme yield was stimulated by using an inducer substrate containing keratin in the form of white chicken feathers. An enhanced (six-fold) protease production could be achieved with modified composition of culture-medium containing inducer at the concentration of 0.8% in the fermentation medium. The novelty of this crude enzyme has been reported to be its activity and stability in neutral and alkaline conditions. The maximum activity has been obtained at pH 9.0 and in the temperature range of 60-70 °C . This type of protease (keratinase-hydrolyzing keratins) is of particular significance for its application in industries since the crude enzyme showed its tolerance to the detergents and solvents tested [24]. Liu et al. [25] have studied the expression of extreme alkaline, oxidation-resistant keratinase from Bacillus licheniformis into the recombinant Bacillus subtilis WB600 expression system. The alkaline keratinase was characterized for its application in the processing of wool fibers.
Amylase
Amylases are significant enzymes for their specific use in the industrial starch conversion process [26]. Amylolytic enzymes act on starch and related oligo-and polysaccharides [27]. The global research on starch hydrolyzing enzymes based on the DNA sequence, structural analysis and catalytic mechanism has led to the concept of one enzyme family-the alpha amylase. The amylolytic and related enzymes have been classified as glycoside hydrolases. The enzymes have been produced by a wide range of microorganisms and substrates [28][29][30] and categorized as exo-, endo-, de-branching and cyclodextrin producing enzyme. The application of these enzymes has been established in starch liquefaction, paper, food, sugar and pharmaceutical industries. In the food industry amylolytic enzymes have a large scale of applications, such as the production of glucose syrups, high fructose corn syrups, maltose syrup, reduction of viscosity of sugar syrups, reduction of turbidity to produce clarified fruit juice for longer shelf-life, solubilisation and saccharification of starch in the brewing industry [31]. The baking industry uses amylases to delay the staling of bread and other baked products; the paper industry uses amylases for the reduction of starch viscosity to achieve the appropriate coating of paper. Amylase enzyme is used in the textile industry for warp sizing of textile fibers, and used as a digestive aid in the pharmaceutical industry [28].
Li et al. [32] have recently isolated, characterized and cloned a themotolerant isoamylase. For this purpose the enzyme was bio-synthesized using a thermophilic bacterium Bacillus sp. This novel enzyme has been reported to display its optimal activity at a remarkably high temperature of 70 °C, as well as being active in the alkaline range. This thermophilic enzyme has also been found to be thermo-stable between 30 and 70 °C , and its activity has been reported to be stable within a pH range of 5.5 to 9.0.
Gurumurthy et al. [33] completed the molecular characterization of an extremely thermostable alpha-amylase for industrial applications. This novel enzyme was produced by a bacterium Geobacillus sp which was isolated from the thermal water of a geothermal spring. This isolated bacterium showed the characteristics of thermo-tolerance and alkali-resistance. A purified preparation of amylase suitable for application was obtained using a DEAE-cellulose column and Sephadex G-150 gel filtration chromatography. The enzyme is a novel alpha-amylase due to its optimum activity at a very high temperature of 90 °C and an alkaline pH 8.0. However, this purified preparation enzyme was found to be stable only for 10 min at 90 °C .
Xylanase
Hemicellulose is one of main constituents of agricultural residues and plants along with cellulose, lignin and pectin [34]. Xylan is the major component of hemicellulose consisting of β-1,4-linked D-xylopyranosyl residues. The hydrolysis of xylan in plant materials is achieved by the use of a mixture of hydrolytic enzymes including endo-β-1,4-xylanase and β-D-xylosidase [35]. The importance of xylanase has tremendously increased due to its biotechnological applications for pentose production, fruit-juice clarification, improving rumen digestion and the bioconversion of lignocellulosic agricultural residues to fuels and chemicals [34]. Collins et al. [36] have extensively studied the xylanase enzyme and its families as well as the special xylanases possessing extremophilic characteristics. Xylanases have established their uses in the food, pulp, paper and textile industries, agri-industrial residues utilization, and ethanol and animal feed production [37,38].
The enzyme used for the purpose of bio-bleaching of wood pulp should be active in the conditions of alkaline pH, high temperature and at the same time it is desirable that this enzyme is stable at high reaction temperatures. Xylanase preparations used for wood processing in the paper industry should be free of cellulose activity. Cellulase-free xylanase preparations have applications in the paper industry to provide brightness to the paper due to their preferential solubilisation of xylans in plant materials and selective removal of hemicelluloses from the kraft-pulp. Kohli et al. [39] have studied the production of a c cellulase free extracellular endo-1,4-β-xylanase at a higher temperature of 50 °C and at pH 8.5 employing a selected microorganism: Thermoactinomyces thalophilus. The enzyme preparation was found to be thermostable at 65 °C , retaining its activity at 50% after 125 min of incubation at 65 °C . The crude enzyme preparation showed no cellulase activity and the optimum temperature and pH for maximum xylanase activity was found to be 65 °C and 8.5-9.0, respectively. A thermotolerant and alkalotolerant xylanase has been reported to be produced by Bacillus sp [40]. To make the applications of xylanase viable on commercial scales, heterologous systems of Escherichis coli, Pichia pastoris and Bacillus sp have been used to express xylanase activity [41,42]. The thermophilic microorganism Humicola spp. has been studied for its capability of bio-synthesising an alkali-tolerant β-mannase xylanase [43]. Acidophilic xylanases stable under acidic conditions of reaction are reported to be produced by an acidophilic fungus Bispora [44], in contrast a xylanase active under conditions of alkaline pH has been studied by Mamo et al. [45] for the mechanism of their high pH catalytic adaptation.
Recently three novel xylanases thermophilic in nature (XynA,B,C) have been characterized by Yanlong et al. [46], these were produced by Humicola sp. for their potential applications in the brewing industry. One xylanase gene, XynA, has been found to adapt to alkaline conditions and stability at higher temperatures. This XynA also possessed higher catalytic efficiency and specificity for a range of substrates. Yanlong et al. [46] have reported the application of three xylanases, XynA-C, in simulated mashing conditions in the brewing industry and found the better performance of 37% on filtration acceleration and 13% reduction in viscosity of substrate in comparison to the performance of a commercial trade enzyme, Ultraflo, a product from Novozyme.
Laccase/Ligninase
Ligninolytic enzymes are applicable in the hydrolysis of lignocellulosic agricultural residues, particularly for the degradation of the complex and recalcitrant constituent lignin. This group of enzymes is a mixture of synergistic enzymes, hence they are highly versatile in nature and can be used in a range of industrial processes [47][48][49]. The complex enzyme system consists of three oxidative enzymes: lignin peroxidase (LiP), manganese peroxidase (MnP) and laccase. These enzymes have established their applications in bio-remediation, pollution control and in the treatment of industrial effluents containing recalcitrant and hazardous chemicals such as textile dyes, phenols and other xenobiotics [50][51][52][53].
The paper and pulp industry requires a step of separation and degradation of lignin from plant material, where the pretreatment of wood pulp using ligninolytic enzymes is important for a milder and cleaner strategy of lignin removal compared to chemical bleaching. Bleach enhancement of mixed wood pulp has been achieved using co-culture strategies, through the combined activity of xylanase and laccase [54]. The ligninolytic enzyme system is used in bio-bleaching of craft pulp and in other industries such as for the stabilization of wine and fruit juices, denim washing [49], the cosmetic industry and biosensors [1,34]. Fungi are the most potent producers of lignin degrading enzymes. White rot fungi have been specifically studied for the production of these enzymes by Robinson et al. [50][51][52]. For the economical production of ligninolytic enzymes, agricultural residues have been used as the substrate in microbial production of lignin degrading enzymes [34].
Thermophilic laccase enzyme is of particular use in the pulping industry. Recently, Gali and Kotteazeth [55] reported the biophysical characterization of thermophilic laccase isoforms. These were initially isolated from the xerophytic plant species Cereus pterogonus and Opuntia vulgaris and showed thermophilic property [56][57][58]. In order to prepare laccase enzymes with special characteristics, several studies have been conducted to provide a scientific basis for the employment of laccases in biotechnological processes [59][60][61][62]. Forms of laccase with unusual properties have been isolated from the basidiomycetes culture of Steccherinum ochraceum [63], Polyporus versicolor [64] and a microbial consortium [65].
Cellulase
Cellulase enzymes are the third most important enzyme for industrial uses: world-wide research has been focused on the commercial potential of cellulolytic enzymes for the commercial production of glucose feedstock from the agricultural cellulosic materials [1]. The significance of cellulose hydrolyzing thermophilic enzymes in various industries includes the production of bio-ethanol and value-added organic compounds from renewable agricultural residues [66]. Cellulose is the most abundant natural resource available globally for bioconversion into numerous products in bio-industry on a commercial scale. For efficient bioconversion a strategy of efficient saccharification using cellulolytic enzymes is required. Hardiman et al. [66] used the approach of thermophilc directed evolution of a thermophilic β-glucosidase.
Cellulase is complex of three important enzymes which work synergistically owing to the crystalline and amorphous complex structure of cellulose. These enzymes, acting synergistically, hydrolyse cellulose to cello-biose, glucose and oligo-saccharides. Endoglucanase enzyme is the first one acting on amorphous cellulose fibers, attacking the glucose-polymer chain randomly, which releases small fibers consisting of free-reducing and non-reducing ends. The free-ends of the chain are then exposed to the activity of exoglucanase enzyme, which produces cellobiose. The third component of cellulase is β-glucosidase, which hydrolyses the cellobiose, producing the glucose as the final product of cellulose sacharification.
Thermostability is an important technical property for cellulases: since the saccharification of cellulose is faster at higher temperatures, the stability of enzyme activity is necessary to be maintained for the completion of the process. Though the enzymes have been prepared using thermophilic microorganisms, these enzyme preparations are not necessarily heat-stable. The activity profile for the thermal activation and stability of cellulases derived from two Basidiomycetes cultures was studied by Nigam and Prabhu [67]. The results proved that the prior heat-treatment of enzyme preparation caused activation of exo-and endo-glucanase activities, and improved the stability of enzymes over a period of reaction time. Therefore, the efficiency of cellulolytic enzymes may be increased by heat-treatment, by incubating buffered enzyme preparations without cellulose or substrates prior to the saccharification process [67].
Cellulolytic enzymes have been produced by a range of microorganisms including bacteria and fungi. The studies have been performed for the biosynthesis of a high-activity preparation in high yields [68][69][70]. Researchers have cultivated microorganisms to achieve cellulases of desired quality under submerged and solid state fermentation conditions for the economical production of enzyme using waste agricultural residues [1].
Miscellaneous Enzymes in Biotechnology
Various enzymes other than those described above have a significant place in the list of microbial enzymes, which have established their applications in bio-industries. Lipases have been widely studied for their properties and utilization in many industries [71][72][73][74][75]. Pectinases have established their role in the fruit and juice industries [76]. Certain enzymes are specifically required in pharmaceutical industry for diagnostic kits and analytical assays [77][78][79][80].
Bornscheuer et al. [81] have currently mentioned that in all the research and developments so far in the field of biocatalysis, the researchers have contributed in three waves of outcomes. These innovations have played an important role in the establishment of current commercially successful level of bio-industries. As a result recent bioprocess-technology is capable of meeting future challenges and the requirements of conventional and modern industries, for example Trincone [82] has reviewed the options for unique enzymatic preparation of glycosides. Earlier enzymatic process were performed within the limitations of an enzyme, whereas currently with the knowledge of modern techniques, the enzyme can be engineered to be a suitable biocatalyst to meet the process requirement. Riva [83] has identified the scope of a long-term research in biocatalysis, since there are underlying problems in the shift from classical processes to bio-based processes for commercial market. Table 1 summarizes some enzymes produced by microorganisms possessing special characteristics useful in various bio-processes. There is a tremendous scope for research and development to meet the challenges of third generation biorefeneries [83], for the production of numerous chemicals and bio-products from renewable biomasses [34]; or by the new glycoside hydrolases [82]; or new enzymes found in marine environments [84]. Although the research for the hemicellulases as important biorefining enzymes has not well established, biocatalysis for xylan processing is slowly progressing and a wide range of hemicellulases have been isolated and characterized [85]. Specifically about the biobased glycosynthesis, Trincone [82] has mentioned that the new prospects are open for the use of pentose sugars as main building blocks for engineered pentosides to be used as non-ionic surfactants or as the ingredients for prebiotic food and feed preparations.
Conclusions
Biotechnology is utilizing a wide range of enzymes synthesized on a commercial scale employing purposely screened microorganisms. Selected microorganisms have been characterized, purposely designed and optimized to produce a high-quality enzyme preparation on large scales for industrial applications. Different industries require enzymes for different purposes; hence microbial enzymes have been studied for their special characteristics applicable in various bio-processes. Recent molecular biology techniques have allowed to tailor a specific microorganism, to produce not only the high yields of an enzyme, but also enzyme with desired special characteristics such as thermostability, tolerance at high temperature and its stability in acidic or alkaline environment, and retaining the enzyme activity under severe reaction conditions such as in presence of other metals and compounds. | 5,028.8 | 2013-08-23T00:00:00.000 | [
"Biology",
"Chemistry",
"Engineering",
"Environmental Science"
] |
ABO and Rh blood groups in patients with lupus and rheumatoid arthritis
Background: Systemic lupus erythematous (SLE) and rheumatoid arthritis (RA) are autoimmune diseases in which the antigen-antibody system plays an important role. As blood group and Rh are determined by the presence or absence of antigens on the surface of red blood cells (RBCs), we aimed to determine the distribution of ABO and Rh blood groups in SLE and RA patients and its association with disease manifestations. Methods: This short communication is based on a study that was conducted on 434 SLE and 828 RA patients. We evaluated the distribution of ABO and Rh blood groups in RA and SLE patients. Results: This study projected that in lupus patients, Coombs-positive autoimmune hemolytic anemia and arthritis were more common among the B blood type and Rh-positive group, respectively. Furthermore, there was no relation between ABO and Rh blood group and rheumatoid factor (RF) and anti-Cyclic Citrullinated Peptide (anti-CCP) seropositivity. Moreover, there was no difference in distribution of blood groups in RA and SLE patients. Conclusion: The higher frequency of blood group B in hemolytic anemia, and positive Rh in arthritis in lupus patients, develop the hypothesis of probable role of ABO blood group antigen in some manifestations of lupus.
A BO blood group system is based upon the presence or absence of glycoproteins A and/or B on the surface of red blood cells (RBCs) (1). The mentioned antigens, play an important role in immunologic responses. The presence or absence of D antigen is reported as Rh positive and Rh negative, respectively (2). Systemic lupus erythematous (SLE) is a chronic autoimmune disorder characterized by specific autoantibodies. Lupus can present with hemolytic anemia, leucopenia and thrombocytopenia, thrombotic thrombocytopenic purpura, and multiple organ involvement (3). Rheumatoid arthritis (RA) is also an autoimmune disease with autoantibody formation, symmetrical arthritis of small and large joints with non-specific hematologic manifestations (4). Several studies evaluated the role of blood groups in different diseases. A greater risk for thromboembolic and cardiovascular accidents has been presented in non-O blood types (5). Besides, raising the odds of severe P. falciparum infection have been reviewed among people with type O blood (6). The first report on correlation between the blood groups and rheumatic diseases by Cohen in 1963, demonstrated no significant heterogeneity in the results obtained from different diseases (7). Moreover, some studies showed the differences in blood group distribution among patients with different types of autoimmune diseases.
Although RA was more common in the patients with A blood type; SLE was more prevalent among the patients with type O blood. In addition, blood type AB was less observed in all diseases compared to the others. There was considerable difference in the distribution of Rh factor in rheumatic diseases. Çildağ et al. presented that different genetic predisposition was associated with higher incidence of different rheumatic diseases. They noted the distribution of ABO blood groups in the world is O>A>B>AB, whereas it is A>O>B>AB and Rh+>Rh-in Turkey (8).
It has been proposed that identifying patients' blood group with autoimmune hemolytic anemia, can be interfered with autoantibody (9). Misinterpretation of blood group in patients with lupus has been recorded, due to autoantibody production after antigenic stimulation (10).
Those interesting findings point out the possible interference between blood group production pathways and SLE-associated autoantibodies. As a result of showing hematologic manifestations by lupus, we aimed to study the distribution of blood group and Rh in lupus and its organ involvements.
RA was also selected, due to the fact that it is also an autoimmune autoantibody formative disease without any specific hematologic manifestations.
Methods
This is a short communication on the basis of a crosssectional study which was conducted on 434 lupus and 828 RA patients, who were consecutively chosen from March 2014 to May 2017. The SLE and RA patients were diagnosed and classified by the 1997 American College of Rheumatology Revised Criteria, and 2010 ACR/EULAR Criteria, respectively (11,12). These patients were referred to the Rheumatic Diseases Research Center in Mashhad, and Qazvin Metabolic Disease Research Center in Qazvin. The recorded data included demographic features, clinical manifestations from the beginning stages of the disease until data collection, autoantibodies, blood group and Rh.
Since accessing to the blood group information of the volunteer samples from general population of Iran with no bias was not possible during this research, we applied a secondary data. This information was retrieved from the latest, comprehensive study on this issue by Shahverdi et al. (13), in which the blood type distribution of the general population was: O>A>B>AB.
Data were analyzed using SPSS Version 11.0. The normality of data was assessed by applying the Kolmogorov-Smirnov test. Descriptive data were presented as mean (±SD) for normally distributed variables.
Chi-square test was used to compare the qualitative variables between blood and Rh groups. Bonferroni correction was applied to minimize type I error. A p-value of less than 0.05 was considered statistically significant.
Ethical approval for this study was obtained from the Ethics Committee of Mashhad University of Medical Sciences, Mashhad, Iran (No# 941268).
As shown in table 1, Coombs-positive autoimmune hemolytic anemia was significantly more frequent among the B blood group (P=0.03, χ2=4.66). Additionally, arthritis was more common in Rh-positive patients compared to negative ones (P=0.02, χ2=4.99).
There was no significant difference between blood groups and other clinical manifestations in SLE patients. The most prevalent blood group and Rh in our lupus and RA patients were O+ RA patients: Average duration of disease was 5.7±5.6 years. The mean of the age distribution was 50.1±13.3 years. 75.6% of patients were anti-CCP positive and 66.4% were RF positive. No association was found between ABO blood group and anti-CCP (P=0.80, χ2=0.96) and RF (P=0.92, x2 =0.48). Furthermore, no evidence for association of Rh blood group, anti-CCP (P=0.50, x 2 =0.017) and RF (P=0.52, χ2 =0.01) was recognized.
It should be noted that there was no difference in distribution of ABO and Rh blood groups between RA and SLE patients (table 2).
As we mentioned in the method section, we used the latest secondary data about our national blood group distribution. We found that the blood group distribution of the general population was similar to our patients: O>A>B>AB.
Discussion
This study suggested a higher frequency of Coombspositive autoimmune hemolytic anemia in B blood group and arthritis in Rh-positive lupus patients. Besides, there was no difference in the distribution of blood types and Rh between RA and lupus. According to the existing documents, the distribution of ABO/Rh blood types was the same as our national distribution.
As indicated above, the first report belongs to 57 years ago, and the following article determined the blood groups of 99 patients with some varieties of articular disease; (31 cases with RA, 31 individuals with spondylitis, 15 patients with gout, 9 with disseminated lupus erythematosus, 6 with familial Mediterranean fever, and 7 with other diseases). Statistical analysis demonstrated no significant heterogeneity among the results obtained from different diseases (7). The last comprehensive study on this topic was conducted by Çildağ et al. (2017) which presented a difference in the frequency and distribution of blood groups in RA patients, among different nationalities and races. The distribution of ABO blood groups distribution in the world is O>A>B>AB, whereas it is A>O>B>AB and Rh+>Rh-in Turkey (8). These results are different from our study. Since accessing to the blood group information of the volunteer samples from general population of Iran with no bias was not possible during this research, we applied a secondary data from the latest, comprehensive study by Shahverdi et al. (13), in which the blood type distribution of the general population was: O>A>B>AB.
The first comprehensive report on the prevalence of various RBC antigens and phenotypes of diverse blood groups in general population of Iran was published by Shahverdi and et al. (2016). Blood type O was the most prevalent followed by A, B, and AB groups. According to several studies, this is similar to blood group distribution in the United States (13). These results are in line with the results described in the present study. Karadağ A. conducted a study to compare the distribution of blood groups in inflammatory rheumatic diseases and healthy subjects. He reported the A blood type was more prevalent in patients with inflammatory rheumatic disease and in the healthy subjects, followed by O, B, and AB blood groups, respectively. However, there was no significant difference between the ABO groups in terms of distribution (p>0.05). The Rh positive blood group was more prevalent in both groups compared to Rh negative, but there was a statistically significant difference in the Rh blood group distribution among the two groups (14).
Tamega et al. found discoid lupus erythematous is more severe in A blood group patients (15). Çildağ et al. suggested there was a significant difference in the distribution of blood groups in rheumatic diseases. Spondyloarthropathies, vasculitis, Behçet's, and RA were more common in A blood groups; Familial Mediterranean fever, lupus, and Sjogren were more common in O blood group. In addition, AB blood type was less common in those autoimmune diseases. While, they suggested that positive Rh is more prevalent in autoimmune diseases (8). Mosaca et al. reported hemolytic anemia as an early presentation of lupus, is valuable in distinguishing SLE from its mimickers (16). Some studies pointed to a possible association between some autoantibodies and Rh blood group. Many of these autoantibodies are specific for Rh antigens, and they commonly react weakly to Rhnegative compared to Rh-positive. These autoantibodies may be nonreactive only with specific Rh and D−negative RBCs (17). In our study, SLE patients with Rh-positive showed significantly higher presentation of arthritis. It may be useful to predict a less prevalence of arthritis in Rh-negative patients. There was no evidence for any influence of blood group and Rh in positive autoantibodies in RA patients. Additionally, blood type and Rh had no different distribution in RA and lupus patients.
Hemolytic anemia and arthritis were more frequent in B blood group and Rh-positive lupus patients, respectively. There was no relationship between blood, Rh types and rheumatoid factor (RF) and anti-CCP. There was not any difference in blood groups and Rh distribution in RA and lupus. There are few articles regarding this topic, and this study is one of the first articles investigated any link between those variables. Because of our large sample size and also existence of volunteer bias in the information of blood transfusion center, we were unable to compare our findings with a matched control group, simultaneously. Therefore, we had a noticeable limitation in this regard. Thus, further studies can better assess the role of blood group-related autoantibodies in different presentations of SLE and RA. | 2,486.4 | 2021-09-10T00:00:00.000 | [
"Medicine",
"Biology"
] |
Slow-Roll Inflation with Exponential Potential in Scalar-Tensor Models
A study of the slow-roll inflation for an exponential potential in the frame of the scalar-tensor theory is performed, where non-minimal kinetic coupling to curvature and non-minimal coupling of the scalar field to the Gauss-Bonnet invariant are considered. Different models were considered with couplings given by exponential functions of the scalar field, that lead to graceful exit from inflation and give values of the scalar spectral index and the tensor-to-scalar ratio in the region bounded by the current observational data. Special cases were found, where the coupling functions are inverse of the potential, that lead to inflation with constant slow-roll parameters, and it was posible to reconstruct the model parameters for given $ns$ and $r$. In first-order approximation the standard consistency relation maintains its validity in the model with non-minimal coupling, but it modifies in presence of Gauss-Bonnet coupling. The obtained Hubble parameter during inflation, $H\sim 10^{-5} M_p$ and the energy scale of inflation $V^{1/4}\sim 10^{-3} M_p$, are consistent with the upper bounds set by latest observations.
Introduction
The theory of cosmic inflation [1,2,3] that has been favored by the latest observational data [4,5,6,7], is by now the most likely scenario for the early universe, since it provides the explanation to flatness, horizon and monopole problems, among others, for the standard hot Bing Bang cosmology [8,9,10,11,12,13,14]. Inflation provides a detailed account of fluctuations that constitute the seeds for the large scale structure and the observed CMB anisotropies [15,16,17,18,19,20,21,22], as well as predicts a nearly scale invariant power spectrum.
Apart from the DBI models of inflation, another class of ghost-free models has been recently considered, named "Galileon" models [50,51]. The main characteristic of these models is that the gravitational and scalar field equations remain as secondorder differential equations. The Galileon terms modify the kinetic term compared to the standard canonical scalar field, which in turn can relax the physical constraints on the potential. In the case of the Higgs potential, for instance, one of the effects of the higher derivative terms is the reduction of the self coupling of the Higgs boson, so that the spectra of primordial density perturbations are consistent with the present observational data [52,53]. Galilean models of inflation have been considered in [52,53,54,55,56,57]. Some aspects of slow-roll inflation with non-minimal kinetic coupling have been analyzed in [58,59,60,61,62,63]. For a sample papers devoted to the study of slow-roll inflation in the context of Gauss-Bonnet (GB) coupling see [64,65,66,67,68,69,70,71,72,73,74,75,76]. This paper is dedicated to the study of the slow-roll inflation in the scalar-tensor model with non-minimal kinetic coupling to the Einstein tensor and coupling of the scalar field to the Gauss-Bonnet 4-dimensional invariant. We will consider models with exponential potential and exponential couplings. These type of non-minimal couplings arise in fundamental theories like supergravity and string theory after specific compactification to an effective four dimensional theory [77,78,79,80,81], which makes it appealing to analyze the mechanism of slow-roll inflation in such theories where the scalar field appears non-minimally coupled to curvature terms. This could provide a connection with fundamental theories in a high curvature regime characteristic of inflation.
An important feature of the exponential potential is that under its dominance the universe expands following a power-law, which plays important role in different cosmological epochs marked by the dominance of specific type of matter. This is the case of the late time dark energy dominated universe, where the exponential potential can give rise to accelerated expansion [82,83]. Applied to the study of the early universe, the exponential potential in the minimally coupled scalar field model, gives rise to power-law inflation [84,85,86,87,88] with constant slow-roll parameters. This implies that the exponential potential lacks a successful exit from inflation, which added to the fact that the tensor-to-scalar ratio is larger than the limits set by Planck data, rules out the exponential potential in the standard canonical scalar field. In the present paper we address the above shortcomings of the exponential potential, this time in the frame of scalar-tensor theories, taking into account non-minimal kinetic an GB couplings, which could play relevant role in the high curvature regime typical for inflation. We find that the above couplings predict values for the scalar spectral index and the tensor-to-scalar ratio that fall in the region quoted by the latest observational data. The paper is organized as follows. In the next section we introduce the model, the background field equations and define the slow-roll parameters. In section 3 we use quadratic action for the scalar and tensor perturbations to evaluate the primordial power spectra. In section 4 we analyze several models with exponential potential and exponential couplings. Some discussion is presented in section 5.
The model and background equations
We consider the following scalar-tensor model where G µν is the Einstein's tensor, G is the GB 4-dimensional invariant given by and κ 2 = M −2 p = 8πG. One remarkable characteristic of this model is that it yields second-order field equations and can avoid Ostrogradski instabilities. In the spatially flat FRW background one can write the field equations as follows where ( ) denotes derivative with respect to the scalar field. Related to the different terms in the action (2.1) we define the following slow-roll parameters The slow-roll conditions in this model are satisfied if 0 , 1 , ∆ 0 , .... << 1. From the cosmological equations (2.5) and (2.6) and using the parameters (2.8)-(2.11) we can write the following expressions forφ 2 and V It is also useful to define the variable Y from Eq. (2.13) as where it follows that Y = O(ε). Under the slow-roll conditionsφ << 3Hφ and i , k i , ∆ i << 1, it follows from the field equations (2.5)-(2.7) that they can be reduced to The scalar field equation (2.18) allows to determine the number of e-folds as where φ I and φ E are the values of the scalar field at the beginning and end of inflation respectively.
3 Second order action for the scalar and tensor perturbations Scalar Perturbations.
The details of the first and second order perturbations fro the model (2.1) are given in [89]. The second order action for the scalar perturbations is given by the following And the sound speed of scalar perturbations is given by The conditions for avoidance of ghost and Laplacian instabilities as seen from the action (3.1) are We can rewrite G T , F T , Θ and Σ in terms of the slow-roll parameters (2.8)-(2.11) and using Eqs. (2.13) and (2.14), as follows The expressions for G S and c 2 S in terms of the slow roll parameters can be written as Notice that in general G S = F O(ε) and c 2 S = 1 + O(ε). Also in absence of the kinetic coupling it follows that c 2 S = 1 + O(ε 2 ). Keeping first order terms in slow-roll parameters, the expressions for G S y c 2 S reduce to After the appropriate change of variables to normalize the action (2.1), we find the equation of motion, working in the Fourier representation, as ( [57]) (see (F.7) of [89]) From (3.19), and keeping up to first-order terms in slow-roll variables in (3.13) and (3.14), we find the following expression forz /z Taking into account the slow-roll parameters we can rewrite the Eq. (3.18) in the where After the integration of (3.21) using the slow-roll formalism (see [89] for details) we find, at super horizon scales (c S k << aH), the following asymptotic solutioñ On the other hand, from the relationship 24) and after integrating in the slow-roll approximation we find which gives in the super horizon regime, from from (3.19), the following k-dependence for the amplitude of the scalar perturbations Then, from the power spectra for the scalar perturbations we find the spectral index, in first order in slow-roll parameters Tensor perturbations.
The second order action for the tensor perturbations is given by ( [89]) where G T and F T are defined in (3.4) and (3.5) (in terms of the slow-roll variables (2.8)-(2.11)). The velocity of tensor perturbations is given by Following the same lines as for the scalar perturbations and introducing the following The deduction of the power spectrum for primordial tensor perturbations follows the same pattern as for the scalar perturbations. At super horizon scales (c T k << aH) the tensor modes (3.29) have the same functional form for the asymptotic behavior as the scalar modes (3.23), and therefore we can write power spectrum for tensor perturbations as where, in first order in slow-roll parameters, the tensor spectral index has the following form [89] An important quantity is the relative contribution to the power spectra of tensor and scalar perturbations, defined as the tensor/scalar ratio r For the scalar perturbations, using (3.27), we can write the power spectra as and all magnitudes are evaluated at the moment of horizon exit when c s k = aH In analogous way we can write the power spectra for tensor perturbations as Noticing that A T /A S 1 when evaluated at the limit 0 , 0 , ∆ 0 , ... << 1, as follows from (3.22) and (3.35), we can write the tensor/scalar ratio as follows taking into account the expressions for G T , F T , G S , F S given in (3.9)-(3.15), up to first order, and using the condition 0 , 0 , k 0 , ∆ 0 << 1, then we can see that c T c S 1 (in fact in the limit 0 → 0, c S = 1 independently of the values of 0 and ∆ 0 ) and we can make the approximation which is a modified consistency relation due to the non-minimal and GB couplings. In the limit 0 , ∆ 0 → 0 it gives the standard consistency relation for the single canonical scalar field inflation with n T given by (3.34). Here for standard consistency relation we mean the relation (3.41) independently of the content of n T . This expression can also be written as
Inflation Driven by Exponential Potential and Exponential Couplings
The exponential potential leads to scaling solutions important to describe different epochs of cosmological evolution, including solutions with accelerated expansion. In the standard minimally coupled scalar field it leads to inflationary solutions with constant slow-roll parameters. As stated in the introduction, the exponential couplings appear in a number of compactifications from higher dimensional fundamental theories such as supergravity and string theory, where the scalar field encodes the size of the extradimensions. Here we consider the exponential potential in presence of non-minimal kinetic and GB couplings given by exponential functions of the scalar field.
Let us start with the model (2.1) with the explicit form of the couplings given by from (2.8)-(2.11), using (2.16)-(2.18) we find the slow-roll parameters where we have set κ = 1 and α = V 0 f k . In standard slow-roll inflation (f k = 0, η = 0) the condition λ 2 << 1 is required, while in the presence of kinetic coupling this condition can be avoided due to the φ dependence in the slow-roll parameters. This φ-dependence of the slow-roll parameters also allows the graceful exit from inflation.
Using the condition 0 (φ E ) = 1 we find the expression for the scalar field at the end of inflation as With f k being positive, this field is well defined whenever λ > √ 2. It is clear from this expression that the larger η, the smaller φ E can be. It also follows that φ E varies very slowly with the increment of α because of the logarithm dependence. Assuming for instance λ = 2, η = 5, α = 10 3 , give φ E 1.08M p , and λ = 2, η = 5, α = 10 2 give The Eq. (2.19) gives the number of e-foldings as where φ I is the scalar field N e-folds before the end of inflation. Solving this equation gives the explicit form of φ I For the scalar spectral index, we see from (3.28) that up to first order in slow-roll parameters n s does not depend on k 0 and k 1 . So, if the model contains only nonminimal kinetic coupling the scalar spectral index becomes n s = 1 − 2 0 − 1 , and for the same reason from (3.40) follows that r = 16 0 . However, both 0 and 1 depend on all the parameters of the model. The analytical expression for n s is given by where φ I is given by (4.5). And for the tensor-to-scalar-ratio it is found Notice that the kinetic coupling constant f k and V 0 appear only in the combination Taking into account the dimensionality of the kinetic coupling one can set f k = 1/M 2 and then, α = V 0 /(M 2 M 2 p ). By replacing φ I from (4.5) into (4.6) and (4.7) we find the exact analytical expressions for the scalar spectral index and the tensor-to-scalar ratio in terms of the model parameters and the number of e-foldings in the slow-roll approximation: And for the slow-roll parameters N e-folds before the end of inflation, we find the following analytical expressions by the initial conditions on the scalar field, we still have freedom to fix V 0 by using the COBE-WMAP normalization [90,91], which sets the scale of M . The restrictions imposed by the COBE-WMAP normalization and the tensor-to-scalar ratio allows to set the set the scales of Hubble parameter and the energy involved in the inflation. From (3.37) where we used the limit ( 0 , 1 , ...) → 0 that gives A S → 1/2 and c 2 S → 1. Taking for instance the case N = 60, λ = 2, η = 1.5 we find 0 ∼ 0.0048 and r ∼ 0.077. Taking into account the COBE-WMAP normalization we find And using the tensor-to-scalar ratio under the same approximations done for P S not well defined as follows from (4.4) and (4.5) but n s and r can be found from (4.6), (4.7) and become constants given by which gives the relationship which imply that n s and r can not simultaneously satisfy the observational restrictions, and therefore this case is discarded.
which from (2.8) gives the following slow-roll parameters (4.20) where κ = 1 and β = V 0 f g . The scalar field at the end of inflation, from the condition 0 = 1, takes the form And the number of e-foldings from (2.19) is given by Solving this equation with respect to the scalar field we find the scalar field N e-folds before the end of inflation as Using (4.20) in (3.28) gives the expression for scalar spectral index as follows And replacing (4.20) into (3.40) gives the tensor-to-scalar ratio as r = 8 9 8βη + 3λe (η+λ)φ I 2 e −2(η+λ)φ I . (4.25) taking into account the above expression for φ I , it is found and r = 32λ 2 Notice that neither n s nor r depend on β, which appears only in the expressions for φ I and φ E . In order to appreciate the order of the parameters involved in inflation, having in mind that at the end of inflation the slow-roll parameters should be of order 1, we can evaluate the slow-roll parameters at the end of inflation by replacing φ E into Eqs. (4.20), giving (4.28) Replacing Φ I into (4.20) gives the slow-roll parameters N e-foldings before the end of inflation as According to (4.28), in order to keep ∆ 0 ∼ 1, λ should be close to 2, and from the expressions for 1 , ∆ 1 follows that η ∼ −1. All these approximations are valid under the condition that = 1 at the end of inflation. On the other hand, the exponential in the expressions for n s and r makes a big difference between n s and r provided N ∼ 60 for the above approximations for η and λ. In fact it provides a wrong value for n s and r ∼ 0. One can also consider the region of parameters where the exponent e −λN (λ+η) is of order 1. In this case, numerical analysis shows that if one assumes, for instance the values λ = −0.001 and η = 1, then n s and r fall in the appropriate region according to the latest observational data. For N varying in the interval [50,60], n s and r take values 0.961 ≤ n s ≤ 0.967 and 0.002 ≤ r ≤ 0.003 But in this same interval, the final field (4.21) which depends on η, λ, β takes the value φ E 0.065M p (assuming λ = −0.001, η = 1, β = −8 × 10 2 ). And the initial field (4.23), which depends additionally on N , varies in the interval 11.6M p ≤ φ I ≤ 11.8M p . As we can see the difference between the initial and final fields is almost two orders of magnitude.
Besides this, according to (4.28) when the scalar field reaches the final value, the slow-roll parameters 1 , |∆ 0 |, ∆ 1 >> 1 ( 0 = 1) indicating that the slow-roll regime is broken long before the field reaches the value φ E 0.065M p . Numerical analysis shows that at φ 7.6M p the slow-roll parameters 1 , |∆ 0 |, ∆ 1 ∼ 1 while << 1. But given these values of the parameters, for the scalar field it takes N ≈ 1 to evolve from φ I = 11.8M p to 7.6M p , making the slow-roll mechanism impracticable under the condition 0 = 1 at the end of inflation. It is also possible to assume that the inflation ends when any of the main slow-roll parameters becomes of order 1, which in our case would be ∆ 0 , and have viable inflation (see [74]). If the condition to end the inflation is imposed on the GB slow-roll parameter ∆ 0 (notice that 0 and ∆ 0 enter with the same hierarchy in the expression for the potential (2.12)), then the following results can be obtained. First, from the condition ∆ 0 = −1 the scalar field at the end of inflation takes the value From (2.19) we find the scalar field N e-folds before the end of inflation as This expression for φ I leads to the following n s and r according to (4.24) and (4.25) respectively The slow-roll parameters can be also explicitly written in terms of the model parameters when evaluated at the end and at the beginning of inflation. By replacing (4.30) into (4.20) we find the following expressions at the end of inflation 0 = λ βη(λ 2 + 2) + λ β 2 η 2 (λ 2 + 4) 2 βηλ + β 2 η 2 (λ 2 + 4) And replacing (4.31) into (4.20) gives the slow-roll parameters N e-foldings before the end of inflation as . Concerning the restrictions imposed by the COBE-WMAP normalization, we find from (3.37) Taking into account the values for the sample (4.39), where ∆ 0 is larger than , and using the COBE normalization for the power spectrum P ξ , we can write (4.42) And from the tensor-to-scalar ratio it is found A special case takes place when η = −λ in (4.19). The slow-roll parameters become constants given by The spectral index and tensor-to-scalar ratio are given by Solving these equations with respect to β and λ gives Thus, given the values of the observables n s and r, we can find the model parameters.
Taking for instance n s = 0.968 and r = 10 −2 , give β 0.36 and λ ±0.9. According to (4.42) and (4.43) The issue with this case is the constancy of the slow-roll parameters that leads to eternal inflation, unless an alternative mechanism to trigger the graceful exit from inflation is provided.
Kinetic and Gauss-Bonnet couplings I.
The following model includes both, the non-minimal kinetic and Gauss-Bonnet couplings.
The slow-roll parameters in terms of the scalar field take the form Where α and β are defined as before, i.e. α = f k V 0 and β = f g V 0 . By solving the condition to end the inflation, 0 = 1 we find .
which gives the scalar field N e-folds before the end of inflation as writing the scalar spectral index in terms of the scalar field from (3.28) and using (4.48) we find and for the tensor-to-scalar ratio from (3.40) and (4.48) we find r = 8λ 2 3e 2λφ + 8β 2 e −4λφ 9(2α + 1) . (4.53) At the horizon crossing, N e-folds before the end of inflation, we find the following expressions for n s and r and Notice that setting β = 0 we obtain the previous results (4.17) for n s and r. Evaluating the slow-roll parameters at the end of inflation (under the condition = 1) we Looking at the expressions (4.54) and (4.55) it can be seen that to obtain the observable values for n s and r, λ should be small or of the order 1 and α should be large. But, according to (4.56), this will make ∆ 0 >> 1 and k 0 >> 1, meaning that they become of the order 1 long before the end of inflation, spoiling the slow-roll approximation. Therefore it is not possible to satisfy the condition that all slow-roll parameters maintain in the region of ±1 at the end of inflation, for appropriate values of λ and α. Better results are obtained with the following model.
Kinetic and Gauss-Bonnet couplings II.
where the slow-roll parameters in terms of the scalar field take the form The end of inflation takes place for the scalar field φ E given by The number of e-folds, from (2.19) is given by Replacing φ E and solving with respect to φ I we find The scalar spectral index in terms of the scalar field is given by the following expression (from (3.28) and (4.58)) and for the tensor/scalar ratio (using (3.40) and (4.58)) it is found The observed values of n s and r are found through the evaluation of the above expressions N e-foldings before the end of inflation, leading to Notice that in fact the dependence of n s and r on α disappears when evaluated at the horizon crossing. Replacing (4.59) into (4.58) we find the expressions for the slow-roll parameters at the end of inflation n s and r do not depend on α, which is used to set the values of φ I and φ E .
Following the same lines as in the previous cases, we can evaluate the size of the Hubble parameter and the energy involved during inflation, obtaining that H ∼ 3 × 10 −5 M p and V 1/4 ∼ 7 × 10 −3 M p (taking into account the above slow-roll parameters).
The evolution of the slow-roll parameters for this case is shown in Fig. 5, where φ I 0.76M p and φ E 4.2M p . Observing Fig. 5 we can see that the slow-roll dynamics can also be consistent if one imposes the condition to end the inflation on the slow-roll parameters 1 = ∆ 1 = 1. This leads (from (4.58)) to the following scalar field at the end of inflation Then, from (4.60) and replacing φ E given by (4.67) it is found which leads, from (4.62) and (4.63), to the following expressions for n s and r Kinetic and Gauss-Bonnet couplings III.
This model leads to exact power-law inflation with the constant slow-roll parameters given by 0 = (3 − 8β)λ 2 6(α + 1) , 1 = 0, ∆ 0 = 8β(3 − 8β)λ 2 9(2α + 1) which predict the scalar spectral index and tensor/scalar ratio given by the following These equations can be solved with respect to α and β, resulting in Thus, for a given λ we can always find adequate values for α and β that satisfy the
Discussion
We have analyzed the slow-roll dynamics for the scalar-tensor model with non-minimal kinetic and GB couplings, where the potential and the functional form of the couplings are given by exponential functions of the scalar field. These type of couplings appear in a number of compactifications from higher dimensional fundamental theories such as supergravity and string theory, where the scalar field encodes the size of the extra dimensions. In the frame of the standard canonical scalar field, the exponential potential leads to important scaling solutions that describe different epochs of cosmological evolution, including solutions with late time accelerated expansion. It also leads to early time inflationary solutions, though it lacks successful exit from inflation and leads to tensor-to-scalar ratio larger than the current observational limits. With the Introduction of additional interactions like the non-minimal kinetic coupling and GB coupling (GB), we address the above shortcomings of the exponential potential and show that the tensor-to-scalar ratio can be lowered to values that are consistent with latest observational constraints [5,6] and that the model leads to a graceful exit from inflation.
First we considered a model with potential V 0 e −κλφ and kinetic coupling f k e −κηφ and have found that the observable magnitudes n s and r do not depend on α = V 0 f k , and depend only on the number of e-foldings and the exponential powers λ and η. The constants α and η can be used to set the values of the scalar field at the end and beginning of inflation, obtaining that φ E M p . A typical behavior of n s and r in this case is shown in Fig.1. In the particular case η = −λ, the slow-roll parameters become constant, but the obtained relationship between n s and r (4.18) makes it imposible to simultaneously satisfy the observational restrictions, making the model non viable for η = −λ. In the second case we considered the GB coupling given by F 2 = f g e −κηφ , and it was found that, similar to the previous case, neither n s nor r depend on β = V 0 f g , but this parameter can be used to set φ E and φ I . Considering the region of parameters where e −λN (λ+η) ∼ 1 it was found that, for 50 ≤ N ≤ 60, n s and r can take values in the intervals 0.961 ≤ n s ≤ 0.967 and 0.002 ≤ r ≤ 0.003, and the scalar field at the end of inflation can be as small as φ E ∼ 0.07M p . However, in this case some of the slow-roll parameters become larger than 1 long before 0 ∼ 1, breaking the slow-roll conditions. To fix this problem we have chosen to break the slow-roll conditions when ∆ 0 = −1, which gives excellent results as sown in Fig. 3 and is consistent with the slow-roll formalism according to the values obtained in (4.39) and (4.40). Considering the case η = −λ it was found that it leads to constant slow-roll inflation, but contrary to the case of kinetic coupling, it is always possible to find adequate values for the scalar spectral index and the tensor-to-scalar ratio. In the model with non-minimal kinetic coupling F 1 = f k e κλφ and GB coupling F 2 = f g e −κλφ it was found that in order to obtain viable values of n s (4.54) and r (4.55), the conditions λ 1 and α >> 1 should be satisfied, but this imply according to (4.56), that some slow-roll parameters reach values ∼ 1 long before the end of inflation, spoiling the slow-roll approximation. Better result is obtained with the model F 1 = f k e −κλφ and F 2 = f g e κλφ , where there is appropriate slow-roll approximation for values of λ and β that lead to n s and r in the range quoted by observations, as seen in Figs. 4 and 5. But more appropriate behavior of the slow-roll parameters was found if the condition to end the inflation is assumed as 1 = ∆ 1 = 1. In the proposed numerical example, the scalar-to-tensor ratio decreases to values r ∼ 0.008 as shown in Fig. 6, and the growth of the slow-roll parameters toward values of the order of ±1 at the end of inflation is more homogeneous than in the previous case, as seen in Fig. 7.
Finally, the model with V = V 0 e −κλφ , F 1 = f k e κλφ and F 2 = f g e κλφ was analyzed.
This model leads to inflation with constant slow-roll parameters and, as follows from The slow-roll analysis for the exponential potential, in the frame of the scalar-tensor theories with non-minimal kinetic and GB couplings, allows to find the scalar spectral index and tensor-to-scalar ratio in the range set by the latest observational data, and lead to successful exit from inflation. The advance in the future observations will allow to establish more accurate restrictions on the inflationary models with non-minimal couplings of the type considered here and reaffirm or rule out its viability. | 6,893.6 | 2019-07-16T00:00:00.000 | [
"Physics"
] |
Rotary slanted single wire CTA-a useful tool for 3 D flows investigations
The procedure is described of experimental investigation of a statistically stationary generally nonisothermal 3D flow by means of a constant temperature anemometer (CTA) using single slanted heated wire, rotary round the fixed axis. The principle of this procedure is quite clear. The change of the heated wire temperature modifies ratio of CTA sensitivities to temperature and velocity fluctuations. Turning the heated wire through a proper angle changes the sensitivity to components of the instantaneous velocity vector. Some recommendations are presented based on long time experiences, e.g. on the choice of probe, on the probe calibration, to the measurement organization and to the evaluation of results.
Introduction
Knowledge of flow behaviour and basic characteristics positively affects development and testing all manner of human work with the fluid flow either outside or inside an object of research, e.g.buildings, jet engines, furnace etc.The significant energy losses are caused by the interactions of boundary layers, wakes, corner vortices etc. generating on immobile or moving parts.Several dynamic phenomena result from these interactions.Apparently the knowledge on the temperature and on the velocity vector distributions in typical flow fields, in several examples of related devices, is contributing to the design and to the improvement of CFD codes.This can accelerate, together, with the knowledge of the flow disturbances features, the development of any device family and also this enables giving a prediction of device's merits and demerits.A thermo-anemometer is appropriate enough for measurement of the essential individual and joint velocity and temperature characteristics in unsteady nonisothermal 3D-flow field.The first stimulus to start with the development of the procedure were the wish to measure Reynolds shear stress components though only one channel CTA had been available in laboratory.The second stimulus was the suggestion of ŠKODA Turbine establishment to arrange measurements of temperature and velocity turbulent fluctuations in a water steam turbine.Both stimuli encourage to start the solution of the problem at the IT in seventies of twenties century.It follows from theoretical grounds of thermoanemometry (e.g.Hinze [1]) that basically are both aims achievable [2].Possibly the oldest executions of velocity, temperature and/or density fluctuations separations from ______________________<EMAIL_ADDRESS>output signal are in connection with Baldwin [3,4], Corrsin [5,6], Kovasznay [7,8], Morkovin [9] and Sandborn [10,11].Namely the CCA mode was assumed and the evaluation procedures were complex, originally a semi-graphic evaluation of measurement was applied; Kovasznay [7], introduced "fluctuation diagram" -graphic technique to improve the evaluation of equation with three unknowns, later modified by Morkovin [9] for use in compressible fluid flow.Hajime and Kovasznay [12] demonstrated successfully measurement of Reynolds Stress by a single rotated hot-wire in 1968.Later Hoffmeister [13] published his own rather complex procedure on using a single HW-probe in 3D turbulent isothermal flow.Obviously, no new fundamental physical ideas will be presented in the paper but an effective procedure of measurement evaluation and some sources of threatening errors will be described assuming contemporary experiences and devices.
Theoretical considerations and description of the method
The presented method of measurement by means of a single hot-wire thermo-anemometer is derived on the following assumptions: 1.The investigated object is three-dimensional, non-isothermal and unsteady fluid flow.2. The fluid properties are close to continuum.3. The unsteadiness is a superposition of deterministic (particularly periodic) disturbances and statistically stationary random fluctuations of velocity and temperature.4. The hot-wire probe axis of rotation (direction defined by the unit vector r ) is fixed and it is identical with the positive axis y of the orthogonal coordinate system (x, y, z) introduced in the investigated region (Figure 1) so as to be positive the x-component of the mean flow velocity.5.The wire is inclined to the fixed axis of rotation at the angle 2 π Θ < determined by the scalar product of the unit vectors of the wire direction l and the axis of rotation r 6.The wire is heated in CTA mode and its cooling in the fluid flow perpendicular to the wire axis is described by the generalized Collis-Williams cooling law together with the relation ( ) with respect to the concept of Hinze [1] on effective velocity U i.e. heat conduction from a wire skewed to the flow velocity W is equal to the heat convection from the same wire being perpendicular to the "effective" velocity U.The angle ϕ is determined by the scalar product The Nusselt number, Nu and the wire Reynolds number, Re are defined as usually e.g.Bruun [14].Next the notations are introduced: ambient temperature T (K), hotwire temperature T w (K) and effective temperature T m ( ) The calibration parameters A, B, M, N and κ must be determined in advance.The best calibration is done in the fluid's state identical at least very similar to the state during experiment.Several other formulations are available on the empirical description of the heat transfer from hot-wire in a fluid flow but here presented seem most universal for gaseous flows.
7. Finally, the assumptions of the thermoanemometer linearized theory are valid.
An example of definitions of the coordinate system and angles notation is shown in Fig. 1 borrowed from [15].Let us suppose in accord with the Fig. 1 the velocity vector as the superposition of the mean velocity vector and the vector of velocity fluctuations here β denotes the pitch angle, angular deflection of the mean velocity vector from the plane (x, y) ≡ (x, r) and µ is the yaw angle, angular deflection of the mean velocity vector from the plane y ≡ r = const.
The wire starting position 0 ψ = is in the plane (x.y) ≡ (x, r) with the shorter prong upstream the longer one then the components of the wire direction unit vector are 0 sin , cos , 0 Turning the wire of angle ψ (clockwise positive angle) from the starting position with the wire in the plane (x, r) then the components of the wire direction unit vector become Next the angle between the wire and the mean velocity ϕ , the wire setting angle, can be evaluated from the scalar product (4) and substituted into the directional sensitivity equation (3).Then after some formal rearrangement the formulae (3) is rewritten Having in mind the relations ( 11), ( 13) and ( 14), the direction of the mean velocity vector could be determined simply from the measurement of the CTA output signal distribution E versus ψ and evaluating extremes.Of course, the wire temperature and the fluid flow parameters must be unchanged during measurement of the distribution ( ) E ψ .The modulus of the mean velocity vector W is then calculated using the cooling law (2).Then it is possible to check the derived transformation of the mean output signal distribution into the relevant distribution of the normalized effective velocity.The equation with the empirical coefficient κ must valid ( ) ( ) Interpretation of the output signal fluctuations follows from the cooling law (2) and the equation (3) of the directional sensitivity of a heated wire.In the framework of the linearized thermo-anemometer theory are valid formulas , .
The sensitivities to the effective velocity fluctuations and to the temperature fluctuations are derived in the forms The partial derivatives , 1, 2,3 i F i = are better transparent if the mean flow coordinate system is introduced. 1 Heat conductivity, dynamic viscosity and density in gas flow are assumed in forms: ; ; .
The mean flow system (x 1 , x 2 , x 3 ) is defined by the conditions It is easy to transform the coordinate system (x, y, z) into (x 1 , x 2 , x 3 ) e.g. the components of the HW unit vector ( 9) become form sin cos cos cos sin , sin sin cos cos cos , sin sin .Then the partial derivatives i , i 1, 2,3 The evaluation of the dimensionless variances of the effective velocity fluctuations and the temperature one together with the dimensionless covariance of velocity and temperature consists in the interpolation of the linear function with three unknowns subscripts at the round brackets indicate that measurement of 2 2 e E must be done at several wire temperatures T w and some roll angels ψ.
Only afterwards they are possible the interpolations of the turbulent heat transfer vector components by means of the formulae and the evaluation of the dimensionless Reynolds stress tensor components 2 i j w w W requires the interpolation of the function with six unknowns ( ) ( ) 3 Practical application of the rotary slanted single wire CTA As an example the distribution of the output voltage from CTA E with wire heated on 225 ° C is shown in Fig. 2 (triangles) together with walues of E measured at wire temperatures T w between 150 ° C and 275 ° C (black circles).Distribution od the normalized variance of the signal fluctuations demonstrates the difference in the fluctuations level with the wire in wake of longer prong and with the wire upstream the longer prong.
An agreement of the mean flow measurement results and the calibration of the wire directional sensitivity is demonstrated in Fig. 3.The evaluated ratio of the mean effective velocity and the mean flow velocity is plotted against the formulae (10) with the empirical coefficient κ (evaluated from the directional sensitivity calibration).
The distribution in Fig. 3 is the certification that the evaluations of the mean flow characteristics are right.
01047-p.4
A simple check of the statistical estimates of the dimensionless variances of the effective velocity fluctuations and the temperature fluctuations and of the dimensionless covariance of velocity and temperature is shown in Figure 4.The interpolated value of the nondimensional variance 2 2 e E is evaluated by substituting statistical estimates For the discussed example the ratio interpolation divided by measurement equals (1 ± 0.08) though estimates of the relative errors 2 t and tu are very high 50 and 80 percent; the estimated error of 2 u is 5 percent.It should be noticed that the presented example was selected from an industrial measurement.At that time the flow was without important sources of heat flux and virtually turbulent heat transfer was negligible.
The statistical estimates of the dimensionless Reynolds stress tensor components The comparison of measurement results (circles) and the interpolation (full line) is shown in Fig. 5. However the measurement's scatter seems big, the calculated relative probable error of the interpolation is only 6 percent.
The presented example is describing a non isothermal flow of the overheated water steam in an industrial device with turbulence level 11.6 percent and with the intensity of temperature fluctuations 1.9 percent.Some other examples in the high speed air flow and flow of a wet water steam might be found in references [15 -19].
Conclusions
The useful tool is presented for the experimental investigations of complex three dimensional nonisothermal unsteady fluid flows fulfilling exactly determined assumptions that are only little restricting the employment.
The single slanted heated wire, rotary round the fixed axis, heated by a CTA on several operating temperatures constitutes this tool.
Theoretical grounds of the procedure are illustrated by an example from some industrial measurement.More neatly arranged results are received from laboratory experiments.
Fig. 1 .
Fig. 1.Example of the introduction of coordinate system and the angle notation in an axisymmetric flow region.
2 e
this it is clear that the left side of this relation equals maximum if ψ are turning angles at which the wire is perpendicular to the mean velocity i.e. position of maximal wire heat loss that appears in maxima of the CTA output signal a b , E E .The relation is valid + .(13)Two minima of the wire heat loss appear with c E and d c E E ≪ at turning angles c ψ and d ψ .The physical significance has the angle c ψ at the probe position with the shorter prong/electrode upstream the longer one.Opposite direction d ψ corresponds to the hot wire position in the longer prong wake.Then a deeper decrease of the output signal E comes that is accompanied by an extensive amplification of the fluctuating component e.g.
the equation (21).Comparison with the value following at the same conditions from measurement illustrates the accuracy of interpolation.
Fig. 5 .
Fig. 5. Interpolation of the dimensionless variance of effective velocity fluctuations. | 2,856 | 2013-04-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Whole genome expression profiling of blood cells in ovarian cancer patients -Prognostic impact of the CYP1B1, MTSS1, NCALD, and NOP14 genes
Ovarian cancer patients with different tumor stages and cell differentiation might be distinguished from each other by gene expression profiles in whole blood cell mRNA by the Affymetrix Human Gene 1.0 ST Array. We also examined if there is any association with other clinical variables, response to therapy, and residual tumor burden after surgery. Patients were divided into two groups, one with poor prognosis, advanced stage and poorly differentiated tumors (n = 22), and one group with good prognosis, early stage and well- to medium differentiated tumors (n = 11). Six genes were found to be differentially expressed: the PDIA3, LYAR, NOP14, NCALD and MTSS1 genes were down-regulated and the CYP1B1 gene expression was up-regulated in the poor prognosis group, all with p value <0.05, adjusted for mass comparison. In survival analyses, CYP1B1, MTSS1, NCALD and NOP14 remained significantly different (p<0.05). Patient groups did not differ in any transcript related to acute phase or immune responses. This minimal gene expression signature of prognostic ovarian cancer-related genes opens up an avenue for more practicable monitoring of ovarian cancer patients by simple peripheral blood tests, which may evolve into a tool to guide selection of curative and postoperative supportive therapies.
INTRODUCTION
Ovarian cancer is an important disease among the gynecological malignancies. Despite a slowly decreasing incidence in many Western countries the prognosis is still unfavorable [1], and the overall 5-year survival rate is approximately 50% at the best centers after primary cytoreductive surgery and combination chemotherapy with paclitaxel and carboplatin [2]. Significant improvements in treatment results have been achieved during the last decades and further improvements can be expected in the future for this disease. Many clinical trials are ongoing to improve chemotherapy, but also to incorporate target therapy agents [2,3].
Predictive and prognostic factors are important in guidance of expected response and survival and for the choice of optimal primary therapy [4,5]. A number of prognostic factors identified so far are mainly clinical, e.g. stage, type of histology, FIGO grade, and residual tumor after primary surgery [6,7]. The amount of residual tumor is in fact among the strongest prognostic factors for survival [6,7]. The goal of the primary cytoreductive surgery is to reduce the tumor volume as much as possible to no residual tumor macroscopically or at least to less than 1 cm tumor diameter of the remaining nodules. Centralized surgery and experienced tumor surgeons are important to achieve this goal, but the biology of the individual tumor is also thought to be of importance for the outcome of the surgery and prognosis [6,8]. It should therefore be possible to identify biomarkers in a blood sample that adds prognostic value, and an alternative to performing a biopsy of tumor tissue.
The biology of individual ovarian tumors can be characterized by their genetic profiles with up-or downregulation of important oncogenes and tumor suppressor genes. DNA changes and expression of RNA can be studied with microarray techniques on tissue samples from the tumor. Fresh or fresh-frozen tissue is generally needed for these analyses, but often these types of specimens are not available in the routine clinical work, especially during postoperative follow-up. A more practicable way would be to analyze blood cell samples from the individual patient, both blood leukocytes and circulating tumor cells may be the sources of mRNA in these analyses [9,10], but on a molar basis the leukocytes can be expected to be the dominating source of mRNA. The mRNA species from leukocytes is thought to reflect more general and systemic reactions and tumor cell mRNA species would reflect specific tumor characteristics. In our pilot study, we corroborated that two groups of ovarian cancer patients with or without residual tumor mass after primary surgery showed differences in gene expression profiles in blood cells which seemed to agree with such a contention since most of the genes that differed belonged to rather cancerspecific pathways [11]. In the present study, we therefore tested the hypothesis that patients with different tumor stage and cell differentiation can be distinguished from each other by performing a whole transcriptome profile in whole blood cell mRNA of ovarian cancer patients. We also wished to examine if these profiles were associated with other clinical variables, such as therapy response, survival and residual tumor burden after surgery.
Clinical characteristics
The characteristics of the patients and tumors are presented in Table 1. The complete series analyzed encompassed 33 patients with ovarian carcinomas (FIGO stages I-IV), pre-selected to represent a high-risk group (Group A, n=22) and a low-risk group (Group B, n=11). FIGO stage (stage III-IV vs. I-II) and tumor grade (grade 3 vs. grade 1-2) were used to define the two groups. The mean age of the patients in the two risk groups (63.6 and 60.3 years) was not significantly different. All tumors included were adenocarcinomas. In the high-risk group 21 of 22 cases (95.5%) were seropapillary adenocarcinomas, and in the low-risk group seven of 11 cases (63.6%). In the latter group two tumors were of the endometrioid type and two cases were clear cell carcinomas. This difference was statistically significant (p = 0.016). Residual carcinoma after the primary surgery was more frequent in the highrisk group (68.2%) than in the low-risk group (18.2%), p = 0.007. The mean follow-up period for patients alive was 42.1 months (range 14-86 months). The 5-year overall survival rate of the complete series was 48.8% (95% CI 28.4-69.2%) and differed between the groups; in the highrisk group 28.8% and in the low-risk group 100% (logrank test; p = 0.0004).
Gene expression data as predictors of outcome
An unsupervised cluster analysis was made from the gene expression array for the 100 genes with lowest unadjusted p values including all patients from groups A Table 2: Blood leukocyte gene expression profiles of ovarian cancer patients, unguided analysis. Comparison of Group B vs. Group A, a negativ fold change indicates a down-regulation of gene expression. The moderated t-statistics generated the p value in the same manner as an ordinary t-test. Adjusted p value is also known as q-value or FDR. This is a Benjamini and B, whereby only three patients were misclassified (see heat map in Figure 1). Six genes; PDIA3, CYP1B1, LYAR, NOP14, NCALD and MTSS1 were found to be expressed significantly different between the two groups when adjusted for multiple testing (Table 2). At the time of analysis 15 patients (all in the highrisk group) were dead of disease. No cases of intercurrent death were recorded. Overall survival rate was calculated for patients with leukocyte mRNA up-regulated (level above the median value of all patients) or down-regulated (level below the median) of the six genes analyzed. Upregulation of the CYP1B1-gene, and down-regulation of MTSS1, NCALD, and NOP14 genes, was associated with a significantly inferior survival rate ( Figure 2). Expression of PDIA3P and LYAR showed the same pattern, but the differences were non-significant.
There was a highly statistically significant association between tumor stages (stage I-II vs. III-IV) and expression of all six genes studied. Down-regulation of MTSS1 was noted in 74% of advanced stage tumors, but only in 10% in early stages (p = 0.0007). CYP1B1 was overexpressed in 65% of advanced stage tumors and in 10% in early stages (p = 0.0035). The other four genes were all significantly down-regulated in advanced stages (Table 4).
Serous papillary carcinomas were most frequent in this series and this type of histology showed borderline association with expression of MTSS1 (p = 0.092). For the other five gene types this association was not statistically significant.
On the other hand, FIGO-grade of the tumor was highly statistically associated with expression of all six genes. FIGO-grade 3 was compared with FIGO-grade 1-2 in the analyses. MTSS1 showed the strongest association with poorly differentiated tumors, and 77% of these tumors showed down-regulation of this gene (Table 4).
A statistical model using Cox proportional regression analysis and the best subset technique showed that a combination of the up-regulated CYP1B1 and the down-regulated MTSS1 gene expressions predicted overall survival rate most efficiently. A three-gene model also included NOP14. Addition of information from the other genes only marginally improved the model.
DISCUSSION
In this whole genome expression study on blood cell mRNA from ovarian cancer patients, only six genes, PDIA3, CYP1B1, LYAR, NOP14, NCALD, and MTSS1 showed a statistically significant difference in expression between subjects with tumors that were poorly differentiated vs. those who had moderately to well differentiated tumors. Four of these, CYP1B1, NCALD, NOP14, and MTSS1 C were significantly associated with prognosis in survival analyses ( Figure. 2). Since tumor differentiation is a major prognostic factor, it makes sense that these genes account partly for this difference in prognosis. This is further supported by the known functions of the six genes, which all appeared to be of relevance for tumor biology in general, and in particular for a partly estrogen-linked tumor such as ovarian cancer, as outlined below. In a cluster analysis based on the gene expression data, only three of the 33 included patients were misclassified (Figure. 1).
The CYP1B1 (Cytochrome P450, family 1, subfamily B, polypeptide 1) mRNA encodes a protein that catalyses reactions involved in drug metabolism and the synthesis of lipids, including cholesterol and steroids [12,13]. A search in the BioGPS database [14] confirmed gene expression in normal whole blood and in particular in CD14+ monocytes. The protein can be detected in several normal tissues as well as in tumor and metastasis tissues, levels tend to be elevated in tumor tissue compared to normal tissue [13]. Some studies reported it to be undetectable in normal tissue but detectable in tumor and metastasis tissue [12,15]. Importantly, CYP1B1 can be found in tissues that are estrogen-stimulated, like the breast, ovary, and uterus [16]. In these tissues its main function is to catalyze the hydroxylation of estradiol to 4-hydroxyl estradiol (4-OHe2) [16]. Several studies have suggested that the CYP1B1 gene may be a marker for ovarian cancer and a possible target for intervention [13,15,16]. Modugno et al argues that subgroups of ovarian cancer patients respond well to endocrine treatment and calls for biomarkers that can predict such patients [17]. Thus, it is remarkable and suggestive of some systemically active regulatory process that we could pick up a significant difference in mRNA levels of this particular gene between the two patient groups even in cells from peripheral blood.
The MTSS1 (metastasis suppressor 1) gene, also known as Missing in Metastasis gene (MIM), encodes a protein that contains multiple functioning motifs, thought to act as an actin-binding scaffold protein. It has been implicated in carcinogenesis and metastasis; some researchers consider it to be a potential metastasis suppressor gene [18][19][20]. One study of colorectal cancer (CRC) found an increased MTSS1 protein expression in CRC tissue compared to normal tissue and it was correlated to poor differentiation, tissue invasion, presence of lymph node metastases, high TNM stage: strong positive protein expression was associated with significantly shorter survival [19]. A loss of MTSS1 protein expression in gastric cancer has been associated with large tumor size, poor differentiation, deep invasion level, the presence of nodal metastasis, and poor outcome in patients who underwent gastrectomy [18]. The sparse clinical data is thus fairly contradictory. Animal and cell-line studies suggest that MTSS1 is more resistant to cell-cell junction disassembly, and a loss of protein expression promotes epithelial-to-mesenchymal transition and metastasis [20,21]. Our results support the view that down-regulated blood cell MTSS1 expression is a marker of worse prognosis in ovarian cancer.
The NCALD (neruocalcin delta) mRNA encodes a member of the neuronal calcium sensor (NCS) family of calcium-binding proteins. The protein is thought to be a regulator of G protein-coupled receptor signal transduction and several alternatively spliced variants of the gene exists, all encoding the same protein. NCALD gene expression can be found in several tissues [22], for example in many parts of the normal brain, natural killer cells, lymphoblasts, and trace amounts of NCALD gene expression can be found in healthy ovarian tissue [15]. So far very little is known about this gene in cancer. A study by Couvelard et al found NCALD gene expression to be one of many genes that can distinguish between metastatic and non-metastatic pancreatic endocrine tumor tissue [23]. However, another gene belonging to the same gene family, the neuronal Ca 2+ sensor protein family (NCS), termed VILIP1 [24], has been more extensively studied in cancer, and shown to act as a tumor suppressor gene by inhibiting cell proliferation, adhesion, and invasiveness [25,26]. The VILIP-1 protein and mRNA was down-regulated in a study on non-small cell lung carcinoma [25], and high gene expression was reported to be associated with a high rate of lymph node metastasis and poor prognosis in colorectal cancer patients [27].
PDIA3, the protein disulfide isomerase family A, member 3 gene, encodes a protein in the endoplasmatic reticulum that interacts with lectin chaperones calreticulin and calnexin to modulate the folding of glycoproteins that are newly synthesized [28,29]. The protein PDIA3 (also known as ERp57, GRP58, ERp60, and ERp61) has been found to be active in several other locations and reactions, for example interactions in the nucleus which involve DNA repair, DNA damage recognition, and apoptosis [28,29]. A study of a number of different ovarian cancer cell-lines reported PDIA3 mRNA expression to be strongly elevated compared to human ovarian surface epithelial cells, and protein expression followed the same pattern [30]. Cicchillitti et al described that paclitaxel-resistant cells lack the normal interaction between b-actin and PDIA3 [29]. The BioGPS database [14] confirmed PDIA3 gene expression in normal whole blood cells and most other tissues.
The Ly1 antibody reactive homolog (LYAR) was first described by Su et al as a cDNA encoding zinc finger protein isolated from mouse T-cell leukemia line, they also showed that cells with this protein had increased ability to form tumors in nu/nu mice and therefore called it a nucleolar oncoprotein in cell growth regulation [31]. The BioGPS database [14] showed that LYAR gene expression is found in many normal tissues and whole blood. Highest levels are reported in NK-cells, T-cells, lymphoblasts, CD34+ cells, and testis interstitial tissue.
Finally, we find it remarkable, and worth stressing, that no expression signature indicating unspecific disease activity in the immune system or general acutephase inflammatory response mechanisms, such as that found in a recent study on prostate cancer [32], seemed to differentiate the poor and good prognosis groups. This raises our expectations that the novel prognostic signature described here is a real feature of the prognostic differences in tumor biology within the panorama of ovarian cancer.
In conclusion, we propose six genes that are promising candidates as a prognostic biomarker signature measured as mRNA in peripheral blood cells in ovarian cancer patients, PDIA3, CYP1B1, LYAR, NOP14, NCALD, and MTSS1. Monitoring of these in peripheral blood samples in future longitudinal multicenter followup studies, will be necessary for validation of the clinical utility of this proposed prognostic gene expression signature.
Ethics statement
Investigation has been conducted in accordance with the ethical standards and according to the Declaration of Helsinki and according to national and international guidelines and has been approved by the authors' institutional review board, the Regional Board of Ethics, Uppsala, Sweden. Written informed consent was obtained from the patients.
Subjects
Blood samples were consecutively collected from ninety-two women with ovarian cancer, FIGO (International Federation of Gynecology and Obstetrics) stage I-IV, admitted for treatment at the Department of Gynecological Oncology, University Hospital in Örebro, Sweden from October 2004 to December 2011. Enrollment took place 2-4 weeks after the primary cytoreductive surgery. Patients with a defined tumor stage and differentiation by a reference pathologist were considered for this project, and samples with RNA of satisfactory quality (see methods) were then analyzed. Thirty-three of the patients were included in this study. Patients were divided into two groups, A and B, one with a known poor prognosis; poorly differentiated tumors (n = 22), and one group with good prognosis; well-to medium well differentiated tumors (n = 11). See Table 1 for tumor characteristics.
Blood collection and extraction
The blood was collected in PAXgene tubes and the total RNA was extracted with PAXgene Blood RNA Kit (QIAGEN Inc., Valencia, CA, USA) in compliance with the manufacturer's instructions. Total RNA concentration was measured with spectrophotometry on a ND-1000 instrument (NanoDrop Technologies, Wilmington, DE, USA) absorbance ratio (260/280 nm) between 1.9-2.2 accepted. RNA quality was evaluated on an Agilent www.impactjournals.com/oncotarget 2100 Bioanalyzer (Agilent Technologies, Waldbronn, Germany), A RIN (RNA integrity number) over seven was considered as good quality.
Gene expression analysis and statistical calculations
To generate biotinylated sense-strand cDNA, 250 ng of total RNA were used from each patient according to Ambion WT Expression Kit (P/N 4425209 Rev B 05/2009) and Affymetrix GeneChip® WT Terminal Labeling and Hybridization User Manual (P/N 702808 Rev. 1, Affymetrix Inc., Santa Clara, CA, USA). Samples were hybridized to a GeneChip® Human Gene 1.0 ST Array (Affymetrix Inc., Santa Clara, CA, USA) and scanned using the GeneChip®Scanner 3000 7G at the Uppsala Array Platform (Uppsala University, Sweden) according to the manufacturer's instructions. The raw data was normalized in the free software Expression Console provided by Affymetrix (http://www.affymetrix.com) using the robust multi-array average (RMA) method first suggested by Li and Wong in 2001 [33,34]. Subsequent analysis of the gene expression data was carried out in the freely available statistical computing language R (http:// www.r-project.org) using packages available from the Bioconductor project (www.bioconductor.org). In order to search for the differentially expressed genes between the A and B groups an empirical Bayes moderated t-test was then applied [35], using the 'limma' package [36]. To address the problem with multiple testing, the p values were adjusted using the method of Benjamini and Hochberg [37]. SAS software packages were used for the statistical calculations.
Clinical characteristics were analyzed using Pearson's chi-square test, t-test, Kaplan-Meier survival analysis and log-rank test statistics. Cox proportional regression analysis and the best subset technique were used for prognostic modeling. A p value of 0.05 or less was regarded as statistically significant. Statistica software packages were used for the statistical calculations. | 4,221 | 2014-05-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Mineral Resources: Reserves, Peak Production and the Future
The adequacy of mineral resources in light of population growth and rising standards of living has been a concern since the time of Malthus (1798), but many studies erroneously forecast impending peak production or exhaustion because they confuse reserves with " all there is ". Reserves are formally defined as a subset of resources, and even current and potential resources are only a small subset of " all there is ". Peak production or exhaustion cannot be modeled accurately from reserves. Using copper as an example, identified resources are twice as large as the amount projected to be needed through 2050. Estimates of yet-to-be discovered copper resources are up to 40-times more than currently-identified resources, amounts that could last for many centuries. Thus, forecasts of imminent peak production due to resource exhaustion in the next 20–30 years are not valid. Short-term supply problems may arise, however, and supply-chain disruptions are possible at any time due to natural disasters (earthquakes, tsunamis, hurricanes) or political complications. Needed to resolve these problems are education and exploration technology development, access to prospective terrain, better recycling and better accounting of externalities associated with production (pollution, loss of ecosystem services and water and energy use).
Introduction
At least since the time of Malthus (1798) [1], there has been concern about the adequacy of resources to support a growing human population on planet Earth.In more recent times, there has been an ebb and flow between studies that predict the near-term exhaustion or peak production of resources [2][3][4][5][6][7][8][9][10][11][12][13] and those that are more optimistic [14][15][16][17][18][19][20].The one thing that is common among every prediction about the exhaustion or production peak of resources by a particular date is that they have been wrong.This is well illustrated, but by no means proven or vindicated, by the famous bet between Julian Simon and Paul Erlich about the price of five benchmark metals, chromium, copper, nickel, tin and tungsten, ten years in the future [21].Even though Dr. Erlich was quite confident that prices would have to be higher in the future due to resource scarcity, he lost that bet because prices were actually lower, both in real and inflation-adjusted dollars.However, had the bet been made at a different time, the outcome could have been the opposite of what transpired.Therefore, what is it that leads to such uncertainty about what seems to be a straightforward argument?
On the one hand, there is no disputing that Earth has a finite mass and if resources are consumed at some definable rate, then eventually they will be used up [22].On the other hand, we have not yet run out of any mineral commodity, and exploration and technology have more than kept up with changing demands for mineral resources throughout human history.As was said by Sheikh Zaki Yamani, "The Stone Age did not end for lack of stone..." (quoted in [23]) How do we reconcile such contradictions?Perhaps a useful analogy for understanding this seeming paradox between the unarguably finite amount of mineral resources on Earth and the continuing ability of society to meet its resource needs is the continuous increase in human performance in athletic endeavors, even though elementary logic dictates that it cannot continue forever.It is a demonstrable fact that new world records continue to be set in every athletic event despite the real limits to human achievement: humans will never be able to run faster than a rifle bullet, except in comic books and movies.
Although many sports analogies could be considered in this regard, running might be the most useful because the records are easily quantifiable and because most people have run at some point in their lives, even if not at a competitive level.A running event that illustrates this logic conundrum is the mile, because the 4-min benchmark was long considered unreachable.There are many excellent literary and historical accounts of the quest to break this once unbreakable mark [24].That pace today would not even qualify for entry into the Olympics, and the current world record for the mile is 3:43.13,set in 1999 by Morocco's Hicham El Guerrouj.However, any particular record is not the point.If a 4-min mile is not the limit, is 3:40, 3:30 or 3:00?History says that no record lasts forever, yet elementary physiology tells us that improvement cannot continue indefinitely.Therefore, although obviously this is not a perfect analogy for the ultimate limits to mineral resources, it does illustrate the difficulty in reconciling finite limits with the absence of evidence of reaching those limits.That is the central theme of the present paper.
Background
Before discussing a framework for thinking about limits to mineral resource availability and consequences of use, it is useful to first explore the important terms, reserves and resources [11].These terms are poorly understood by many and widely misused, particularly in the well-known Limits to Growth study of Meadows et al. [2] and more recent incarnations [10,25,26].Reserves have a defined meaning as codified by several widely-used standards, such as the Australian Joint Ore Reserves Committee (JORC) [27], the South African Mineral Resource Committee (SAMREC) [28] and Canada's National Instrument 43-101 [29].Furthermore, the Committee for Mineral Reserves International Reporting Standards (CRIRSCO) [30] has harmonized the various country-level codes into a common international reporting standard.A summary of various resource and reserve classifications is outlined in Table 1.In simple terms, a reserve is a known quantity of a resource as established by drilling and sampling; it typically is expressed as X tons of material with an average grade of Y at a cutoff grade of Z.This results in a calculated amount of contained metal that potentially could be recovered, based on assumptions of cost, price and technology.Resource is a broader and more general term than reserve and includes identified material that may be less well characterized, possibly of lower grade and less certain to be economically recoverable.Resources can be converted to reserves by additional drilling or changes in economic factors, such as price or technology [40].However, it is very important to understand that neither reserves nor resources are the same as "all there is".This is the fundamental flaw of many studies, such as Meadows et al. [2], that assumed reserves, or some multiplier thereof, were "all there is" and that by applying a given annual rate of consumption, one could model how long the resource would last before disaster ensued (a more detailed analysis is given by [19,41]).A little appreciated fact is that increasing world population and standards of living lead to more production, which, in turn, requires larger reserves to sustain that production; thus, world reserves of almost all commodities are larger now than they were 50 or 100 years ago [42].This is because the time value of money makes it uneconomic to spend unlimited amounts to convert all identified or undiscovered resources into reserves.In practice, most companies do not drill out more than about 20 or 30 years' worth of reserves, and some large mines have had ~20 years' worth of reserves for more than a century as continuing exploration proves additional reserves.Thus, modeling of how long reserves might last is fraught with the same difficulty as modeling how long the food in one's refrigerator might last: even though you keep eating three meals a day, there is still food in the refrigerator.
It is worth repeating that reserves are an economically-defined quantity and never have and never will equate to "all there is".In spite of this reality, a recent article went so far as to assert that the majority of Earth's reserves have already been consumed: "80% of the world's mercury reserves, 75% of its silver, tin, and lead, 70% of gold and zinc, and 50% of copper and manganese had already been processed through human products" [26] (p.239), even though it is a simple fact that the world's reserves of these elements continue to evolve with improved geological assurance, technological advances and economic conditions and are presently as large as they have ever been [43].
Another way of approaching this question of how long resources will last is the "peak" concept as applied to petroleum resources by M. K. Hubbert [13,44].It is based on an empirical observation about U.S. oil fields that production follows a bell-shaped curve of a normal distribution, and thus, the peak of production occurs when roughly half of the resource has been extracted.Thus, changes in production are used to model the entire extractable resource.Such an approach is based on many assumptions, the most important of which are that prices and technology are relatively stable.Hubbert's prediction that peak oil production for the conterminous United States would occur in the early 1970s was a reasonable extrapolation of trends leading up to the supposed "peak oil" and has been cited by many as proof that such production trend modeling is accurate [5,6,45].However, the recent application of new and improved drilling technology (horizontal drilling and fracking) has shown that the assumptions of the Hubbert peak oil methodology were restricted to "conventional" petroleum production and have not accurately tracked the large increases in both production and reserves of petroleum and natural gas that have recently resulted from these new technologies (Figure 1) [46].
amounts to convert all identified or undiscovered resources into reserves.In practice, most companies do not drill out more than about 20 or 30 years' worth of reserves, and some large mines have had ~20 years' worth of reserves for more than a century as continuing exploration proves additional reserves.Thus, modeling of how long reserves might last is fraught with the same difficulty as modeling how long the food in one's refrigerator might last: even though you keep eating three meals a day, there is still food in the refrigerator.
It is worth repeating that reserves are an economically-defined quantity and never have and never will equate to "all there is".In spite of this reality, a recent article went so far as to assert that the majority of Earth's reserves have already been consumed: "80% of the world's mercury reserves, 75% of its silver, tin, and lead, 70% of gold and zinc, and 50% of copper and manganese had already been processed through human products" [26] (p.239), even though it is a simple fact that the world's reserves of these elements continue to evolve with improved geological assurance, technological advances and economic conditions and are presently as large as they have ever been [43].
Another way of approaching this question of how long resources will last is the "peak" concept as applied to petroleum resources by M. K. Hubbert [13,44].It is based on an empirical observation about U.S. oil fields that production follows a bell-shaped curve of a normal distribution, and thus, the peak of production occurs when roughly half of the resource has been extracted.Thus, changes in production are used to model the entire extractable resource.Such an approach is based on many assumptions, the most important of which are that prices and technology are relatively stable.Hubbert's prediction that peak oil production for the conterminous United States would occur in the early 1970s was a reasonable extrapolation of trends leading up to the supposed "peak oil" and has been cited by many as proof that such production trend modeling is accurate [5,6,45].However, the recent application of new and improved drilling technology (horizontal drilling and fracking) has shown that the assumptions of the Hubbert peak oil methodology were restricted to "conventional" petroleum production and have not accurately tracked the large increases in both production and reserves of petroleum and natural gas that have recently resulted from these new technologies (Figure 1) [46].The point of the present paper is not to assess previous studies of resource adequacy, but rather to provide a conceptual framework for thinking about how resources, particularly mineral resources, are explored for, discovered and put into production in order to satisfy the continuing needs of society.An underlying concept is that mineral resources have been the foundation of civilization throughout history [12,[48][49][50] and that the adequacy of such resources for future generations remains The point of the present paper is not to assess previous studies of resource adequacy, but rather to provide a conceptual framework for thinking about how resources, particularly mineral resources, are explored for, discovered and put into production in order to satisfy the continuing needs of society.An underlying concept is that mineral resources have been the foundation of civilization throughout history [12,[48][49][50] and that the adequacy of such resources for future generations remains one of the central issues of our time [51,52].Although many mineral resources could be used as examples, the below discussion will focus on copper, because it has been used throughout history; it is a major ingredient in modern construction and technology; its geologic occurrence is relatively well known; and it has a rich historical record of production data.In addition, there have been numerous studies [53,54] about copper, as well as recent predictions of "peak copper" occurring within the next 20-30 years [9,10,55].
Copper
Historic mine production has recovered more than 658 million metric tons (Mt) of copper (Cu) from 1700 to 2015 [43,56,57].Of this total, over 45 percent has been produced during the past 20 years, and approximately 26 percent has been produced during the previous 10 years.In other words, roughly one quarter of all of the copper mined throughout human history has been produced in just the past 10 years (Figure 2).These percentages suggest that global copper production has doubled every 25 years in order to accommodate trends in global copper consumption.
A research group at Monash University in Australia analyzed current estimates of copper resources based on published company reports and tabulated a total of 1781 Mt of Cu contained within a total of 730 projects, with a further 80.4 Mt of Cu in China from an unknown number of deposits, yielding a world total of 1861 Mt of Cu [53].This is very similar to a recent U.S. Geological Survey (USGS) estimate of identified Cu resources of 2100 Mt of Cu [54].To reiterate a previous point, this 2100 Mt of Cu is the amount of identified resources, and further drilling or possible changes in prices/technology will almost certainly add to the total.However, neither reserves nor resources are "all there is".Before addressing the questions of what is known about, and the potential adequacy of, "all there is", it is useful to compare current estimates of reserves to current and projected rates of production.
Modeling based on identified resources has led some researchers to suggest that mine production of copper will peak and begin to decline within decades due to increasing demand and resource depletion [8,55].Is this realistic?The U.S. Census Bureau estimates that world population will grow from 7.3 billion people in 2015 to 9.4 billion in 2050 [58].At current rates of global per capita production of copper, estimated at about 2.6 kg/year in 2015, the production of primary (mine-produced) copper would grow to 24.2 Mt of copper in 2050 from the level of 18.7 Mt in 2015 [43].
However, assuming a global per capita production value of 2.6 kg/year does not take into account regional variations in per capita consumption and how they are changing due to changes in affluence or lifestyles.Nor does it take into account the increasing quantities of copper that will likely become available for recycling and thus offset the need for primary copper.Studies of various countries (e.g., [59][60][61][62]) show that per capita consumption of copper is related to the level of economic development.Per capita consumption of copper is relatively stable over time in high-GDP countries, such as the United States and Japan, at about 6 kg/year [63].Very low levels of consumption occur in countries with low economic activity and low income levels, whereas increasing levels of per capita consumption are observed in rapidly developing countries, such as Indonesia and China [63].
Resources 2016, 5, 14 5 of 14 one of the central issues of our time [51,52].Although many mineral resources could be used as examples, the below discussion will focus on copper, because it has been used throughout history; it is a major ingredient in modern construction and technology; its geologic occurrence is relatively well known; and it has a rich historical record of production data.In addition, there have been numerous studies [53,54] about copper, as well as recent predictions of "peak copper" occurring within the next 20-30 years [9,10,55].
Copper
Historic mine production has recovered more than 658 million metric tons (Mt) of copper (Cu) from 1700 to 2015 [43,56,57].Of this total, over 45 percent has been produced during the past 20 years, and approximately 26 percent has been produced during the previous 10 years.In other words, roughly one quarter of all of the copper mined throughout human history has been produced in just the past 10 years (Figure 2).These percentages suggest that global copper production has doubled every 25 years in order to accommodate trends in global copper consumption.
A research group at Monash University in Australia analyzed current estimates of copper resources based on published company reports and tabulated a total of 1781 Mt of Cu contained within a total of 730 projects, with a further 80.4 Mt of Cu in China from an unknown number of deposits, yielding a world total of 1861 Mt of Cu [53].This is very similar to a recent U.S. Geological Survey (USGS) estimate of identified Cu resources of 2100 Mt of Cu [54].To reiterate a previous point, this 2100 Mt of Cu is the amount of identified resources, and further drilling or possible changes in prices/technology will almost certainly add to the total.However, neither reserves nor resources are "all there is".Before addressing the questions of what is known about, and the potential adequacy of, "all there is", it is useful to compare current estimates of reserves to current and projected rates of production.
Modeling based on identified resources has led some researchers to suggest that mine production of copper will peak and begin to decline within decades due to increasing demand and resource depletion [8,55].Is this realistic?The U.S. Census Bureau estimates that world population will grow from 7.3 billion people in 2015 to 9.4 billion in 2050 [58].At current rates of global per capita production of copper, estimated at about 2.6 kg/year in 2015, the production of primary (mine-produced) copper would grow to 24.2 Mt of copper in 2050 from the level of 18.7 Mt in 2015 [43].
However, assuming a global per capita production value of 2.6 kg/year does not take into account regional variations in per capita consumption and how they are changing due to changes in affluence or lifestyles.Nor does it take into account the increasing quantities of copper that will likely become available for recycling and thus offset the need for primary copper.Studies of various countries (e.g., [59][60][61][62]) show that per capita consumption of copper is related to the level of economic development.Per capita consumption of copper is relatively stable over time in high-GDP countries, such as the United States and Japan, at about 6 kg/year [63].Very low levels of consumption occur in countries with low economic activity and low income levels, whereas increasing levels of per capita consumption are observed in rapidly developing countries, such as Indonesia and China [63].Factoring in projected growth in population and global per capita GDP to 2050 provides a forecast of copper consumption that needs to be satisfied by mine production of 30.4 Mt of Cu.These projections indicate that between 750 and 990 Mt of primary copper will be required to satisfy projected global demand from 2012 to 2050.The projected range of estimated demand considers modest changes in the amount of copper recycling, which has increased in recent years [65]; but does not account for new technology-driven changes in demand patterns, substitution by alternative materials or other factors that could alter future demand growth for copper.To restate this, we can estimate amounts and project trends, but we cannot know the actual mix of mineral resources needed 30-50 years in the future any more than someone in 1970 could have predicted today's high-tech need for a variety of specialty metals, such as rare-earth elements.
Using these assumptions and constraints, the current estimate of 2100 Mt of copper in identified resources in major copper deposits worldwide is about twice as much copper as is estimated to be needed through 2050.Even without considering the almost certain expansion of reserves that will take place with future exploration and drilling, this makes it very unlikely that copper production will peak by 2040 due to resource depletion (Figure 3), as predicted by the studies of [9,10,55].
Resources 2016, 5, 14 7 of 14 Factoring in projected growth in population and global per capita GDP to 2050 provides a forecast of copper consumption that needs to be satisfied by mine production of 30.4 Mt of Cu.These projections indicate that between 750 and 990 Mt of primary copper will be required to satisfy projected global demand from 2012 to 2050.The projected range of estimated demand considers modest changes in the amount of copper recycling, which has increased in recent years [65]; but does not account for new technology-driven changes in demand patterns, substitution by alternative materials or other factors that could alter future demand growth for copper.To restate this, we can estimate amounts and project trends, but we cannot know the actual mix of mineral resources needed 30-50 years in the future any more than someone in 1970 could have predicted today's high-tech need for a variety of specialty metals, such as rare-earth elements.
Using these assumptions and constraints, the current estimate of 2100 Mt of copper in identified resources in major copper deposits worldwide is about twice as much copper as is estimated to be needed through 2050.Even without considering the almost certain expansion of reserves that will take place with future exploration and drilling, this makes it very unlikely that copper production will peak by 2040 due to resource depletion (Figure 3), as predicted by the studies of [9,10,55].Furthermore, because many of these studies dating back to Meadows et al. [2] confuse reserves with "all there is", their conclusions about reserve exhaustion, whether substantiated or not, do not really address the underlying question of the adequacy of mineral resources for future generations.To best address this important question, we need to know future world demand (net of recycling) for minerals, estimate as well as we can the amount of identified resources and assess the location, quality, quantity and the environmental and economic aspects of the extraction of undiscovered resources.This is a daunting task that has never been seriously undertaken for any major commodity.The best effort for copper is a recently completed global assessment of the major sources of onshore world copper resources in two, but not all, of the most important types of copper-rich deposits by the U.S. Geological Survey (USGS) [54].
Before addressing the implications of that study for the questions of "peak copper" and the adequacy of copper resources for future generations, it is important to realize what is not included in that assessment.The USGS study focused mainly on deposits to a depth of one kilometer and did not include resources in the ocean basins that cover more than 75% of the Earth's surface, nor the possibility for utilization of resources from space.Some studies have indicated that seafloor resources [66] may be as much or greater than onshore resources and others have proposed direct recovery of metals from seawater, which if economically viable, would likely eclipse all onshore resources [67].Neither of these ideas about potential resources has been developed sufficiently to know if they are even Furthermore, because many of these studies dating back to Meadows et al. [2] confuse reserves with "all there is", their conclusions about reserve exhaustion, whether substantiated or not, do not really address the underlying question of the adequacy of mineral resources for future generations.To best address this important question, we need to know future world demand (net of recycling) for minerals, estimate as well as we can the amount of identified resources and assess the location, quality, quantity and the environmental and economic aspects of the extraction of undiscovered resources.This is a daunting task that has never been seriously undertaken for any major commodity.The best effort for copper is a recently completed global assessment of the major sources of onshore world copper resources in two, but not all, of the most important types of copper-rich deposits by the U.S. Geological Survey (USGS) [54].
Before addressing the implications of that study for the questions of "peak copper" and the adequacy of copper resources for future generations, it is important to realize what is not included in that assessment.The USGS study focused mainly on deposits to a depth of one kilometer and did not include resources in the ocean basins that cover more than 75% of the Earth's surface, nor the possibility for utilization of resources from space.Some studies have indicated that seafloor resources [66] may be as much or greater than onshore resources and others have proposed direct recovery of metals from seawater, which if economically viable, would likely eclipse all onshore resources [67].Neither of these ideas about potential resources has been developed sufficiently to know if they are even possible, but their relevance to the present discussion is simply that it is not possible to predict the exhaustion of mineral resources when only considering part of the potential resource.
Also important is the fact that the USGS study was constrained to current prices and technology for considering what might be an economic mineral resource.If prices were 10-times higher or technology 10-times more efficient (both of which have taken place in the history of resource extraction [42]), then the assessed copper resources would be considerably higher than the presented estimates.Thus, the undiscovered copper resources estimated by Johnson et al. [54] are still a small subset of "all there is", but at least this estimate is closer to an ultimate resource than the identified resources categorized by Mudd et al. [53] and similar studies.
The USGS global copper resource assessment [54] estimated that more than 3500 Mt of copper are present in undiscovered porphyry and sediment-hosted continental copper deposits; the majority of these resources are located in Asia, followed by the Americas.Of this 3500-Mt estimate of in-ground copper resources, approximately 2000 Mt of copper is estimated as economic to recover under current economic conditions and mining technology.
A significant fraction of these undiscovered resources is estimated to occur deep beneath covering rock and sediment and commonly in remote areas.These deposits will be difficult and time consuming to discover, and many potentially economic deposits in remote and deep settings may not be discovered and developed before 2050.An estimate of undiscovered resources that are most likely to be discovered and characterized by 2050 is approximately 1100 Mt of copper [68].
Using the production rates (16.9-30.4Mt/y of copper) previously discussed for current reserves, the range of undiscovered copper resources by the USGS study of 1100-3500 Mt of copper (keeping in mind the previously discussed caveats about this being a minimum estimate) could last more than a century.This would be in addition to the currently identified resources, which by themselves are estimated to be twice the amount needed through 2050.Therefore, the combined total is likely to last at least through the end of this century and possibly the century after that.
Kesler and Wilkinson [69] used a different, largely hypothetical, approach to estimating world copper resources.Their tectonic diffusion model predicts world copper resources of 89 billion metric tons (Bt) of copper within mineable depths, an amount that would last for millennia.Thus, although it is true that copper is not an infinite resource on planet Earth, based on current knowledge, a sufficient amount exists to fulfill the needs of at least several generations.
Of course, copper, like any other mineral resource, is not really "consumed" in the same way that energy loses its ability to perform useful work when it is "consumed."Thus, discussions of mineral resource exhaustion are implicitly referring to the lack of availability of resources in the ground and not their lack of availability in an absolute sense given that many metals are available for multiple cycles of use and re-use.Within this paradigm of ultimate resources, it is useful to discuss how much of a resource is currently in use, how much might be needed in the future to fulfill the services desired by an increasingly affluent global population and how much of what is in the ground can be put into use to satisfy these requirements [7,70,71].
Therefore, instead of worrying about reaching "peak" production or "exhausting" a resource, we should instead be more concerned about what we do with the resource after it has been extracted.Is it being used in dissipative applications [72]?How much is being "lost" (i.e., placed in a state where it is prohibitively difficult or costly to recover) at each life cycle stage?Further concerns are the externalities (pollution, loss of ecosystem services and water and energy use) associated with resource production.A full life cycle assessment and material flow accounting of mineral resources would be a more fruitful conversation than just peak production due to resource exhaustion.
The Future
All of the preceding discussion boils down to when, but not if, a given quantity of resource will be exhausted.However, this begs the question of, "what then?"This is not as dramatic as it may seem because the common analogy between resource exhaustion and falling off a cliff is not appropriate.Resource use does not proceed full bore until the last unit of an element is consumed and then disaster unfolds.Rather, resource use follows the basic laws of supply and demand.As resources are consumed, scarcity will drive prices up, which in turn will affect demand, consumption and production.To greatly simplify, if the equilibrium price of a commodity rises ten-fold, then this will both increase supply, because previously marginal or unknown deposits will be put into production, and decrease demand, because some uses of that commodity will no longer be economic, and either production will decrease, thrifting (reducing unit consumption) will occur or other materials will be substituted.This will gradually reduce the need for the commodity in question in the long term and bring production in line with the available reserves, even though short-term supply disruption or imbalance is possible.To return to an earlier analogy, "the Stone Age did not end because we ran out of stone", we do not know which metals will be in demand 30-50 years from now; changes in technology and lifestyle may mean that society's future needs are very different from today's.
This does not mean that we should be unconcerned about resource adequacy.We should.However, we should not confuse the adequacy of mineral resource supply with the actions needed to address the needs of future generations for mineral resources.
The main resource issue for the future likely will be the development of capacity to discover and produce additional resources.There may well be enough undiscovered copper to meet global needs for the foreseeable future, but rates of exploration, discovery success and mine development will need to increase to supply the needed copper resources.Cost-effective technology will need to be developed to discover additional mineral deposits in new, deeper and concealed settings and to extract and recover resources from earth materials, while minimizing environmental impact.This technology will only be effective if it can be applied to lands where new resources may be discovered and mines developed.Nature, not society, controls where mineral concentrations occur, but society determines whether to mine or not, and land access becomes more difficult as population increases.
As much as recycling [65] and substitution [73] will be part of the solution, they cannot by themselves solve the problem.Population growth and rising standards of living, combined with the sequestration of elements, like copper in buildings, cars, cell phones, etc., for periods of years to decades and, in some cases, centuries will require new primary supplies of mineral resources.The lead time from discovery to mine development can be 10-30 years.Extensive mineral exploration will be required to meet this future resource demand, because many of the undiscovered deposits will be harder to find and more costly to mine than near-surface deposits located in more accessible areas.
At a global level, it is not clear that society is making the investments in education, research and development to ensure that these mineral resources will be available for future generations [74].The number of universities that teach and do research on mineral resources, mining engineering and metallurgy is decreasing rather than increasing, and this is paralleled by decreases in governmental funding to support geoscience research (Figure 4).National, state and provincial geological surveys and the knowledge infrastructure that they create, manage and publically provide are instrumental in the discovery process, particularly by conducting modern geologic mapping and related geochemical and geophysical surveys to identify favorable geologic environments likely to contain mineral resources.They also compile data on known deposits, conduct research into the processes that form mineral deposits, track mineral commodity production and use and update assessments to estimate the amounts, quality and location of future resources on local, national and global scales.
In most countries, exploration and discovery is done largely by private companies and individuals, building on, but independent of, the underlying research and data development of universities and governmental geological surveys.Some companies conduct their own research or fund others to do targeted research.However, private companies are particularly susceptible to economic cycles with a clear correlation between economic cycles and exploration expenditures (Figure 5).A downsized mineral industry during a low cycle may not be able to respond quickly to the next up cycle, thus resulting in short-term mismatches in supply and demand.For the past 10 years, greatly increased exploration expenditures do not appear to have resulted in proportional discovery success, a trend some attribute to exploration for increasingly deeper ore bodies that are undercover or in remote regions.Thus, adequate supplies of mineral resources for future generations should not be taken as a given.The need for mineral resources is clear, but the path to a sustainable future will reflect the distribution of materials in the Earth and ultimately will depend on the choices, innovation, policies and values of human society.This deserves serious thought.
Resources 2016, 5, 14 10 of 14 distribution of materials in the Earth and ultimately will depend on the choices, innovation, policies and values of human society.This deserves serious thought.Even with the problems identified above, the future is not grim.Discoveries continue to be made even in well-explored districts.Most exploration has been concentrated in the upper part of the upper kilometer of the Earth's crust, even though economic deposits have been discovered down to three or more kilometers.Thus, deeper drilling combined with new exploration technology promises new discoveries, even in mature districts.We do not know what the limits to mineral resources might be, but we do know that we have not come close to reaching them yet.Copyright 2014 American Geosciences Institute and modified with their permission [75].
Resources 2016, 5, 14 10 of 14 distribution of materials in the Earth and ultimately will depend on the choices, innovation, policies and values of human society.This deserves serious thought.Even with the problems identified above, the future is not grim.Discoveries continue to be made even in well-explored districts.Most exploration has been concentrated in the upper part of the upper kilometer of the Earth's crust, even though economic deposits have been discovered down to three or more kilometers.Thus, deeper drilling combined with new exploration technology promises new discoveries, even in mature districts.We do not know what the limits to mineral resources might be, but we do know that we have not come close to reaching them yet.Even with the problems identified above, the future is not grim.Discoveries continue to be made even in well-explored districts.Most exploration has been concentrated in the upper part of the upper kilometer of the Earth's crust, even though economic deposits have been discovered down to three or more kilometers.Thus, deeper drilling combined with new exploration technology promises new discoveries, even in mature districts.We do not know what the limits to mineral resources might be, but we do know that we have not come close to reaching them yet.
Figure 1 .
Figure 1.U.S. crude oil reserves and production showing apparent "peak oil" for conventional petroleum reserves in 1972 and growth in reserves in the past ~10 years as a result of the application of new drilling technology.U.S. barrels are equivalent to approximately 0.159 cubic meters.Data source: [47].
Figure 1 .
Figure 1.U.S. crude oil reserves and production showing apparent "peak oil" for conventional petroleum reserves in 1972 and growth in reserves in the past ~10 years as a result of the application of new drilling technology.U.S. barrels are equivalent to approximately 0.159 cubic meters.Data source: [47].
Figure 2 .
Figure 2. Trends in cumulative (A); annual (B); per capita (C); per constant 2005 US$ gross domestic production (GDP) (D); copper production on an absolute basis and normalized to one for values in year 1960 (E,F).Data sources:[56,57] for copper production; [58] for world population;[64] for GDP data.Data presented up to year 2015, except for those involving GDP for which the year 2014 is the most recently available.
Figure 3 .
Figure 3. World copper production: historical and projected based on the modeling in [9].
Figure 3 .
Figure 3. World copper production: historical and projected based on the modeling in [9].
Figure 4 .
Figure 4. Geoscience degrees granted by year and U.S. Federal funding of geoscience as a percentage of total research spending.Copyright 2014 American Geosciences Institute and modified with their permission[75].
Figure 4 .
Figure 4. Geoscience degrees granted by year and U.S. Federal funding of geoscience as a percentage of total research spending.Copyright 2014 American Geosciences Institute and modified with their permission [75].
Figure 4 .
Figure 4. Geoscience degrees granted by year and U.S. Federal funding of geoscience as a percentage of total research spending.Copyright 2014 American Geosciences Institute and modified with their permission[75].
Table 1 .
Mineral resource and reserve classifications. | 8,834 | 2016-02-29T00:00:00.000 | [
"Economics",
"Geology"
] |
Chinese Named Entity Recognition in the Geoscience Domain Based on BERT
Geological reports are frequently used by geologists involved in geological surveys and scientific research to record the results and outcomes of geological surveys. With such a rich data source, a substantial amount of knowledge has yet to be mined and analyzed. This paper focuses on automatically information extraction from geological reports, namely, geological named entity recognition. Geological named entity recognition has an important role in data mining, knowledge discovery and Knowledge graph construction. Existing general named entity recognition models/tools are limited in the domain of geoscience due to the various language irregularities associated with geological text, such as informal sentence structures, several domain‐geoscience words, large character lengths and multiple combinations of independent words. We present Bidirectional encoder representations from transformers (BERT)‐(Bidirectional gated recurrent unit network) BiGRU‐ (Conditional random field) CRF, which is a deep learning‐based geological named entity recognition model that is designed specifically with these linguistic irregularities in mind. Based on the pretrained language model, an integrated deep learning model incorporating BERT, BiGRU and CRF is constructed to obtain character vectors rich in semantic information through the BERT pretrained language model to alleviate for the lack of specificity of static word vectors (e.g., word2vec) and to improve the extraction capability of complex geological entities. We demonstrate our proposed model by applying it to four test datasets, including a geoscience NER data set from regional geological reports, and by comparing its performance with those of five baseline models.
2 of 15 et al., 2021). Such approaches depend on hand-crafted pattern-matching-based rules for supporting the recognition and extraction of target entities from geological textual data. Outside of the geoscience domain, several rule-based NER techniques/models have been proposed (Appelt et al., 1993;Elsebai et al., 2009;Lehnert et al., 1992). As the amount of data increases, the workload of rule extraction increases, the difficulty of maintaining rule consistency increases, and rule-based and dictionary-based methods cannot address the heterogeneity and complexity of text and the thus achieve high GNER performance Santoso et al., 2021). Compared with rule-based methods, the statistical learning methods can learn from large amount of annotating training datasets to guide the recognition and extraction NER (Liu et al., 2022;Molina-Villegas et al., 2021;Peng et al., 2021). The popular deep learning methods for named entity recognition generally base on word embeddings, which are able to learn similar representations for semantically or functionally similar words Santoso et al., 2021;Tian et al., 2021). Researchers started to construct models without complicated feature engineering and to minimize the reliance on NLP toolkits for feature acquisition.
However, the recognition of geological named entities still faces some difficulties and challenges (Qiu, Xie, Wu, Tao, & Li, 2019;: (a) compared with general domain texts, geological named entities have (a) large character lengths, (b) many remote words, (c) multiple combinations of independent words, and (d) nested entities. For example, "garnet diopside silica" indicates a kind of silica with garnet and diopside as the main mineral components, and this geological naming entity contains 10 characters. The entity "garnet diopside silica" includes garnet, diopside, and silica. The geological nomenclature of "garnet diorite" includes garnet, diorite, diorite, and garnet diorite silica. There are nesting relationships among geological naming entities of different conceptual levels; for example, a certain stratigraphic entity contains several rocks, and rocks contain several minerals. Presently, there is a lack of a large-scale annotated corpus in the geological field; there is a lack of samples for training the model; and the recall and accuracy are not sufficient. Therefore, geological named entity recognition becomes a challenging task.
The popular deep learning approaches for NER generally based on word embeddings for learning similar representations for semantically or functionally similar words. One drawback of these approaches is that the embedding for the same word in different semantic sentences are identical. A large-scale annotated corpus is required for model training to prevent parameter underfitting or overfitting, while the production of an annotated corpus is costly, and building a large-scale annotated data set is challenging for most natural language processing tasks. As an advanced and widely employed language expression model BERT (Devlin et al., 2018) extracts bidirectional semantic features of Chinese utterances from large-scale texts by unsupervised learning, further enhances the generalization ability of the word vector model, and can better characterize syntactic and semantic information in different contexts, which can, to a certain extent, free the dependence of supervised learning on large-scale annotation data and effectively reduce the labor intervention of manual annotation. Therefore, the integration of BERT into deep learning models will become a new way to improve the performance of Chinese, geological named entity recognition.
To address the abovementioned issues, in this paper, a character-level embedding-based BERT-BiGRU-CRF model is proposed for extracting Chinese geological named entities in response to the characteristics of named entity recognition tasks in the geological domain and to the problems of small sample size and poor results for some target entities (Note that BERT can be used for other languages, while this paper we focus on Chinese GNER). The model consists of an encoder, a BiLSTM neural network layer and a conditional random field (CRF) layer from the bottom up: the encoder is a character-level, Chinese, BERT-based model that maps the input Chinese entity characters into a low-dimensional, dense real space to mine the potential semantics embedded in Chinese entity elements; the BiLSTM neural network layer takes the character vector transformed by the encoder as input to capture the Chinese entity sequences; the BiLSTM neural network layer takes the character vectors transformed by the encoder as input to capture the forwards (left-to-right) and backwards (right-to-left) bidirectional features of the Chinese entity sequences; and the conditional random field layer takes the upstream BiLSTM extracted bidirectional features as input to generate the labels that correspond to each character of the geological entities in combination with the BIOES annotation specification. The experimental results demonstrate that our presented model outperforms previous models and that our approach achieves state-of-the-art performance on the constructed datasets.
The main contributions of this research are summarized as follows: 1. Based on the characteristics of Chinese geological domain text, we propose BERT-BiGRU-CRF model for guiding the recognition and extraction of target information from unstructured textual data 2. Our model is compared in detail with four other mainstream models, and experiments demonstrate that our model obtains higher performance on both the general domain data set and geological domain data set 3. We share the source code of BERT-BiGRU-CRF and the annotated test data at https://doi.org/10.5281/ zenodo.5758466 The remainder of the article is structured as follows: Section 2 discusses related work in the area of geological named entity recognition. Section 3 introduces the proposed approach, and Section 4 provides details on the experiments and results. Some concluding remarks are described in Section 5.
Text Mining in Geoscience
In the domain of geosciences, various systems and applications, such as mineral exploration (Holden et al., 2019;Shi et al., 2018), paleontological studies (Peters et al., 2014(Peters et al., , 2017Wang et al., 2018), and geological text mining and application in Chinese (Qiu, Xie, Wu, Tao, & Li, 2019;Wang et al., 2018), have been developed and constructed. Holden et al. (2019) analyzed 25,419 mineral exploration reports in a targeted manner using NLP pipeline analysis methods (word recognition, keyword extraction, etc.), focusing on searching and summarizing the geology-related content in the reports, and developed a related system named the GeoDocA system. This system retrieves geology-related subject terms based on a dictionary of pre-customized entity groups (geological timescales, mineralogy, host rock types and alteration types), calculates the co-occurrence of these terms, and ultimately creates a summary map for retrieval. Shi et al. (2018) utilized a mature text mining algorithm for direct application to geological text data in Chinese, performed a case study of the Lara copper deposit in Sichuan Province, China, for retrieval, and used convolutional neural networks (CNNs) to classify geological, geophysical, geochemical and remote sensing-related text data for geological texts. Entity co-occurrence and frequency statistics of the deposit analysis were performed. Enkhsaikhan et al. (2018) used a form of word embedding to analyze the semantic similarity among geology-related phrases and applied an analogical solver to establish the semantic relationships among terms to investigate mineral exploration through a semantic analysis approach. The final results demonstrate the potential application of semantic relationships among entities in the domain.
For the domain of paleontological text mining, Peters et al. (2014) developed PaleoDeepDive, a paleontology-oriented knowledge mining platform based on the original platform named DeepDive (De Sa et al., 2016), which was primarily utilized to generate historical and generic turnover rates of taxonomic diversity. In their recent work (Peters et al., 2017), they automated the analysis of stratigraphic databases using a computer reading system to automatically extract information about the three phases of occurrence of stromatolites arranged on a geological time scale, as well as predictors of stromatolite prevalence.
In the domain of geosciences in Chinese, proposed a self-learning-based word segmentation method to segment meaningful words from geoscience reports written in Chinese, in response to problems such as the drastic performance degradation of generic domain, word segmentation methods applied to the geological domain. To address the lack of a massive corpus in the Chinese geology domain, proposed a method to generate a corpus based on a random combination of word and word frequency information to segment words related to geology by using aBiLSTM model. Wang et al. (2018) segmented mineral deposit domain texts based on CRFs and then used the segmented texts for keyword extraction and co-occurrence word statistics to construct a knowledge graph for visual analysis. Their algorithm was able to segment both generic domain words and geological domain words. Li et al. (2021) constructed a Chinese word segmentation algorithm based on a geological domain ontology assisted by a self-loop approach to better segment geological domain texts. Ma et al. (2021) employed a deep learning model to train journal abstracts and titles in the field of Chinese geology. The resulting model was trained to perform automatic abstract construction.
All of these studies have explored and visualized geological text but have not addressed the more fine-grained information (e.g., NER, keywords and relation extraction) contained in it. Instead, this paper focuses on the extraction of named entities from geological texts to build a knowledge graph of the geological domain and to mine and discover hidden knowledge in the geological domain.
Previous Research on Geological Named Entity Recognition
Geological named entity recognition is a domain-specific named entity recognition that aims to identify some important concepts in geology, including geological age, geological formations, stratigraphy, rocks, minerals and locations . Several scholars have conducted research on geological named entity identification. Zhang et al. (2018) developed a classification system for elements of geological entity information and an annotation specification based on the linguistic features of geological texts and applied a deep belief network model to geological entity information recognition. Ma (2018) combined a geological domain ontology to carry out the named entity recognition task using a mature BiLSTM-CRF model after preprocessing operations of word separation and deactivation for geological texts, such as geological reports, geological domain journal papers and geological professional websites. Qiu, Xie, Wu, and Tao (2019) preprocessed the corpus with reference to the word2vec model using a large amount of unlabeled data to train word embeddings in the geological domain, used a recurrent neural network approach based on BiLSTM with an attention mechanism for semantic encoding of sentences, and finally combined it with conditional random fields to achieve geological entity recognition. Chu et al. (2021) fused multiple methods, such as ELMO (embeddings from language models), CNN, and Bi-LSTM-CRF, to extract geological entities after word vectorization representation using a CNN to add character features and ELMO to extract word dynamic features for input distributed representation. The commonality of these studies is that they take advantage of the ability of deep learning models to learn deep nonlinear features among words to tackle geological named entity recognition tasks.
In this paper, we focus on named entity recognition in the Chinese geological domain, compare the performance of the current mainstream deep learning models on geological named entity recognition, compare the performance of the mainstream models on a standard data set in the general domain and a constructed data set in the domain of geosciences.
BERT-BiLSTM-CRF
In the BERT-BiLSTM-CRF model, the BERT model is selected as the feature representation layer for word vector acquisition. The BiLSTM model is employed for deep learning of full-text feature information for specific geological domain named entity recognition. The output sequence of the BiLSTM model is processed in the CRF algorithm layer, and combined with the CRF algorithm, a global optimal sequence is obtained based on the labels between two neighbors, as shown in Figure 1.
BERT-BiGRU-CRF
The model framework consists of a BERT pretrained language model, BiGRU network, and CRF layer; a diagram of the model architecture is shown in Figure 2. First, the input sequence is input into the BERT layer for pretraining to obtain context-dependent representations, which are utilized to solve the key problem of entity recognition with many rare characters and nested entities. Second, the vectors obtained from the BERT layer are input into the BiGRU layer to solve the problem of long-term text memory and long text dependency, which can be applied to solve the key problem of large-length geological entity characters. Last, decoding is performed by the CRF layer to obtain the output label sequence.
BERT Layer
The RNNs and CNNs have certain shortcomings when addressing NLP tasks: the circular network structure of RNNs is not parallelized, and training is slow; the inherent convolutional operation of CNNs is not well suited to serialized text. The transformer model (Vaswani et al., 2017) is a new architecture for textual serial networks that is based on a self-attention mechanism, where arbitrary units interact and there is no length limi-10.1029/2021EA002166 5 of 15 tation problem. The BERT model uses a multilayered bidirectional transformer encoder structure while being constrained by left-right and right-left contexts and is better able to include rich contextual semantic information than the ELMo model (Peters et al., 2018), where left-to-right and right-to-left LSTM connections are trained independently. In addition, the transformer uses positional embedding to add temporal information in response to the problem that the self-attention mechanism cannot extract temporal features. The BERT input representation is stitched by three embeddings, namely, token embedding (word embedding), sentence embedding and positional embedding, which can clearly represent a single text sentence or a pair of text sentences in a token sequence, as shown in Figure 3. In addition, the BERT pretrained language model also captures word-level and sentence-level representations through two tasks, the masked language model and next sentence prediction, respectively, and is trained jointly. The masked language model is designed to train a deep bidirectional language representation vector by randomly masking certain words in a sentence and then predicting the masked words. In contrast to standard language models (e.g., word2vec) that can only predict the target function in one direction from left to right or from right to left, masked language models can predict the masked words from any direction.
BiLSTM Layer
LSTM networks were proposed by Hochreiter and Schmidhuber (1997) to solve the phenomenon of gradient disappearance or gradient explosion that occurs in earlier recurrent neural networks after multilayer network propagation; they comprise a special kind of recurrent neural network. LSTM is widely applied in text information processing tasks as it can well capture temporal information and handle information with backwards and forwards dependencies (Chiu & Nichols, 2016). The standard LSTM can only accept antecedent information and only considers the impact of antecedent information on the current moment, disregarding the following information. Considering the close contextual connection of Chinese text, this paper adopts a bidirectional LSTM. BiLSTM is a further development of LSTM by adding a backwards LSTM that combines the forwards hidden layer and the backwards hidden layer to obtain two different vector representations of the input information at the current moment in a recursive operation, combining them as the vector representation of the input information at the current moment, which can access both preceding information and following information. The detailed encoding pattern of BiLSTM is shown in Figure 4.
GRU Layer
A gated recurrent unit (GRU) is designed to solve the problem of RNN, long-term memory and back-propagation gradient disappearance . The performance effect of a GRU is similar to that of LSTM, and its advantages are reflected in fewer parameters, lower hardware and time cost, and better generalization ability effect on small sample datasets. The internal structure of GRU is shown in Figure 5.
The GRU combines the current node input x t with the state h (t−1) transmitted from the previous node to derive the output y t of the current node and the hidden state h t passed to the next node. The network internal parameter transfers and update equations are shown in (1)-(4).
where σ is the sigmoid function, which is utilized as a gating signal. The closer the gating signal is to 1, the more data are remembered, and conversely, the more data are forgotten. r is the gating to control the reset, and z is the gating to control the update. h' refers to the candidate hidden state. w rx , w rh , etc. Are the weight matrices, and b r , b z , etc. Are the bias quantities. ⨀ is the Hadamard product, that is, the corresponding elements in the matrix are multiplied.
Attention Mechanism Layer
GRU can solve the problem of long-term memory to a certain extent and extract global features. However, it is difficult to solve the problem of long-distance dependency in geological text, and it is difficult to retain local detail information in long text. To compensate for the shortcomings of BiGRU in extracting local features (there are many solutions, and in this paper we choose the attention mechanism strategy), this paper introduces the attention mechanism to extract the degree of association between different characters in a sentence and the context, which is conducive to solving the long-distance dependency problem caused by the large character length of geological named entities. The attention mechanism adds feature weights to the semantics related to geological named entities to improve the effect of local feature extraction.
The attention mechanism layer assigns weights to the feature vectors h t output by the previous layer and calculates the common output feature vector c t of the previous layer and the attention layer at time t.
, = exp score −1 , ℎ score , ℎ = tanh , ℎ where a t,i is the attention function. The score function is the alignment model, which assigns scores based on how well the inputs and outputs match at moment i, defining how much weight each output gives to each input hidden state.
CRF Layer
The named entity recognition task can generally be considered a sequence annotation problem, and usually the output of BiLSTM can be applied for sequence annotation by adding a softmax layer to the top layer for judgment and outputting the label with the highest probability to complete the annotation task of the input sequence. Although the BiLSTM solves the problem of contextual linkage, it lacks constraints on the output label information.
The softmax layer is based on the judgment of the information at the current moment and does not consider the overall linkage, and the output result is only the optimal solution of the information at the current moment, that is, the local optimal solution. This approach may produce an invalid sequence of labels, which are sequences of outputs such as "B-ROC, I-TIME...". In the case of named entity identification tasks, the BIOE method (Lample et al., 2016) is selected for labeling so that B-ROC followed by I-TIME is not possible.
Given the set of random variables X as the observation sequence and the output sequence Y, the CRF model is described using the conditional probability P(Y/X). For a sentence of text, X = {x 1 , x 2 ..., x n } denotes its observation sequence, and for the output label sequence Y = {y 1 , y 2 ..., y n }, its score is computed as follows: where Q is a matrix of scores m*k output by the attention mechanism, where m is the length of the sentence and k is the number of tags with different entity types. Q i,j denotes the score for the jth tag of the ith word. A is a matrix of transferred scores of size k + 2, where Ay i ,y i+1 denotes the scores transferred from label i to label i + 1.
Y X is the sequence of all possible annotations for sentence X. The final decoding is performed by the Viterbi algorithm to obtain the predicted tag sequence with the highest score as follows: * = arg max(score( , )) (9)
Experiments and Results
A set of primary experiments are conducted to evaluate the presented approach on the gold NER datasets and GNER datasets. First, we introduce the experimental environment and parameters. Second, we presented the evaluation metrics based on precision, recall and F1-score. Third, we compare a set of models/algorithms with our proposed models. Last, we demonstrate the recognition results by a case study and analyze the illustrative errors.
Datasets
Our presented approach is evaluated on the MSRA, Boson, PeopleNER and GeoNER2021 datasets, all of which are preprocessed. Among these datasets, GeoNER2021 is the domain of geoscience that was developed by human annotation; the other datasets comprise the gold NER datasets in the generic domain. Note. Text (%) = the number of entities/number of words in the data set; count = the number of entities on the data set; Max length = the maximum length of the entity on the data set; Avg length = the average length of the entity on the data set. a Word = the number of the words on the data set. b Sentence = the number of sentences on the data set.
Table 1 Statistical Analysis of the Datasets in Our Work
Boson: This dataset was provided by the Boson Chinese Semantic Open Platform; it includes 2,000 sentences in total. There are 6 entity categories: time, location, person name, organization name, company name and product name.
PeopleNER: This data set is divided into two data files, with a total of approximately 286,000 sentences. The first data set includes the original text after splitting, and the second data set includes the corresponding tags one by one after splitting, with a total of 286,000 sentences. The main entity types include location, organization, person name and time.
GeoNER2021:
The data in this paper were obtained from the National Geological Archive of China Geological Survey (NGAC) website, with a total of 43 regional geological survey reports. The collected geological reports were manually labeled into six geological named entity categories GTM, GST, STR, ROC, MIN and PLA.
Experimental Environment and Parameter Settings
The model was trained and tested in Python 3.7.3 and TensorFlow 1.1. The experiments were conducted using the BERT-Base model, which contains 12 transformer layers, a 768-dimensional hidden layer and a 12-head multihead attention mechanism. The GRU network has a 128-dimensional hidden layer. The attention mechanism layer is set to 50 dimensions, and the maximum sequence length is set to 256. The optimization function is Adam; the learning rate is set to 5E−5; and the dropout layer is set to 0.5. The dropout layer is set to 0.5. All models were trained on a single GTX 3090 GPU (Table 2).
Evaluation Metrics
The performance is measured with the precision (named P), recall (named R) and F1-score. The detailed formula is calculated as follows: where TP refers to a positive sample and a positive prediction; FP refers to a negative sample and a positive prediction; and FN refers to a positive sample and a negative prediction.
In this work, each experiment is repeated five times, and we report the average F1-score as the final result (Table 3).
Comparative With Other Algorithms
We selected six mainstream algorithms for testing on three generic domains, named entity recognition datasets. The experimental results are shown in Table 4, which show that all the deep learning models obtained more than 85% performance on these datasets. Among them, BERT-BiGRU-CRF obtained the highest performance on all three datasets, with F1 values of 98.1%, 99%, and 99.1%. This finding further demonstrates that the model has a good generalization capability.
Also, six comparative experiments are conducted to validate the different deep learning-models on the Boson data set, which is a relatively small test data set, and the result is reported in Table 5. As seen in Table 5, the algorithm that achieves the best performance among all the models is BERT-BiG-RU-CRF. As we expected, BERT-BiGRU-CRF has good ability to generalize for identifying named entity. In addition, increasing comprehensive and representative gold data set can help BERT-BiGRU-CRF obtain better performance, and further illustrating the ability of BERT-BiGRU-CRF model to obtain higher performance even on small sample datasets.
Further, a series of experiments were designed to validate the performance of the six algorithms on the PeopleNER data set that contains a large number of named entities. Table 6 demonstrates the results. BERT-BiGRU-CRF achieves the best performance, reaching a precision, recall and F1-score of 0.981,0.983 and 0.982, respectively. This again validates that the model exhibits superior performance compared to mainstream deep learning models and is capable of better recognition in large scale corpora.
We compared six mainstreams, named entity recognition models on the geological named entity recognition data set that we constructed. The experimental results, which are shown in Table 7, suggest that the BERT-BiGRU-CRF model achieves the best performance among all deep learning-based models on the GEONER2021 data set. The second best model is the BERT-BiLSTM-CRF model. The experimental results demonstrate that the BERT model has better input sequence characterization, while the GRU module has better modeling results than BiLSTM.
The above three experimental results show that the BERT model can fully extract word-level, word-level and sentence-level features, and the pre-trained word vectors can better characterize the contextual semantics and enhance the generalization ability of the model, especially for the recognition of domain entities with small data set size, and also for the types of geological entities that consist of Chinese and numeric combinations, it can avoid the problem of word separation errors affecting the recognition of entities. It is also possible to avoid the problem that entity recognition is affected by word separation errors. Among the experiments, it is also found that the training time has increased significantly after incorporating the BERT model.
Impact of the Size of the Training Data Set
A set of primary experiments was conducted to determine how many training data from the corpus could affect the GNER model performance, so an optimal number of training data could be determined and utilized for improved performance. A total of 45 controlled experiments were conducted with training data whose proportion ranged from 10% to 100% (the step size for increasing is 10%). The experimental results, in terms of average F1-score, are demonstrated in Figure 6.
The experimental results show that the number of training datasets can improve the GNER performance. As shown in Figure 6, increasing the number of training datasets can increase the average F1-score. For instance, the BERT-BiGRU-CRF model achieved the least satisfactory performance with a 10% training data set, and increasing this proportion to 90% improved the average F1-score by 72%. Although increasing the number of training datasets could improve the performance of the model, optimal GNER performance cannot be achieved if the number of training datasets is solely increasing. For instance, when the number of training datasets was set to 100%, the Note. Bolded indicates the best performance indicator, highlighting.
Table 5
Performance of Different Models on the Boson Data Set average F1-score dropped by 2% compared to the best performance (demonstrated with a 90% proportion). Figure 6 also demonstrates that the BERT-BiGRU-CRF is sensitive to relatively minor changes in the number of training datasets, with a range of 10%-90%. For instance, increasing the number of training datasets from 50% to 90% did affect the average F1-score. Based on the experimental results, the optimal number of training datasets ranges between 80% and 90%.
Impact of Different Architectures
To evaluate the effects of different features and components on the overall performance of the presented methods, we conducted an additional set of experiments to assess the variants of BERT-BiGRU-CRF without different features and components, as illustrated in Table 8. These results demonstrate that each enhancement incorporated into the proposed model contributes to the improvement of the overall performance. The following model settings were evaluated: 1. The entire architecture (BERT-BiGRU-CRF), as introduced in Section 3.2 2. The entire architecture proposed in this study with a word embedding of low dimensionality (i.e., vectors of 150 dimensions instead of 200) 3. The entire architecture proposed in this study with a word embedding of high dimensionality (i.e., vectors of 300 dimensions instead of 200) 4. The entire architecture proposed in this study apart from the BERT representation layer 5. The entire architecture proposed in this study is separated from the CRF layer As is evident from Table 8, all aforementioned variants of BERT-BiGRU-CRF outperformed the traditional matching method, although slight differences were observed in scores obtained via the various approaches. The first two variants were assessed to determine the effect of lower or higher dimensionalities of the word embeddings in the architecture. The results demonstrate that the quality of the results depends on dimensionality. An appropriate dimension setting of the word embedding is necessary for this DNN model, as the use of a higher dimensionality requires a greater number of computational resources, producing slightly inferior results. The two final variants were assessed to determine the effects of specific layers within the proposed model. As expected, in either case, the performance of the model was slightly degraded in terms of each metric, indicating the important impact of the two layers on the performance of the overall model.
Extraction Results and Error Analysis
In this research, a set of experiments is conducted to validate the extracted NER results and analyze the errors. Some illustrative examples of geological NER results are demonstrated in Table 9. As shown in Table 9, basic geological nomenclature entities, such as the stratigraphic unit entities "Upper Cretaceous" and "Neoproterozoic" and rock entity "red molasses", can be effectively identified. In addition, the model can accurately identify the long entity with the remote geographic nomenclature "Nima County Zhang'en-Shenzha County Kargol", which is a nested entity composed of several independent words: Nima County, Zhang'en, Shenzha County, and Kargol. The result of identifying Nima County, Zhang'en-Shenzha County, and Kargol as geological named entities is output directly.
We also summarized and analyzed the model recognition errors, as shown in Table 10, and discovered the following main problems: (a) We cannot accurately recognize some consecutive entity characters separated by symbols only. For example, in the entity "Raja-Kanru Fault", "Raja" is not recognized as the character of the geological structure, but the "-" character followed by "Kangru" is employed as the starting character of this geological structure. Note. Bolded indicates the best performance indicator, highlighting.
Table 7
Performance of Different Models on the GeoNER2021 Data Set (b) The model only recognizes local information. For example, the rock entity of "medium basal volcanic rock" only recognizes "volcanic rock", and "medium basal" is marked as other characters.
The solution for Problem 1 can be constructed by constructing regular expressions to judge the entities on the left and right sides of the symbol "-" If both sides match the entity type, then the entity is considered as a whole. For Problem 2, regular expressions can also be employed to judge and fuse the lexical properties before identifying the entity, and of course, it is also possible to improve the generalization ability of the model by extending the data set.
Conclusions and Future Work
Geological named entity recognition is the basis step for acquiring information and extracting knowledge from massive geological reports or documents and enables further relationship extraction and construction of geological knowledge graphs. In this paper, we study and compare the mainstream, deep learning-based, named entity recognition methods in the geological domain to address the current challenges of named entity recognition in the geological domain and the descriptive characteristics of geological domain texts. An annotated corpus for geological domain, named entity recognition is constructed, and a named entity recognition model for geological domain texts based on the BERT, pretrained language model is proposed. We applied the proposed BERT-BiGRU-CRF model in four different datasets, and evaluated its performances in terms of precision, recall, and F1-score. The experimental results show that the method significantly outperforms the baseline model and other deep learning models in terms of performance evaluation metrics, such as precision, recall and F1-score for text named entity recognition in the geological domain.
The contributions of this research can be seen from two perspectives. From the perspective of methodology, this work presents a deep learning model (named BERT-BiGRU-CRF) approach. As indicated by the experiment results, the BERT-BiGRU-CRF has a better performance in extracting named entity than other deep learning models. From the perspective of application, this paper develops domain-geosciences NER data set for extracting geological NER and further knowledge discover.
Future research will focus on two aspects: (a) the geological domain corpus is an open domain corpus, and the construction of the corpus is a process of continuous updating and improvement. We will further improve and optimize the geological domain named entity annotation corpus, expand the classification of geological domain named entities, and enrich the corpus by various means such as offline collection; (b) research advanced deep learning models, explore the application of new deep learning models in the geological domain, named entity recognition task, optimize the existing model structure, design a more suitable named entity recognition model for the geological domain, and obtain better performance.
Conflict of Interest
The authors declare no conflicts of interest relevant to this study. | 7,947.8 | 2022-02-14T00:00:00.000 | [
"Geology",
"Computer Science"
] |
Can $f(R)$ gravity relieve $H_0$ and $\sigma_8$ tensions?
To investigate whether $f(R)$ gravity can relieve current $H_0$ and $\sigma_8$ tensions, we constrain the Hu-Sawicki $f(R)$ gravity with Planck-2018 cosmic microwave background and redshift space distortions observations. We find that this model fails to relieve both $H_0$ and $\sigma_8$ tensions, and that its two typical parameters $\log_{10}f_{R0}$ and $n$ are insensitive to other cosmological parameters. Combining the cosmic microwave background, baryon acoustic oscillations, Type Ia supernovae, cosmic chronometers with redshift space distortions observations, we give our best constraint $\log_{10}f_{R0}<-6.75$ at the $2\sigma$ confidence level.
constrained specific f (R) models with joint cosmological observations in recent years, there is still a lack of a direct test of the ability to alleviate H 0 and σ 8 tensions for f (R) gravity in light of Planck CMB data. Especially, due to three reasons: (i) the data of Planck-2018 full mission is released; (ii) H 0 tension becomes more serious than before; (iii) richer data from large scale galaxy survey to study DM clustering is gradually obtained, this is an urgent issue needed to be addressed. By implementing numerical analysis, we find that the Hu-Sawicki f (R) gravity cannot reduce H 0 and σ 8 tensions.
This work is organized as follows. In the next section, we introduce the basic equations of f (R) gravity and a specific f (R) model to be investigated in this analysis. In Section III, we display the data and analysis method. In Section IV, the numerical results are presented. The discussions and conclusions are exhibited in the final section.
II. f (R) GRAVITY
To construct a modified theory of gravity, one can introduce some terms such as R 2 , R µν R µν , R µναβ R µναβ , or R n R, when quantum corrections are taken into account. In f (R) gravity, different from the above high-order derivative gravity, the modification is just a function of Ricci scalar R. f (R) gravity was firstly introduced by Buchdahl [15] in 1970 and the readers can find more details in recent reviews [16,17]. The action is written as where f (R), L m and g denote a function of R, the standard matter Lagrangian and the trace of the metric, respectively. By varying Eq.(1), one can obtain the modified Einstein field equation where f R ≡ df /dR denotes an extra scalar degree of freedom, i.e., the so-called scalaron and T µν is energy-momentum tensor. In a spatially flat Friedmann-Robertson-Walker (FRW) universe, the equation of background evolution in f (R) gravity is expressed as where f RR ≡ df R /dR, N ≡ ln a, H is Hubble parameter, a is scale factor and ρ m is matter energy density.
We are also of interests to study the perturbations in f (R) gravity and just consider the linear part here. For sub-horizon modes (k aH) in the quasi-static approximation, the linear growth of matter density perturbations is shown as [18] where Ω m denotes the effective matter density ratio at present. The function X has the following form It is noteworthy that the function X in Eq.(4) induces a scale dependence of linear growth factor δ(k, a) in f (R) gravity, when the growth factor is just a function of scale factor a in GR. In general, a viable f (R) model should be responsible for the inflationary behavior in the very early universe, reproduce the late-time cosmic acceleration, pass the local gravity test, and satisfy the stability conditions. To efficiently investigate cosmological tensions in f (R) gravity, we consider the viable Hu-Sawicki f (R) model (hereafter HS model) [19] in this work and it is given by where µ and n are free parameters characterizing this model. By adopting R µ 2 , the approximate f (R) function shall be written as where R 0 is the present-day value of Ricci scalar and f R0 = f R (R 0 ) = −2Λµ 2 /R 2 0 . For the purpose of constraining this model with data, one should first obtain the evolution of background and perturbation by inserting Eq.(7) into Eqs. (3)(4).
To the best of our knowledge, there are three main methods to confront f (R) gravity with cosmological observations. The first is numerically solving the above equations in a direct way [20][21][22][23][24][25][26][27][28][29]. The second is adopting an approximate framework to obtain the analytic solutions of the above equations and this method, to a large extent, can save computational cost [30][31][32][33][34]. The third one is studying the effects of viable f (R) gravity on the large scale structure formation by using N-body and hydrodynamical simulations [35]. Note that the last method always spend more computational cost and storage space than two previous ones. The constrained 2-dimensional parameter spaces (Ωm, σ8) from the "C" (red) and "R" (blue) datasets are shown for ΛCDM, HS f (R) models with n = 1 and free n, respectively.
III. DATA AND METHOD
Since our aim is to study whether HS f (R) gravity can alleviate the H 0 and σ 8 tensions, first of all, we use the following two main datasets.
CMB: Although the mission of the Planck satellite is completed, its meaning for cosmology and astrophysics is extremely important. It has measured many aspects of formation and evolution of the universe such as matter components, topology and large scale structure effects. Here we shall use updated Planck-2018 CMB temperature and polarization data including likelihoods of temperature at 30 2500 and the low-temperature and polarization likelihoods at 2 29, namely TTTEEE+lowE, and Planck-2018 CMB lensing data [5]. We denote this dataset as "C". The constrained 2-dimensional parameter spaces (log 10 fR0, σ8) from the "C" dataset are shown for HS f (R) models with n = 1 (red), 2 (green), 3 (grey), 4 (orange) and free n (blue), respectively. The constrained 2-dimensional parameter spaces (log 10 fR0, σ8) from the data combination "CBSHR" are shown for HS f (R) models with n = 1 (red) and free n (blue), respectively. The magenta dashed line denotes log 10 fR0 = −6.
RSD:
To study the alleviation of σ 8 tension in f (R) gravity, we adopt the redshift space distortions (RSD) as our reference probe which is sensitive to large scale structure formation. Specifically, we use the so-called "Gold-2018" growth-rate dataset [36]. This dataset is denoted as "R".
Furthermore, to break the parameter degeneracy and give tight constraints on on free parameters of HS model, we also employ the following four probes.
BAO: By measuring the position of these oscillations in the matter power spectrum at different redshifts, the BAO, a standard cosmological ruler, can place constraints on the expansion history of the universe after decoupling and break the parameter degeneracy better. It is unaffected by errors in the nonlinear evolution of the matter density field and other systematic uncertainties. Specifically, we take the 6dFGS sample at the effective redshifts z ef f = 0.106 [37], the SDSS-MGS one at z ef f = 0.15 [38] and the BOSS DR12 dataset at three effective redshifts z ef f = 0.38, 0.51 and 0.61 [39]. Specifically, to constrain the HS f (R) gravity, we use the background quantity D A /r d as a function of scale factor a in the numerical analysis, where D A and r d are angular diameter distance and comoving BAO scale, respectively. To calculate the comoving sound horizon r d , we use the fitting formula given by Ref. [40]. This dataset is identified as "B". The marginalized constraints on the HS f (R) models with n = 1, 2, 3, 4 and free n using the "C" dataset are shown, respectively. For the typical parameter log 10 fR0, we quote 2σ (95%) uncertainties or bounds. The symbol "♦" denotes the parameter that cannot be well constrained by observed data. SNe Ia: SNe Ia, the so-called standard candle, is a powerful distance indicator to study the background evolution of the universe, particularly, the Hubble parameter and EoS of DE. In this analysis, we use the largest SNe Ia "Pantheon" sample today, which integrates the SNe Ia data from the Pan-STARRS1, SNLS, SDSS, low-z and HST surveys and encompasses 1048 spectroscopically confirmed points in the redshift range z ∈ [0.01, 2.3] [41]. In our numerical analysis, we use the full Pantheon sample and marginalize over the absolute magnitude parameter M . We refer to this dataset as "S".
Cosmic Chronometers: As a complementary probe to investigate the late-time evolution of the universe, we also include the cosmic chronometers in our numerical analysis. Specifically, we employ 30 chronometers to constrain the HS model [42]. Hereafter we denote this dataset as "H".
It is worth noting that we take the first method (see Section II), namely numerically solving the background and perturbation equations, to implement constraints on HS f (R) model. In order to obtain the posterior probability density distributions of model parameters, we incorporate the modified equations governing the evolution of background and perturbation of HS f (R) model into the public online packages CAMB and CosmoMC [43,44]. Specifically, we roughly calculate the Hubble expansion rate H(a) and the linear growth factor δ(k, a) at each step of a, and use a interpolating scheme to obtain the solutions H(a) and δ(k, a) with varying a. As a consequence, we can numerically obtain the corresponding cosmological observables to be confronted with data. The latter package can be used for implementing a standard Bayesian analysis via the Markov Chain Monte Carlo (MCMC) method to infer the posterior probability density distributions of parameters. We use the Gelman-Rubin statistic R − 1 = 0.1 as the convergence criterion of MCMC analysis. Meanwhile, to analyze the MCMC chains, we take the public package GetDist [45]. For To investigate comprehensively both H 0 and σ 8 tensions in HS model, we carry out the following numerical analysis. For H 0 tension, respectively, we constrain five models, i.e., n = 1, 2, 3, 4 and free n with the "C" dataset when keeping the typical parameter log 10 f R,0 free. For σ 8 tension, we just present the constraining results of the representative case n = 1 and the general one free n from "C" and "R" datasets, respectively. We also display the comprehensive constraints on two models (n = 1 and free n) by using the data combination "CBSHR". The corresponding χ 2 expressions for all the datasets can be found in Ref. [5].
IV. NUMERICAL RESULTS
For the purpose of studying the alleviation of two important cosmological tensions in the framework of HS f (R) models, our main numerical results are displayed in Fig.1 and Fig.2 and marginalized constraining results are presented The marginalized constraints on the HS f (R) models with n = 1 and free n are shown by using the "R" and "CBSHR" datasets, respectively. Similarly, for the typical parameter log 10 fR0, we quote 2σ (95%) uncertainties. The symbol "♦" denotes the parameter that cannot be well constrained by observed data. To some extent, one can predict its H 0 behavior via Eq.(7). In Fig.1, we have exhibited the constrained 2-dimensional parameter spaces (Ω m , σ 8 ) for five HS models, it is easy to see the large H 0 gap between CMB and HST observations. Only using the CMB data, we conclude that H 0 is insensitive to typical model parameter log 10 f R0 in all five models (see the right panel of Fig.1). To investigate the σ 8 tension, in Fig.2, first of all, we display the constrained Ω m -σ 8 plane for ΛCDM as a reference. Then, we present constrained Ω m -σ 8 planes for the commonly used HS model with n = 1 and for the complete HS model with free n, respectively. We find that the relatively small σ 8 discrepancy in both considered HS scenarios with a little larger parameter spaces (Ω m , σ 8 ) cannot be resolved and is still over 1σ level. This implies that the HS f (R) gravity cannot reduce current H 0 and σ 8 tensions, which is the key result of this work. It is also interesting to study the parameter degeneracy between log 10 f R0 and σ 8 . When only using CMB data, for five HS models, one may find that log 10 f R0 is positively correlated with σ 8 , which indicates that stronger deviations in HS f(R) gravity from GR lead to larger effects of matter clustering (see Fig.3). However, when using the combined dataset CBSHR, this positive correlation disappears and σ 8 seems to be insensitive to log 10 f R0 (see Fig.4). Meanwhile, we are also of interests to study the degeneracy between the additional parameter n and σ 8 , and find that the amplitude of matter clustering is insensitive to n regardless of the use of C or CBSHR datasets (see Fig.5). Furthermore, to study the degeneracies between parameters better, we exhibit the marginalized constraints on HS f (R) models with n = 1 and free n in Fig.6 and Fig.7, and obtain the following conclusions: (i) to a large extent, the parameter spaces are compressed when combining C with BSHR datasets; (ii) in all cases, two typical parameters log 10 f R0 and n are insensitive to other cosmological parameters, which is clarified for the first time in the literature.
We also find that when using only CMB data, the case of free n has the smallest χ 2 = 12958.0 but close to other ones, when using only RSD data, the case of free has a relatively better fitting than HS model with n = 1, and that when using the combined datasets CBSHR, these two cases present almost same χ 2 value. Therefore, we can not easily distinguish these HS f (R) variants from currently statistical analysis.
Moreover, in Tab.I, we can find that the best constraint log 10 f R0 < −4.02 at the 2σ confidence level originates from the case of n = 1 by only using CMB data, while two typical parameters log 10 f R0 and n in the free n case cannot be well constrained (see also Fig.6 and Fig.7). Subsequently, in Tab.II, we find that, when using RSD data alone, constraints on typical parameters of HS models are poor and smaller σ 8 values are obtained, which indicates that this RSD dataset gives a smaller effect of matter clustering at late times than the CMB observation. Interestingly, although the mean value of the constraint H 0 = 75 ± 10 km s −1 Mpc −1 from RSD data is consistent with the HST result, it has a much larger uncertainty. Finally, at the 2σ confidence level, we give our best constraint on the typical parameter log 10 f R0 < −6.75 in the case of n = 1, while log 10 f R0 < −6.60 in the free n case. It is worth noting that we still cannot provide good constraint on n even using the joint dataset CBSHR.
It is noteworthy that there are two interesting and tight constraints from large scale structure observations. In Ref. [21], the authors uses the galaxy clustering ratio, a sensitive probe of the nature of gravity in the cosmological regime, gives f R0 < 4.6 × 10 −5 at the 2σ level. Recently, in Ref. [29], the authors place constraints on chameleon-f (R) gravity from galaxy rotation curves and find that f (R) models within the range −7.5 < log 10 f R0 < −6.5 seem to be favored with respect to ΛCDM. Interestingly, our best constraint just lies in this range and this may give a clue of the correct living range for the HS f (R) gravity.
V. DISCUSSIONS AND CONCLUSIONS
Recently, the H 0 and σ 8 tensions under the standard cosmological paradigm have re-activated a wide variety of alternative cosmological models. However, all the time, there is a lack of direct tests of f (R) gravity in resolving both tensions. To address this urgent issue, we confront the popular HS f (R) gravity with current observations. By testing five specific HS f (R) models with observational datasets, we obtain two main conclusions: (i) HS f (R) gravity cannot resolve both H 0 and σ 8 tensions; (i) the typical parameters log 10 f R0 and n are insensitive to other cosmological parameters. Meanwhile, in the HS f (R) model with n = 1, we give our best constraint log 10 f R0 < −6.75 at the 2σ confidence level.
It is noteworthy that a coupling between matter and geometry in the framework of f (R) gravity may help resolve these tensions, and that other f (R) gravity models may relieve both discrepancies much better than the considered HS f (R) one. We expect that future high-precision CMB and SNe Ia observations and independent probes such as gravitational waves could help reduce or even solve these intractable cosmological tensions. | 3,964.8 | 2020-08-10T00:00:00.000 | [
"Physics"
] |
Special Issue on E-Health Services.
The importance of e-health to citizens, patients, health providers, governments, and other stakeholders is rapidly increasing. E-health services have a range of advantages. For instance, e-health may improve access to services, reduce costs, and improve self-management. E-health may allow previously underserved populations to gain access to services. Services utilizing apps, social media, or online video are rapidly gaining ground in most countries. In this special issue, we present a range of up-to-date studies from around the world, providing important insights into central topics relating to e-health services.
Introduction
Increasing and aging populations with more chronic illnesses are straining health services in both developed and developing countries. In this situation, the prevention of disease and the encouragement of healthy lifestyles becomes even more important. A shift of this kind necessitates more active patient engagement and patient involvement in health care, and e-health has to play a central part in this process. E-health services may be easier to access than traditional services in remote and rural areas and reduce the time spent by users on travel and appointments. It may be easier to offer the services to many people at a low cost. Thus, e-health services may improve the immediacy of access as well as the equality of access to quality health information and improve self-management and thereby help to alleviate the burden on health services. In addition, e-health can improve the quality of health services by increasing shared decision-making and by empowering citizens, patients, and health care professionals [1,2]. E-health can be defined as the use of information and communication technology for the enablement or improvement of health care [3]. Rapid technological development with increasing internet access around the world and the pervasiveness of smartphones makes e-health relevant to all. The growing coverage of mobile phones in low-and middle-income countries is allowing access to health information and other e-health services to people in underserved areas [4][5][6].
E-health has expanded from web-based services to mobile health apps, online video services, and social media, and new services and technologies are constantly being presented. A few examples of e-health services that are already in use in many countries around the world are online consultations, electronic patient records, digital radiological systems, decision-support tools, self-help apps, telemonitoring, and e-prescriptions.
The most frequently used e-health service, by far, is online health information, and studies from Europe and the US suggest that more than half of the general population and most internet users have used the internet to search for information about health and illness [2,[7][8][9]. Of those who do search for health information online, about 6 in 10 take some type of action based on the information they find online [2,8]. While the access to e-health information and other services is increasing around the world, there still remains a divide between those who use and those who do not use these digital tools. This divide is linked to several factors, including, not least, socio-economic differences [2,9].
While most online health information searching seems to start at a search engine, social media such as Facebook and Twitter and online video services such as YouTube are likely to play an increasingly important role as sources of health information in the future [10,11]. Patients will be engaged in participating in their health care through a range of applications, including social media and mobile apps [12]. While social media now seem often to be used to provide information and support, mobile apps seem to be popular, especially for lifestyle issues such as exercise and dieting and for the self-monitoring of other health and illness variables. However, while the applications are beneficial to most, they may also, in some cases, worsen people's health by spreading misinformation or encouraging eating disorder behaviors or self-harm [4,[13][14][15]. In any case, as it stands today, many health care services do not seem to be fully utilizing the potential of these media that are rapidly gaining ground. Therefore, more research needs to be done on how to move from innovation into adoption in daily practice.
The Special Issue
In this special issue of the IJERPH on e-health services, we have included papers that cover a wide repertoire of services and methodological approaches, especially from medical, psychological, and societal perspectives. While the extent of e-health services is too big to be covered in full in this special issue, our included papers provide insight into several central topics within the field of e-health. We believe we have created a special issue that will provide readers with up-to-date insights into e-health services from around the world that would help researchers and other stakeholders to shape a better future.
Del Hoyo and collaborators [16] describe a study where they worked to adapt the TECCU telemonitoring app to IBD-patients' needs and preferences. Drawing on a qualitative methodology involving successive focus groups, they identified three main themes that were central to the discussions: platform usability, the communication process, and the platform content. The app was valued for its usability and personalized monitoring. Through the study, further improvements were made to the app's messaging system, and educational content was continuously updated.
In their paper on e-health communities for doctors, Li and collaborators [17] study the interaction of 102 doctors in the "Lilac Forum". They found that the frequency of interaction between the participants varied due to factors such as differences in their professional standing (titles) and differences in degree of participation.
The paper from Misawa and co-authors [18] describes a case where the rate of Japanese citizens receiving colorectal cancer examinations had significantly increased due to the application of machine learning and nudge theory. In this study, machine learning-based on historical data from designated periodical health examinations, digitalized medical insurance receipts, and medical examination records for colorectal cancer-was used to deduce segments to whom receiving the examination was recommended. As a result, 3264 (26.8%) out of the 12,162 recommended subjects received the examination, which exceeded the upper end of the initial plan (19.0%).
The erroneous use and overuse of antibiotics has led to problems of bacterial resistance to antibiotics, increases health care costs, and gives patients unnecessary side-effects. E-health systems such as the Rational Antibiotic Use System (RAUS) may help with information, prescription support, and the monitoring of antibiotic usage. Shanshan Guo et al. [19] explore the impact of the RAUS on a large Chinese hospital. The findings suggest that the implementation of the system did not result in financial losses to the hospital, although the prescription of antibiotics was reduced-thereby providing encouragement for other hospitals to implement programs to reduce antibiotic prescribing.
In their paper on the "COPD-Life" program, Charlotte Simonÿ and coauthors [20] describe the program's rationale and content. The program for COPD patients was delivered as a study intervention by an interprofessional team of clinicians collaborating from both the hospital and the municipal health care system in Denmark. Making use of two-way audio and visual communication software, 15 patients participated in the intervention via a tablet computer from their private setting. The intervention contained elements of instruction, conversation and exercise and aimed to draw on e-health to empower the patients to take better care of themselves.
In their paper on quality of life in patients following pacemaker implantation in a Norwegian hospital, Remedios López-Liria et al. [21] compare follow up through standard outpatient visits to follow up through remote monitoring. While the health-related quality of life was slightly better after 12 months in the group that received standard follow up, the difference was not statistically significant. Moreover, the frequencies of emergency visits and re-hospitalizations did not differ between the groups, suggesting that remote follow up should be further explored as an option for this group of patients.
In an analysis drawing on structural equation modeling, Yuan Tang and co-authors [22] examine which factors are of importance to patients' acceptance of online medical websites in China. Based on their analyses, they propose a modified technology acceptance model and conclude that there is a need to further improve trust and to reduce the perceived risk to users in order to increase acceptance of the service.
There is a growing trend of individuals seeking health information on social media. Health authorities could benefit from these media to disseminate validated health information. By identifying engaging factors to their social media posts, they could further enhance the impact of the information. In an observational study by Afiq Izzudin A. Rahim et al. [23], the factors associated with engagement rates on the Facebook page of the Ministry of Health of Malaysia were analyzed. They found that only 39% of the posts by the Ministry of Health had good engagement rates. The posts that were the most successful were typically on the topics of health education or risk communication, included a video, and were posted in the afternoon or after office hours. The authors' findings imply that taking these factors into consideration when posting on social media could further improve engagement rates and thereby the successful dissemination of important health-related information to the public.
In their paper, Sabina Asensio-Cuesta and co-authors [24] describe the development and assessment of an app for cell phones. The purpose of the app was to monitor the physical, psychological, social, and environmental aspects of patients receiving cancer treatment to indicate their quality of life (QoL). The authors tested the app in a pilot study with university volunteers from Spain and concluded that they could verify the plausibility of detecting human activity indicators directly related to QoL.
Kolasa and collaborators [25] performed a systematic literature review of assessment guidelines for digital health interventions. In the 11 identified guidelines, safety, clinical effectiveness, usability, economic aspects, and interoperability were most often discussed. Based on the review, the authors present important recommendations, including on methodology.
In a review, Almeida and collaborators [26] examine studies that assess the usability of pain-related apps. A main finding was that a majority of the studies did not use valid instruments or a triangulation of methods to assess usability. Drawing on their findings, the authors present recommendations for future studies in the field.
In their review of which sociodemographic factors influence the use of e-health in individuals affected by chronic conditions, Fabienne Reiners and co-authors [27] find that e-health seems to be the least used by the individuals that might need it the most, such as older individuals affected by chronic diseases, with low incomes and low educational levels, living in rural areas. Drawing on their review findings, the authors recommend tailoring the delivery of e-health services to address the inequality in the use of e-health, for instance, by using different ways of delivering the information or using different devices.
In a study protocol, Anish Menon and co-authors [28] describe a pilot randomized controlled trial (RCT) for a mobile diabetes management system to support adults with type 2 diabetes. The system comprises of a mobile app, automated text-messaging feedback, and a clinician portal. Blood glucose level (BGL) data are automatically transferred by a Bluetooth-enabled glucose meter to the clinician portal via the mobile app. The outcome measures of the study are firstly, to improve glycemic control and secondly, to improve the patient experience, reduce reliance on physical clinics, and decrease service delivery costs.
Conclusions
As we have shown in the papers that we have included in this special issue, e-health covers a wide range of topics and methodologies. The services presented in this special issue show great promise, and some may have clear advantages compared to the current standard approaches that they might be either supplementing or replacing.
However, while many e-health services show great promise in trial phases, their implementation into the health services has often proven to be more difficult. The adoption of e-health is challenging because it involves not only individual patients and clinicians, but also very large and complex organizations with a range of organizational, bureaucratical, and managerial components [29]. Moreover, the implementation of new e-health services is often not sufficiently financially incentivized-an important issue that needs to be addressed by policy-makers that aim to encourage e-health use. The relationship between providers and patients is a cornerstone of health care [30][31][32], and e-health services that are able to integrate this perspective could be more likely to stand the test of time.
Despite these challenges, we are convinced that as technological development progresses, we are likely to see the testing and widespread implementation of increasingly more advanced e-health services that will help to provide better care for patients and reduce the strain on traditional services and costs to individuals and society.
Author Contributions: The Special Issue on E-health Services was edited jointly by R.W., E.G., J.-A.K.J. and V.T. This editorial was written jointly by the editors. All authors have read and agreed to the published version of the manuscript. | 2,975 | 2020-04-01T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
SLC4A4 promotes prostate cancer progression in vivo and in vitro via AKT-mediated signalling pathway
Background Prostate cancer (PCa) is the second leading cause of cancer-related male deaths worldwide. The purpose of this study was to investigate the effects of homo sapiens solute carrier family 4 member 4 (SLC4A4), which encodes the electrogenic Na+/HCO3− cotransporter isoform 1 (NBCe1), in the development and progression of PCa. Methods The expression levels of SLC4A4 in PCa and normal prostate tissues were evaluated by immunohistochemistry. The SLC4A4 knockdown cell model was structured by lentiviral infection, and the knockdown efficiency was validated by RT-qPCR and Western blotting. The effects of SLC4A4 knockdown on cell proliferation, apoptosis and cycle, migration, and invasion were detected by Celigo cell counting assay and CCK-8 assay, flow cytometry analysis, wound-healing, and Transwell assay, respectively. Tumor growth in nude mice was surveyed by in vivo imaging and Ki-67 staining. Furthermore, underlying mechanism of SLC4A4 silence induced inhibition of PCa progression was explored by human phospho-kinase array. Results Our results revealed that SLC4A4 expression was up-regulated in PCa tissues and human PCa cell lines. High expression of SLC4A4 in tumor specimens was significantly correlated with disease progression. SLC4A4 knockdown inhibited cell proliferation, migration and invasion, while facilitated apoptosis, which was also confirmed in vivo. Moreover, SLC4A4 promoted PCa progression through the AKT-mediated signalling pathway. Conclusion The results of this study indicated that SLC4A4 overexpression was closely associated with the progression of PCa; SLC4A4 knockdown suppressed PCa development in vitro and in vivo. SLC4A4 acts as a tumor promotor in PCa by regulating key components of the AKT pathway and may therefore act as a potential therapeutic target for PCa treatment.
the occurrence and progression of PCa remains poorly understood. The cellular carcinogenesis process is multistep and complex, involving multiple factors and genes [5,6], along with alterations in the expression modes of various genes, which in turn influence cell proliferation, apoptosis and differentiation [7]. PCa is a highly heritable disease with a strong genetic component. Thus, it is particularly significant to identify the genetic risk factors for PCa and search for new therapeutic targets.
Homo sapiens solute carrier family 4 member 4 (SLC4A4) is a family member of the solute carrier family and encodes an electrogenic Na + /HCO 3 − cotransporter, which is mainly involved in the secretion and absorption of sodium bicarbonate [8]. This process is highly important for maintaining the dynamic pH equilibrium within cells. SLC4A4 and other solute carrier family members have been found to be associated with tumorigenesis and tumor development [9]. For instance, Destruction of SLC4A4 or SLC4A9 by genetic or pharmacological methods have been reported to acidify intracellular pH and suppress cancer cell growth [10]. Accumulating evidence showed that the expression of SLC4A4 is different in a variety of malignant tumors. MicroRNA 223-3p inhibited the expression of SLC4A4 in clear cell renal cell carcinoma, promoting the cancer proliferation and metastasis [11,12]. SLC4A4 expression was also shown to be downregulated in thyroid cancer, providing diagnostic efficacy in clinical practice [13]. Moreover, SLC4A4 expression was shown to be higher in chronic myeloid leukemia and mucinous epithelial ovarian cancer than in adjacent normal tissue [14], suggesting that the biological processes in which SLC4A4 is involved are tumor-specific. Although SLC4A4 can indicate the prognosis of patients with colon adenocarcinoma and some other kinds of tumors [15,16], its significance in PCa has not been revealed.
Patient specimens and immunohistochemical staining
Tumour tissues and adjacent paired non-tumour tissues were gathered from patients who were diagnosed with PCa and underwent surgical excision in Renmin Hospital of Wuhan University (Wuhan, China) between June 2018 and January 2020. PCa and normal prostate tissues were gathered from 74 patients; the age range of the patients was between 26 and 87(mean age, 65 years).
Immunohistochemistry (IHC) was used to detect the expression of SLC4A4 in these tissues. Paraffin-embedded sections were dewaxed, the antigen was retrieved, and then the paraffin sections were incubated using primary antibody against SLC4A4 (cat. No. bs-21660R; Bioss) (1:200) and secondary antibody goat anti-rabbit (cat. No. A0208; Beyotime) (1:400). After staining, ten fields (× 100 magnification) were chosen to be captured and analyzed with the optical microscope (Olympus, Japan) for each section. The SLC4A4 staining intensity was scored on a range from 0 (negative), 1 (weak), 2 (positive + +) and 3 (positive + + +). Median IHC score was used to distinguish the high/low expression of SLC4A4.
Cell culture
Human PCa cell lines DU 145, LNCaP and PC-3 were purchased from the Cell Bank of Chinese Academy of Sciences and cultured in RPMI-1640 medium at 37 °C in a humidified incubator containing 5% CO 2 . The cell culture media were supplemented with 10% fetal bovine serum (FBS) and 1% sodium penicillin G/streptomycin sulfate (P/S). The normal prostate epithelial cell line RWPE-1 was purchased from the Cell Bank of Chinese Academy of Sciences and cultivated in keratinocyte serum-free medium (K-SFM) containing 0.05 mg/ mL bovine pituitary extract (BPE), 5 ng/mL epidermal growth factor (EGF) and 1% P/S.
Construction of target gene interference lentivirus
To knock down the expression of SLC4A4 in PCa cell lines, the short hairpin RNA (shRNA) sequence targeting the human SLC4A4 gene was identified as 5'-TTA TTC TTC AGC TGG TCC TTC-3' . Meanwhile, the target sequence of the negative control shRNA was identified as 5'-TTC TCC GAA CGT GTC ACG T-3' . Oligomers were annealed and ligated to BR-V121 lentiviral vector (Yibeirui, Shanghai, China) through Age I/EcoR I restriction site to produce Lv-shSLC4A4 and Lv-shCtrl. Lastly, the sequencing was performed to validate the construct results.
The modified BR-shRNA plasmid and pMD2.G and pSPAX2 helper plasmids were transfected three times with Lipofectamine 2000 into HEK-293 T cells to obtain lentiviruses. Next, the lentiviral particles were gathered, filtered, and preserved. The knockdown efficiency of SLC4A4 was evaluated by RT-qPCR and Western blotting.
Cell transfection and fluorescence imaging
PCa cells were respectively transfected with 1 × 10 7 TU/ ml of the lentivirus containing shRNA interfering with SLC4A4 (shSLC4A4) or shRNA for negative control (shCtrl), and the cells were then incubated at 37 °C for three days. The expression of green fluorescent protein (GFP, carried by the lentiviral vector) was observed by fluorescent microscope (EMD Millipore), and the ratio of fluorescent cells to total cells (viewed in white light) was used to evaluate the transfection efficiency.
Reverse transcription and real-time quantitative PCR (RT-qPCR) assays
PCa cells were gathered and centrifuged. Total RNA was extracted with the TRIzol reagent. Complementary DNAs (cDNAs) was synthesized with the PrimeScript ™ RT reagent Kit (Takara). The qPCR reaction system was prepared using a real-time quantitative PCR instrument according to product specification. The reaction system was composed of the following reagents: TB Green ® Premix Ex Taq ™ II, Forward and Reverse primers (Sangon Biotech), reverse transcription products and RNase-free H 2 O. The thermal cycling conditions were: pre-denaturation at 95 °C for 30 s, denaturation at 95 °C for 15 s, annealing at 60 °C for 10 s for a total of 42 cycles, and 72 °C for 5 min (extension). GAPDH was used as an internal reference. The relative expression levels of genes were calculated as the method of the 2 −ΔΔCt [17]. The sequences of the main primers are as follows: GAPDH forward, 5'-TGA CTT CAA CAG CGA CAC CCA-3' and reverse, 5'-CAC CCT GTT GCT GTA GCC AAA-3'; SLC4A4 forward, 5'-AAG CTC TTT CGG CAA TTC TCTTC-3' and reverse, 5'-GAA ACT CTC CAA CAC GCC CTG-3' .
Western blotting
PCa cells were washed twice with ice-cold PBS, lysed with cell lysis buffer (Beyotime) containing protease inhibitors, and incubated on ice for 15 min. The supernatant was harvested by centrifugation, and the protein content was measured with the BCA Protein Assay Kit (cat. No. 23225; HyClone-Pierce). Equal amounts of proteins were separated and transferred onto polyvinylidene difluoride (PVDF) membranes (0.45 μM). Subsequently, the membranes were blocked with TBS + 0.1% Tween-20 (TBST) buffer containing 5% skim milk and incubated with various primary antibodies. Following being washed with TBST, the membranes were blotted with HRP-conjugated secondary antibodies. The target bands were visualized with immobilon Western Chemiluminescent HRP Substrote (cat. No. RPN2232; Millipore). GAPDH was used as an internal reference.
The antibodies used in this experimental study were BAX
Celigo cell counting assay
Following infection with shRNA lentivirus, PCa cells were trypsinized, resuspended, counted and then inoculated in 96-well plates. Each group underwent a minimum of 3 duplicate wells. From the second day, the plates were tested by Celigo Imaging Cytometry System once a day for 5 consecutive days. The number of green fluorescent cells in each scan plate was precisely calculated by adjusting the input parameters of the analysis setup; the data were then plotted statistically, and the cell proliferation curve for 5 days was drawn.
Cell Counting Kit-8 (CCK-8) assay
CCK-8 (Dojindo, Shanghai) assay was used to determine the number of living cells. The cell suspension was inoculated with 2 × 10 3 cells per well into a 96-well plate and pre-incubated for 24 h after transfection. Treated or untreated cells were cultured as appropriate. 10 µl of CCK-8 solution was added to each well, and the cells were incubated for 2 h at 37 °C. The absorbance at 450 nm was recorded with a TECAN infinite M200 Multimode microplate reader (Tecan, Mechelen, Belgium). All detections were conducted in triplicate.
Flow cytometry analysis
Apoptosis was detected by the flow cytometry analysis. The infected PCa cells were cultured until the cell density reached 85%. The cells were trypsinized and resuspended, centrifuged for 5 min. The supernatant was discarded, and cell precipitates were washed with D-Hank's solution (pH = 7.2-7.4) precooled at 4 °C. The cells were washed with 1 × binding buffer, centrifuged, and resuspended. The cell suspensions (1 × 10 5 -1 × 10 6 cells) were stained by 10 µl Annexin V-APC and protected from light for 15 min [18]. Subsequently, 400-800 µl of 1 × binding buffer was added depending on the amount of cells. At last, the cells were tested by Guava easyCyte HT flow cytometer and analysed with FlowJo VX10.
Cell cycle distribution was analysed with the flow cytometry. When 6 cm dish cells in each experimental group grew to about 80% coverage, the cells were fixed at least 1 h after the washing. Afterwards, cells were washed and resuspended in PBS containing PI and RNase A. Finally, the cells were tested by flow cytometer, and the percentage of the cells in G0/G1, S and G2/M phases were visualized by a ModFit. All detections were performed in triplicate.
Scratch test
The aim of this assay was to assess the migration ability of cells after transfection. When the cells were dense in the microscopic field, three standardized wounds per well were scratched with the tip of a sterile pipette. Then, the cells were cultured in serum-free medium. Wound sizes were photographed with a phase-contrast microscope at 0 and 24 h, respectively. The five randomly selected fields were adopted to calculate the rate of wound healing using ImageJ software.
Transwell invasion assay
Diluted Matrigel (Corning, USA) was added to transwell upper chambers 12 h prior to the experiment and placed at 37 °C for solidification. The PCa cells (1.0 × 10 5 cells per chamber) were inoculated in the upper chambers by resuspension with basal serum-free medium, and the lower chambers was added with medium containing 20% FBS to attract cells to penetrate the membrane. Following incubation for 24 h, those cells on the outside surface of the chambers, having invaded through the membrane, were fixed using 4% paraformaldehyde (500 µl per chamber) for 20 min and stained using 0.1% crystal violet for 20 min. Cells on the upper surface and remaining Matrigel were wiped off by cotton swabs. At last, the numbers of stained cells in five different views per chamber were counted.
Xenograft animal model
To study in vivo tumor growth, four-week-old BALB/c nude mice, weighing 18-20 g, were purchased from Beijing HFK Bioscience Co. Ltd. All 20 mice were placed in SPF housing conditions.
After a week of acclimatization in Animal Experiment Center of Renmin Hospital of Wuhan University, 20 mice were randomly divided into the control group (shCtrl) and the test group (shSLC4A4) (n = 10 mice/ group). Since LNCaP was an androgen-dependent cell line, the tumorigenic rate of subcutaneous inoculation in nude mice alone was very poor [19]. DU 145, on the other hand, was an androgen-independent line with low differentiation and better tumorigenic effect [20,21]. The shRNA lentivirus-infected DU 145 cells were digested, suspended and injected subcutaneously into right forelimb axilla of each mouse (serum-free medium containing 4 × 10 6 cells). Those 20 mice were reared for 31 days, during which length and width of the tumors in mice were measured five times using the Vernier caliper. On day 31, the mice were injected intraperitoneally with D-Luciferin, anesthetized with an intraperitoneal injection of 0.7% pentobarbital sodium and placed under the animal multispectral living imaging system for imaging. Next, the mice were sacrificed with cervical dislocation, and tumors autopsied from the mice were weighed and photographed. Tumor volume in mm 3 (V) was calculated based on the formula: V = π/6 × L × W × W, where L represents length and W represents width of the tumor.
Ki-67 staining
Paraffin sections of tumor tissues taken from mice were dewaxed, rehydrated in a decreasing ethanol gradient, and incubated with the anti-Ki-67 antibody (cat. No. ab16667; Abcam). After washing with PBS, the paraffin sections were incubated with secondary antibody goat anti-rabbit (cat. No. ab97080; Abcam), counterstained with hematoxylin, and Ki-67 expression was observed under an optical microscope. Ten fields of each section were captured for analysis, and this experiment was repeated three times.
Human Phospho-Kinase array (proteome profiler)
To explore the potential downstream signal pathways and functional targets of SLC4A4 in PCa, Human Phospho-Kinase Array Kit (cat. no. ARY003C; Bio-Techne China Co., Ltd.) was employed. PCa cells transfected with shCtrl or shSLC4A4 were lysed. Meanwhile, 8 nitrocellulose membranes (4 Part A, 4 Part B, each containing 39 different capture antibodies printed in duplicate) were closed with 2 ml of Array Buffer 1 (block buffer) for 1 h. Then, the samples were piped to wells and incubated overnight. After washing, 1 × biotinylated antibody cocktail was added to each well and incubated. Afterwards, diluted Streptavidin-HRP was pipetted into each well and incubated. Followed by the washing of membranes, any excess Wash Buffer was blotted off, and 500 µl of Chemi Reagent Mix (equal vol. of Chemi Reagent 1 and 2) was added to each membrane. In the end, the signal density was measured with the chemiluminescence imaging system and analysed with the ImageJ. This experiment was performed in duplicate.
Statistical analysis
SPSS 23.0 and Graphpad Prism 8 were used for data analysis. The quantitative data were presented as the mean ± standard deviation (SD). Chi-squared tests were performed to compare the differences in SLC4A4 expression among PCa patients. Spearman rank correlation analysis was used to analyse the correlation between SLC4A4 expression and clinicopathological features. The histograms of SLC4A4-related signal molecules in carcinoma cell were plotted by SignaLink 2.0 analysis. P-value < 0.05 was considered a statistically significant difference.
Expression of SLC4A4 in clinical prostate specimens
For the purpose of determining the effect of SLC4A4 on the development of PCa, the expression of SLC4A4 in clinical PCa and normal prostate tissues was examined by IHC. As demonstrated in Fig. 1A, the results of 192 pathological sections confirmed the cytoplasmic localization of SLC4A4 and indicated that the expression of SLC4A4 in PCa tissues was distinctly higher than that in normal prostate tissues (P < 0.001; Table 1), which can be used for follow-up statistical analysis of clinicopathological data.
Expression of SLC4A4 in clinicopathological data of PCa patients
The relationship between the levels of SLC4A4 expression and the clinicopathological characteristics in PCa patients was assessed by statistical analysis. The high/ low SLC4A4 groups were divided depending on median of IHC scores of all tissue specimens. The results demonstrated that the levels of SLC4A4 expression were remarkably different between patients at different clinical stages, T Infiltrate and lymphatic metastasis (P < 0.05; Table 2). According to Spearman rank correlation analysis, the clinical stage, T Infiltrate and the risk of lymphatic metastasis were positively correlated with high SLC4A4 expression (Table 3). These results suggested that the expression of SLC4A4 was increasing with the deepening of tumour malignancy.
Knockdown of SLC4A4 in PCa cells
The results of qRT-PCR verified that SLC4A4 mRNA expression was significantly higher in the PCa cell lines than in the normal prostate epithelial cell line RWPE-1 (Fig. 1B). For investigating the roles of SLC4A4 in PCa, SLC4A4-targeting shRNA was cloned into GFP-carrying lentiviral vector. Afterwards, shSLC4A4 or shCtrl lentivirus was transfected into human PCa cell lines. Where the detailed plots of DU 145 and LNCaP are presented in Fig. 1C. The fluorescent signal inside cells, which have been infected with shCtrl or shSLC4A4 for 72 h, observed by the microscope, reveal a > 80% transfection efficiency in both cell lines. The knockdown efficiency of SLC4A4 was examined using RT-qPCR on mRNA level. The results demonstrated that, compared with shCtrl group, the knockdown efficiency of SLC4A4 in shSLC4A4 group was 59.24% (P < 0.001) in DU 145 cells; the knockdown efficiency of SLC4A4 in shSLC4A4 group was 82.79% in LNCaP cells (P < 0.01; Fig. 1D). Furthermore, results of western blotting also indicated that the expression of SLC4A4 protein was distinctly down-regulated after infection in comparison with shCtrl cells (Fig. 1E). These results implied that the SLC4A4-knockdown cell models were successfully constructed.
Knockdown of SLC4A4 inhibits PCa cell proliferation and facilitates apoptosis
To observe the effect of SLC4A4 on cell proliferation, the growth curves of PCa cells in 5 days were depicted by Celigo Imaging Cytometry System. This is presented in Fig. 2A, B, cell proliferation is obviously suppressed in shSLC4A4 group in comparison with shCtrl group. In both DU 145 and LNCaP cells, the cells of shSLC4A4 group exhibited a slower proliferation rate (P < 0.001, fold change = − 4 and − 6.5, respectively). Flow cytometry analysis was used for detecting the effect of SLC4A4 knockdown on PCa cells apoptosis. Compared with the shCtrl group, the percentage of apoptotic cells in shSLC4A4 group was increased by 2.9-fold in DU 145 cells and 9.4-fold in LNCaP cells (P < 0.001; Fig. 3A, B), implying that SLC4A4 knockdown facilitated apoptosis among PCa cells. Furthermore, it was found that, after SLC4A4 knockdown, more cells were stalled in G2 phase and fewer in G1 and S phases than those transfected with shCtrl (Fig. 3C, D). Collectively, knockdown of SLC4A4 inhibited PCa development in vitro.
Knockdown of SLC4A4 inhibits the mobility of PCa
Scratch test detected that, after being transfected with the corresponding lentivirus, in contrast to shCtrl group, the migration rate of cells in shSLC4A4 group (24 h) was reduced by 16% (P < 0.01) in DU 145 cells and 68% (P < 0.001) in LNCaP cells (Fig. 4A), respectively. Simultaneously, the outcomes of transwell assay suggested that, the invasion rate of cells in shSLC4A4 group was reduced by 89% (P < 0.001) in DU 145 cells and 93.6% (P < 0.001) in LNCaP cells (Fig. 4B).
Effect of SLC4A4 knockdown on tumour progression in vivo
In order to investigate the potential of shSLC4A4 as a therapeutic target for PCa, the nude mouse xenograft model was established with DU 145 cells. The transfection efficiency of infecting into DU 145 cells with shSLC4A4 or shCtrl lentivirus was identified to be > 80%, and these cells were then injected subcutaneously into the 20 mice. After intraperitoneal injection of D-fluorescein into the 20 mice, the bioluminescence intensities (µW/cm 2 ) were calculated by in vivo imaging. In contrast to shCtrl group, the bioluminescence intensity of shSLC4A4 group was lowered by 93% (P < 0.001; Fig. 5A, B). In addition, compared to shCtrl group, the tumors from mice of shSLC4A4 group were smaller in diameter at all five measured stages, and were lower in weight (P < 0.05; Fig. 5C-E). The above results indicated that the tumor growth was slower in shSLC4A4 group (P < 0.05). Besides, Ki-67 staining showed that the proliferative potential of PCa cells was obviously restrained in shSLC4A4 group in contrast to shCtrl group (Fig. 5F). These results support that SLC4A4 knockdown can inhibit tumour progression in vivo. Altogether, the findings imply that shSLC4A4, which targets explicitly SLC4A4, may have a potent prohibitive effect on prostate tumorigenesis in vivo.
Variation of expression of related proteins after SLC4A4 knockdown
The results of western blotting indicated that after SLC4A4 interfered, compared with shCtrl group, the expressions of proteins CDK4, BCL-2 were down-regulated. In contrast, the expressions of proteins BAX, FAS were up-regulated (Fig. 6D). These protein bands have further verified that the knockdown of SLC4A4 could induce PCa apoptosis and boost the BAX/BCL-2 ratio, exhibiting a potent effect of SLC4A4 on restraining apoptosis and cell cycle. In other words, the results indicate that SLC4A4 promotes the progression of PCa.
SLC4A4 promotes PCa progression through the AKT pathway
Activation of PI3K/AKT can lead to diverse biological activities, like immunity, inflammation, cell proliferation, apoptosis, and tumorigenesis [22][23][24]. In the present study, we detected whether the AKT activator SC79 could reverse the influence of SLC4A4 knockdown on PCa cells. Compared with shCtrl group, the level of SLC4A4 protein was down-regulated, AKT had no significant change, p-AKT expression was down-regulated, BAX expression was upregulated and BCL-2 expression was down-regulated in shSLC4A4 group (Fig. 7A). Compared to the shSLC4A4 group without SC79 treatment, the treatment with SC79 of the shSLC4A4 + SC79 group produced the opposite effects on the expression of the proteins (p-AKT, BAX, BCL-2). The p-AKT level was clearly enhanced upon SC79 treatment (Fig. 7A). These results demonstrated that SC79 partially reversed the inhibitory action of SLC4A4 knockdown on PCa cells. In order to research the roles of AKT pathway in SLC4A4-mediated cell viability, apoptosis and mobility, rescue experiments were conducted as well. In the existence of SC79, compared with shSLC4A4 group, the cell proliferation was clearly elevated and the apoptosis rate was obviously reduced in shSLC4A4 + SC79 group (Fig. 7B-D). Simultaneously, the migration and invasion abilities of the cells upon the SC79 treatment were meaningfully increased (Fig. 8A, B). Altogether, these findings confirmed that SLC4A4 could promote PCa progression through regulating the AKT pathway.
Discussion
The genesis and progression of PCa is a complex process containing multiple steps and genes [25,26], it is therefore of great theoretical and practical significance to illustrate the abnormal expression of genes during prostate carcinogenesis. Our current study verified that SLC4A4 expression was dramatically higher in PCa clinical tissues and cell lines than normal prostate tissues and cells, and increased along with the malignant degree of the tumor. Furthermore, we constructed an SLC4A4 knockdown cell model by lentiviral infection and confirmed the effects of SLC4A4 knockdown on biological behaviours such as proliferation, apoptosis, migration and invasion of PCa cells by Celigo cell counting assay, flow cytometry analysis, wound-healing and Transwell assays. The results indicated that SLC4A4 knockdown inhibited cell proliferation, migration and invasion, while promoting apoptosis. In addition, we constructed an in vivo model of xenografts in nude mice and confirmed that the shSLC4A4 group had a significant inhibitory effect on PCa tumour growth in vivo compared to the shCtrl group, which was also supported by the relatively lower bioluminescence intensity levels and Ki-67 expression in the tumours of the shSLC4A4 group. In short, we demonstrated that knockdown of SLC4A4 could inhibit PCa aggressiveness and progression both in vitro and in vivo. The acidic and hypoxic tumor environment requires intracellular pH regulation to facilitate tumor development. S. J. Gibbons et al. [27] reported that the electrogenic, sodium-coupled bicarbonate cotransporter, isoform 1 (NBCe1), encoded by SLC4A4 gene, was expressed in the subtype of interstitial cells of Cajal (ICCs) in the mouse gastrointestinal tract. Mouse ICCs was in charge of the production of slow electrical waves. Moreover, SLC4A4 transcripts expressed in human gastrointestinal smooth muscle cells and mouse ICCs are modifiable isomers. Scott K. Parks et al. [28] demonstrated that SLC4A4 was conducive to the HCO 3 − transports [29] and tumor cell phenotypes, exerting an important effect on the growth and metastasis of breast and colon carcinoma.
Gao et al. [16] indicated that ADH1B, CLCA4, GCG, ZG16, and SLC4A4 were the top five down-regulated molecules in colorectal cancer and SLC4A4 expression was negatively correlated with the prognosis of colorectal cancer patients by survival analysis. Prognostic predictive model according to age, tumor stage, and SLC4A4 expression showed effective performance in the prediction of overall survival among colorectal cancer patients at 1, 3, and 5 year. However, SLC4A4 has been little studied in prostate tumors. The present study is the first to complement the molecular characterization and functional effects of SLC4A4 in PCa tumorigenesis and makes it possible to formulate future strategies for these potentially significant drug targets. Analysis of clinicopathological data indicated that SLC4A4 was a free-standing prognostic factor of PCa that was meaningfully associated with T Infiltrate, lymphatic metastasis and clinical stage. In other words, high expression of SLC4A4 predicted high malignancy in PCa patients. These results confirmed that the growth, migration and invasion of PCa cells were inhibited in vitro and in vivo after knockdown of SLC4A4. Moreover, our study emphasized that SLC4A4 knockdown induced apoptosis in PCa cells and raised the BAX/BCL-2 ratio, suggesting Fig. 6 The exploration of regulatory mechanism of PCa by SLC4A4 silencing. A Proteins alignment and distribution in the intracellular signalling array. B Protein expression findings in the intracellular signalling array with or without SLC4A4 knockdown. C Histograms of SLC4A4-related signalling molecules in carcinoma cell analyzed by SignaLink 2.0. D Verification of the expression of target proteins. Error bars indicate the mean ± SD of at least three replicate experiments. **P < 0.01, *P < 0.05 that SLC4A4 may have an inhibitory effect on apoptosis. Furthermore, the regulatory functions of SLC4A4 in PCa cells were elucidated by a series of gain-of-function analyses.
AKT is an effector molecule of phosphoinositide 3-kinase (PI3K) in the PI3K/AKT/mTOR signalling pathway [30]. Elevated AKT kinase activity in approximately 40% of patients with breast, prostate and gastric cancers has been reported [31]. The AKT pathway serves as an effective medium of signalling from multiple upstream regulatory proteins (e.g. PTEN, PI3K and receptor tyrosine kinases) to some downstream effectors Fig. 7 The role of AKT pathway in SLC4A4-mediated PCa progression. A Compared to no treatment with SC79, treatment with SC79 of the shSLC4A4 + SC79 group clearly produced the opposite effects on the levels of these proteins. In the presence of SC79, inhibition of SLC4A4 knockdown in DU 145 cells was distinctly reversed, the cell proliferation (B) was clearly elevated, and the apoptosis rate (C-D) was significantly decreased. Error bars indicate the mean ± SD of at least three replicate experiments. ***P < 0.001. SC79, AKT activator such as GSK3β, FOXO and MDM2, and these signalling pathways can intersect with various other surrogate signalling pathways. Genetic and epigenetic transformations in genes involved in the AKT pathway have been demonstrated to activate AKT in cancer [32], and many lncR-NAs can contribute to the over-activation of the AKT signalling pathway through different mechanisms [33,34]. PI3K/AKT/mTOR signalling pathway is one of the vital intracellular signalling pathways that exert a potent effect on essential cellular functions [35]. Activation of the PI3K/AKT signalling pathway has also been reported as an important cancer-promoting pathway that facilitates cell proliferation and blocks cellular apoptosis [36,37]. Our current study is consistent with the findings as mentioned above, which all confirm that SLC4A4 could accelerate PCa progression through regulating the AKT pathway.
Some limitations of this study exist, such as the insufficient number of clinical specimens. Besides, prognostic implications of SLC4A4 in PCa and effects of SLC4A4 on different PCa cell lines need to be further investigated. The SLC4A4/NBCe1 has five multiple splice variants, in which expression of the B splice variant in mouse kidney cortical proximal tubule has been presented [38,39]. Despite the need to better understand PCa progression, the functional mechanisms of SLC4A4 in alternative splicing remain largely elusive [40][41][42]. The specific downstream genes and regulating mechanisms should be investigated and validated in the future.
Conclusion
In conclusion, the present study makes the first attempt to link SLC4A4 to human PCa progression. SLC4A4 could be an oncogene to predict tumour malignancy and survival in PCa patients. All these results of this study identified that SLC4A4 knockdown inhibited the occurrence and progression in PCa. SLC4A4 acts as a tumor promotor that accelerates tumor growth, inhibits apoptosis and arrests cell cycle progression among PCa by regulating key elements of the AKT pathway. Thus, SLC4A4 is a promising potential therapeutic target in the treatment of PCa. Fig. 8 The role of AKT pathway in SLC4A4-mediated PCa progression. In the presence of SC79, inhibition of SLC4A4 knockdown in DU 145 cells was distinctly reversed. A Scratch test was conducted to detect the migration abilities of cells treated with SC79. The wounds were photographed at a magnification of 100 × . B Transwell assay was employed to evaluate the invasion abilities of cells treated with SC79 at a magnification of 200 × . Error bars indicate the mean ± SD of at least three replicate experiments. ***P < 0.001, **P < 0.01 | 6,839 | 2022-03-19T00:00:00.000 | [
"Biology",
"Medicine",
"Chemistry"
] |
Resonant Gas Sensing in the Terahertz Spectral Range Using Two-Wire Phase-Shifted Waveguide Bragg Gratings
The development of low-cost sensing devices with high compactness, flexibility, and robustness is of significance for practical applications of optical gas sensing. In this work, we propose a waveguide-based resonant gas sensor operating in the terahertz frequency band. It features micro-encapsulated two-wire plasmonic waveguides and a phase-shifted waveguide Bragg grating (WBG). The modular semi-sealed structure ensures the controllable and efficient interaction between terahertz radiation and gaseous analytes of small quantities. WBG built by superimposing periodical features on one wire shows high reflection and a low transmission coefficient within the grating stopband. Phase-shifted grating is developed by inserting a Fabry–Perot cavity in the form of a straight waveguide section inside the uniform gratings. Its spectral response is optimized for sensing by tailoring the cavity length and the number of grating periods. Gas sensor operating around 140 GHz, featuring a sensitivity of 144 GHz/RIU to the variation in the gas refractive index, with resolution of 7 × 10−5 RIU, is developed. In proof-of-concept experiments, gas sensing was demonstrated by monitoring the real-time spectral response of the phase-shifted grating to glycerol vapor flowing through its sealed cavity. We believe that the phase-shifted grating-based terahertz resonant gas sensor can open new opportunities in the monitoring of gaseous analytes.
Introduction
An increasing demand for the monitoring of air quality has promoted the development of high-performance gas sensing devices operating on various chemical and physical principles such as optical, calorimetric, chromatographic, acoustic, as well as electrochemical [1][2][3][4][5].Among those, optical sensors exhibit unique advantages by being immune to electromagnetic interferences, free of external power supply, capable of operating in harsh environments, and allowing multiplexed remote sensing [6][7][8].Furthermore, for various gaseous analytes (e.g., gases, vapors, aerosols), the terahertz band is abundant with spectral fingerprints [9][10][11][12], thus opening new opportunities in optical gas sensing.As a complementary technique to the well-established infrared spectroscopy that probes electronic transitions in molecules [13], THz spectroscopy rather probes molecular vibrations, which are particularly pronounced in the gas phase [14].Additionally, to handle the submillimeter radiation, THz optics are usually much larger than infrared ones, thus enabling novel designs (e.g., integrate with gas cell) and fabrication techniques (e.g., additive manufacturing) of gas sensing devices.However, a significant challenge for gas sensing, particularly at low analyte concentrations, is the weak signal, which prompts the use of long straight gas cells [15,16] or circular multi-pass cells [17,18] to obtain the measurable absorption, thus resulting in large and cumbersome gas sensor systems.
It is, therefore, important to investigate the integrated resonant structures, particularly in the THz band, capable of reducing the size of sensor systems, compared to the freespace systems, without sacrificing sensitivity.One way to achieve this is by using hollow core waveguides filled with gaseous analytes to perform broadband molecular vibration absorption spectroscopy.Such waveguides operate using various guidance principles (e.g., ARROW, bandgap, plasmonic) and offer high field-analyte overlap [19][20][21][22] while occupying much smaller volumes (e.g., coiled hollow core fibers [23,24]) than free-space gas cells.They are predominantly used to monitor the frequency-dependent imaginary part (loss) of the analyte Refractive Index (RI).Therefore, for chemical species identification and component differentiation, one usually resorts to the costly THz optical sources supporting stable and broadband operation.
Alternatively, a THz waveguide-based sensor of relatively short length can be designed using various resonant elements in their structures (e.g., Bragg gratings, asymmetric directional couples, integrated Fabry-Perot resonant cavities, and coherent scattering elements [25][26][27][28][29][30][31]).Due to the low bandwidth nature of resonant devices, one can then monitor the gaseous analyte RI (mostly its real part) by tracking the spectral position of various singularities using cost-effective THz sources (e.g., resonant tunneling diodes).
Although high sensitivities are readily achievable by both one-dimensional (e.g., photonic crystal cavity on silicon wafer [26]) and two-dimensional resonators (e.g., pillar arrays [29]), it is noted that for most reported optical sensors, the gaseous analyte delivery infrastructure comes as an afterthought.In contrast, in this work, this crucial component is co-engineered with optical ones, thus ensuring the independent efficient operation of both with minimal mutual intrusion for gas sensing.This subtly integrated structure outperforms the conventional open-structured sensors in terms of compactness and performance stability.Particularly, by removing the employment of external gas cells, the proposed sensor is especially suitable for monitoring small quantities of gaseous analytes.
In this work, we propose a real-time resonant THz gas sensor based on phase-shifted waveguide Bragg grating (WBG).At the core of this device is a broadband two-wire plasmonic waveguide formed by metalizing polymer cylinders that are encapsulated within a closed polymer cage.The gaseous analyte flows inside the cage and in the air gap of a two-wire plasmonic waveguide.WBG is formed by a periodic conical pattern imprinted onto one of the cylinders of a two-wire waveguide and is optimized to feature a spectrally broad stopband.Finally, the phase-shifted grating is formed by inserting a Fabry-Perot cavity in the form of a uniform waveguide section in the middle of WBG.The cavity length and the number of grating periods should be chosen to support a single spectrally narrow transmission peak within a broad WBG stopband.The THz spectral response of phase-shifted gratings is then studied for different lengths of a cavity and different refractive indices of gaseous analyte that are filling the semi-sealed cavity.By tracking the position of the transmission peak, our sensor sensitivity near 0.14 THz is found to be ~14.5 GHz/mm for changes in the cavity length, and ~144 GHz/RIU for changes in the analyte RI (real part).A theoretical sensing resolution of ~7 × 10 −5 RIU is estimated from the 10 MHz resolution of our spectrometer.Finally, using a continuous-wave (CW) THz spectroscopy system, we experimentally demonstrate the real-time detection of glycerol vapors from an electronic cigarette as an analyte.Namely, when replacing dry air with glycerol vapor in the cavity of a phase-shifted grating module, a shift in the sensor resonant frequency (transmission peak) of ~50 MHz reveals an RI difference of ~3.5 × 10 −4 RIU.
Different from the most reported optical gas sensors whose delicate structures are realized using costly infrastructures (e.g., femtosecond laser and deep reactive ion etchers), the proposed gas sensor on a centimeter-scale THz waveguide can be rapidly manufactured using the emerging 3D printing technology with precision and robustness.Owing to the ubiquitous availability of hardware as well as the compact modular design that integrates various crucial elements, we believe that this sensor confronts a lower threshold for entering into production and less challenging engineering problems for operation in practical applications.
Two-Wire Waveguide Bragg Gratings
Unlike the conventional two-wire metallic waveguides [32,33], the two-wire waveguides used in this work and detailed in [34] feature a modular design with the wires in the form of metalized polymer cylinders encapsulated within a polymer enclosure (see Figure 1a).Such a micro-encapsulated design circumvents the intrinsic engineering defect of conventional one in alignment, and promises mechanical stable, cost-effective, and highly reconfigurable THz optical circuits for various applications (the comparison of transmission spectra is shown in Figure 2d in [34]).The waveguide cross-sectional design, including the wire diameter, the air gap size, as well as the topography of the enclosure, were carefully tailored to ensure the featureless transmission spectra with low insertion loss for a several-centimeter-long waveguide around 140 GHz.Such a design eliminates the presence of spectral ripples and enables distinct measured transmission spectra using THz spectroscopy, thus facilitating the signal identification for gas sensing.
PEER REVIEW 3 of 9
Two-Wire Waveguide Bragg Gratings
Unlike the conventional two-wire metallic waveguides [32,33], the two-wire waveguides used in this work and detailed in [34] feature a modular design with the wires in the form of metalized polymer cylinders encapsulated within a polymer enclosure (see Figure 1a).Such a micro-encapsulated design circumvents the intrinsic engineering defect of conventional one in alignment, and promises mechanical stable, cost-effective, and highly reconfigurable THz optical circuits for various applications (the comparison of transmission spectra is shown in Figure 2d in [34]).The waveguide cross-sectional design, including the wire diameter, the air gap size, as well as the topography of the enclosure, were carefully tailored to ensure the featureless transmission spectra with low insertion loss for a several-centimeter-long waveguide around 140 GHz.Such a design eliminates the presence of spectral ripples and enables distinct measured transmission spectra using THz spectroscopy, thus facilitating the signal identification for gas sensing.Additionally, the integration of the plasmonic terahertz waveguide and semi-sealed cavity promises the controllable interaction between the supported THz surface plasmon polariton wave and the gaseous analyte flowing through.However, as refractive indices (real part) for most gases are close to one, it is challenging to detect the difference between them, thus necessitating the use of long interaction distances (long gas cells) to accumulate sufficient phase differential between different analytes.In contrast, by using resonant devices like a Fabry-Perot cavity (in this work: realized in the form of a phase-shifted WBG), we can fold the optical path to realize much smaller devices.
Experimentally, we find that the two-wire WBGs featuring a sequence of end-to-end connected truncated cones on one wire was an optimal design that can be printed reliably with high precision and without supports, using a tabletop stereolithography 3D printer (see Appendix A for details in fabrication).In principle, one can further increase the grating strength (stopband bandwidth) by using other geometries such as deep rectangular Additionally, the integration of the plasmonic terahertz waveguide and semi-sealed cavity promises the controllable interaction between the supported THz surface plasmon polariton wave and the gaseous analyte flowing through.However, as refractive indices (real part) for most gases are close to one, it is challenging to detect the difference between them, thus necessitating the use of long interaction distances (long gas cells) to accumulate sufficient phase differential between different analytes.In contrast, by using resonant devices like a Fabry-Perot cavity (in this work: realized in the form of a phase-shifted WBG), we can fold the optical path to realize much smaller devices.
Experimentally, we find that the two-wire WBGs featuring a sequence of end-to-end connected truncated cones on one wire was an optimal design that can be printed reliably with high precision and without supports, using a tabletop stereolithography 3D printer (see Appendix A for details in fabrication).In principle, one can further increase the grating strength (stopband bandwidth) by using other geometries such as deep rectangular grooves on both wires.However, it is noted that realizing such designs is challenging due to microstructure deformation induced by the intrinsic cure-through defect of 3D printing and the difficulty of aligning such structures [35].
Specifically, the UV radiation in each exposure not only cures the resin within the top printed layer, but also leaks through the cured layer and solidifies some resin on the other side.Therefore, the resultant cumulative deformation has to be taken into consideration for the grating structure design, as it becomes explicit for prints where geometry changes rapidly from one layer to another.Additionally, the two-wire waveguide components were manually assembled from two complementary 3D-printed parts.When subwavelength features are superimposed on both parts, the postprocessing facet-polishing step can easily lead to their misalignments in practice.Furthermore, the optimal truncated ridge height was found to be ~0.2 mm, enabling a large bandwidth of the stopband, manageable loss in the passband, as well as the reproducible optical performance of printed WBGs (see Figure 1b).
For a stopband center frequency of ~140 GHz, the period of WBGs is found to be Λ = 1.03 mm.The transmission and reflection spectra for the 2.5 cm long WBGs containing N WBG = 10, 14, 18 periods are shown in Figure 1c, with numerical transmission and reflection coefficients in the vicinity of the stopband center frequency reaching <0.1 and >0.75 values, respectively, when the number of periods is over 14.The linear dependence of the stopband center frequency on the grating period Λ is shown in Figure 1d for a 14-period structure, with a slope of 131 GHz/mm.Experimentally, the transmission measurements were conducted using a CW-THz spectroscopy system (see Appendix A for details in characterization), and the spectral response of the 3D printed THz WBGs within the grating stopband agrees well with numerical simulation, as seen in Figure 1c.A minimal transmission coefficient of ~0.08 was found for the ~16 GHz wide grating stopband of a 14-period WBG.
Next, we realize a narrow transmission window within the WBG stopband by incorporating a Fabry-Perot cavity, which is a two-wire waveguide section with a length of L F-P = 2.75 mm, between two WBG reflectors.The resonance in the Fabry-Perot cavity results in the presence of transmission peaks within the WBG stopband.Experimentally, we find that a 14-period phase-shifted WBG shown in Figure 2a results in a superior performance in terms of the transmission spectra for gas sensing.It is worth noting that the elongation of the grating leads to a narrower transmission peak (~2 GHz bandwidth for a phase-shifted WBG containing 18 periods), but comes at the cost of deteriorated transmission peak intensity (~0.1 transmission coefficient difference between the resonant frequency and other frequencies within the grating stopband), thus posing challenges in identifying the desired transmission peak.Additionally, in a numerical simulation, the bandwidth of the exclusive transmission peak decreases from ~4.7 GHz to ~3.6 GHz when the waveguide length increases from ~0.5Λ to ~2.5Λ.Further reduction in bandwidth by extending the waveguide section is infeasible due to the appearance of multiple spectral singularities within the grating stopband, while the spectral position of the transmission peak with a basically unaffected bandwidth moves toward a lower frequency when the F-P cavity length slightly increases (see Figure 3a).
Because of the standing waves formed inside of the photomixer silicon lenses and free-space cavities of the CW-THz spectroscopy setup, parasite ripples are superimposed on measured transmission spectra [36], posing challenges in identifying the transmission peak of phase-shifted WBGs from the experimental data.To simplify the task, we identify the resonant peak position by subtracting the transmission spectrum of a uniform WBG from the spectra of the phase-shifted WBGs (see Figure 3b).A good correspondence between experiment and theory is found for the spectral position of a transmission peak as a function of the cavity length, with an exception of a small systematic frequency shift of ~2 GHz as seen in Figure 3c.We believe that this consistent discrepancy is mainly attributed to the structural nonuniformity of experimental gratings, which results in the longer equivalent F-P cavity compared with that of the ideal numerical model.Both in theory and experiment, the dependence is linear, with a slope of ~14.5 GHz/mm.and other frequencies within the grating stopband), thus posing challenges in ide the desired transmission peak.Additionally, in a numerical simulation, the band the exclusive transmission peak decreases from ~4.7 GHz to ~3.6 GHz when th guide length increases from ~0.5Λ to ~2.5Λ.Further reduction in bandwidth by ex the waveguide section is infeasible due to the appearance of multiple spectral sing within the grating stopband, while the spectral position of the transmission pea basically unaffected bandwidth moves toward a lower frequency when the Flength slightly increases (see Figure 3a).Because of the standing waves formed inside of the photomixer silicon lenses and free-space cavities of the CW-THz spectroscopy setup, parasite ripples are superimposed on measured transmission spectra [36], posing challenges in identifying the transmission peak of phase-shifted WBGs from the experimental data.To simplify the task, we identify the resonant peak position by subtracting the transmission spectrum of a uniform WBG from the spectra of the phase-shifted WBGs (see Figure 3b).A good correspondence between experiment and theory is found for the spectral position of a transmission peak as a function of the cavity length, with an exception of a small systematic frequency shift of ~2 GHz as seen in Figure 3c.We believe that this consistent discrepancy is mainly attributed to the structural nonuniformity of experimental gratings, which results in the longer equivalent F-P cavity compared with that of the ideal numerical model.Both in theory and experiment, the dependence is linear, with a slope of ~14.5 GHz/mm.
Two-Wire Waveguide-Based Resonant Gas Sensor
Finally, we demonstrate real-time THz gas sensing based on our thus-designed phaseshifted WBG.A 2.5 cm long phase-shifted grating module containing a cavity of LF-P = 2.75
Two-Wire Waveguide-Based Resonant Gas Sensor
Finally, we demonstrate real-time THz gas sensing based on our thus-designed phase-shifted WBG.A 2.5 cm long phase-shifted grating module containing a cavity of L F-P = 2.75 mm in the middle of 14-period gratings with Λ = 1.03 mm was sealed on both ends with polyethylene film (α < 0.01 cm −1 for a lower-terahertz band) with a thickness of tens of micrometers.In experiments, the addition of such a THz transparent material led to negligible changes in the transmission spectra of this module.To couple with the free-space THz beam for characterization, this module was placed between two 3 cm long featureless two-wire waveguide sections which support broadband operation.The assembled waveguide component was then fitted with conical horn antenna and placed inside the THz spectroscopy setup (see Figure 4).It is noted that three through holes were drilled on the side wall of the enclosure of the phase-shifted grating module for gaseous analyte delivery.
changes in the transmission spectra of this module.To couple with the free-space THz b for characterization, this module was placed between two 3 cm long featureless twowaveguide sections which support broadband operation.The assembled waveguide ponent was then fitted with conical horn antenna and placed inside the THz spectros setup (see Figure 4).It is noted that three through holes were drilled on the side wall o enclosure of the phase-shifted grating module for gaseous analyte delivery.Glycerol is one of the main ingredients of vaping liquid, to which nicotine and flavors are added.The gas mixture generated by electronic cigarettes is notoriously harmful to human health.Specifically, glycerol aerosol alone has been shown to have an impact on the liver and energy metabolism [37].Therefore, detecting glycerol vapor in air is of practical significance in health management, which was demonstrated by the proposed sensor in this work.In experiments, glycerol vapor generated by an electronic cigarette was introduced into the 0.6 mL volume flow cell through the inlet in the middle of a cell with a constant flow rate of ~20 mL/s, while the waste vapor was removed from the two ends of a flow cell through the outlets for waste treatment.The well-designed location of inlet and outlet openings as well as the short voiding time allow the cavity to completely replace its filled gas in sub seconds, enabling the real-time monitoring of gas RI changes.
The numerical simulations of the independent phase-shifted grating module predict that the spectral position of a transmission peak is linear with the gaseous analyte RI with the corresponding sensitivity of 144 GHz/RIU (see Figure 5a).Given the 10 MHz resolution of our CW-THz spectrometer, the theoretical resolution of our sensor is then estimated to be 7 × 10 −5 RIU, which is as much as an order of magnitude lower than the RI difference between most common gases (e.g., the difference is on the level of 10 −3 to 10 −4 RIU) [38].In experiments, the transmission spectra of a phase-shifted WBG with an empty cavity, the cavity with dry airflow, and the cavity with glycerol vapor flow were measured subsequently.In dynamic measurements covering the spectral range of a transmission peak, the scanning time for a single data point was ~10 s to alleviate the impact of the inherent latency of a CW spectroscopy system using lock-in acquisition, and to ensure fine spectra with 10 MHz resolution.The center position of a transmission peak was found by first fitting a data cloud of the normalized phase-shifted WBG transmission spectra within the grating stopband using smooth Lorentzian lineshapes, and then finding the spectral position of the fit maximum υ center , similarly to what is shown in Figure 3b.
A typical sensor readout is presented in Figure 5b from which we see that the spectral position of the transmission peak is relatively stable in continuously recorded transmission spectra of the same analyte.Additionally, for an empty cell or a cell with a flow of dry air, the position of the transmission maximum also remains practically unchanged, indicating the immunity of the proposed sensor to changes in gas flow rate.At the same time, when introducing the glycerol vapor, the transmission peak shifts by ~50 MHz, which corresponds to the RI change of ~3.5 × 10 −4 compared to that of dry air.Highly consistent experimental results were obtained in each measurement of this sensor.Owing to its compact integrated structure and insensitivity to the environment change, this twowire waveguide-based sensor can find its practical applications in gas sensing by simply replacing the external infrastructure for gas delivery (see the setup out of the black dotted region in Figure 4).For instance, one can detect the concentration of explosive or toxic gas flowing in pipelines or dispersed in the air remotely in petrochemical industry.
subsequently.In dynamic measurements covering the spectral range of a transmission peak, the scanning time for a single data point was ~10 s to alleviate the impact of the inherent latency of a CW spectroscopy system using lock-in acquisition, and to ensure fine spectra with 10 MHz resolution.The center position of a transmission peak was found by first fitting a data cloud of the normalized phase-shifted WBG transmission spectra within the grating stopband using smooth Lorentzian lineshapes, A typical sensor readout is presented in Figure 5b from which we see that the spectral position of the transmission peak is relatively stable in continuously recorded transmission spectra of the same analyte.Additionally, for an empty cell or a cell with a flow of
Discussion
In this work, we propose micro-encapsulated two-wire plasmonic waveguide-based phase-shifted Bragg gratings and demonstrate their applications in real-time THz gas sensing.End-to-end connected truncated cones with a ridge height of ~0.2 mm superposed on one of the two wires were chosen as an optimal WBG design.Low transmission and high reflection coefficients were found within the ~16 GHz wide stopband of such WBGs.Phaseshifted WBG featuring a Fabry-Perot cavity was then developed by placing a uniform waveguide section in the center of a WBG.A single narrow transmission peak of ~3.6 GHz (HFWM) bandwidth in the middle of a WBG stopband was realized by using a ~2.75 mm long cavity flanked on both sides by two seven-period WBGs with a Q-factor of ~39.The theoretical sensitivity of the peak spectral position to changes in the RI of gaseous analytes inside the 2.5 cm long phase-shifted WBG is estimated to be 144 GHz/RIU.The response of our sensor to glycerol vapor flow at low concentrations was then verified in a proof-ofconcept time-resolved experiment, which reliably detected the displacement of dry air by glycerol vapor with a resultant RI change of ~3.5 × 10 −4 RIU.
For future work, we note that higher sensitivity sensor designs are readily achievable by moving the sensor operational frequency to higher frequencies [39], while also increasing the number of periods in the WBG to reduce the spectral width of a transmission peak.The long-term stability of the proposed sensor also needs to be characterized and further optimized for practical applications.Additionally, considering the modular and reconfigurable design of micro-encapsulated two-wire waveguide components, the sensing of selectivity is readily available via collaboration with THz waveguide-based spectroscopy [19] for various monitoring applications of gaseous analytes such as trace gas analysis and detection [40].
Figure 1 .
Figure 1.Micro-encapsulated two-wire waveguide and WBG fabricated using stereolithography and wet chemistry deposition.(a) Schematic of an encapsulated two-wire waveguide.(b) The twowire WBG features a sequence of end-to-end connected truncated cones written on one of the two wires.(c) Transmission and reflection spectra of WBGs featuring a different number of periods, Λ = 1.03 mm.(d) Numerical transmission spectra of WBGs for different period lengths, NWBG = 14.Inset: The center frequency of a WBG stopband as a function of its period length.
Figure 1 .
Figure 1.Micro-encapsulated two-wire waveguide and WBG fabricated using stereolithography and wet chemistry deposition.(a) Schematic of an encapsulated two-wire waveguide.(b) The two-wire WBG features a sequence of end-to-end connected truncated cones written on one of the two wires.(c) Transmission and reflection spectra of WBGs featuring a different number of periods, Λ = 1.03 mm.(d) Numerical transmission spectra of WBGs for different period lengths, N WBG = 14.Inset: The center frequency of a WBG stopband as a function of its period length.
Figure 2 .
Figure 2. Phase-shifted waveguide Bragg grating.(a) Schematic and photo of a two-wire wa based phase-shifted WBG.(b) Numerical transmission and reflection spectra of a phas WBG as a function of the number of periods, Λ = 1.03 mm and LF-P = 2.75 mm.
Figure 2 . 9 Figure 3 .
Figure 2. Phase-shifted waveguide Bragg grating.(a) Schematic and photo of a two-wire waveguidebased phase-shifted WBG.(b) Numerical transmission and reflection spectra of a phase-shifted WBG as a function of the number of periods, Λ = 1.03 mm and L F-P = 2.75 mm.R PEER REVIEW 5 of 9
Figure 3 .
Figure 3. Spectral response of phase-shifted WBGs for various cavity lengths with N WBG = 14 and Λ = 1.03 mm.(a) Numerical transmission spectra and (b) experimental normalized transmission spectra of phase-shifted WBGs.Inset: the transmission spectra of phase-shifted WBGs and a uniform WBG.(c) The spectral position of the transmission peak within the WBG stopband as a function of the cavity length.
Figure 4 .
Figure 4. Schematic of the experimental setup to fill the cavity hosting two metalized wires glycerol vapor.
Figure 4 .
Figure 4. Schematic of the experimental setup to fill the cavity hosting two metalized wires with glycerol vapor.
finding the spectral position of the fit maximum υcenter, similarly to what is shown in Figure 3b.
Figure 5 .
Figure 5.The spectral response of a phase-shifted WBG with gaseous analytes of different RIs in the cavity.(a) Numerical transmission spectrum of a phase-shifted WBG in the vicinity of a resonant peak for different values of gaseous analyte RI.Inset: The spectral position of the transmission peak as a function of the analyte RI.A slope of ~144 GHz/RIU can be found in the linear fit (red line).(b) Experimental time dependence of the spectral position of the transmission peak.Its variation can be found in the red dotted line.
Figure 5 .
Figure 5.The spectral response of a phase-shifted WBG with gaseous analytes of different RIs in the cavity.(a) Numerical transmission spectrum of a phase-shifted WBG in the vicinity of a resonant peak for different values of gaseous analyte RI.Inset: spectral position of the transmission peak as a function of the analyte RI.A slope of ~144 GHz/RIU can be found in the linear fit (red line).(b) Experimental time dependence of the spectral position of the transmission peak.Its variation can be found in the red dotted line. | 6,174 | 2023-10-01T00:00:00.000 | [
"Physics",
"Engineering",
"Chemistry"
] |
Ethical framework of assistive devices: review and reflection
The population of ageing is growing significantly over the world, and there is an emerging demand for better healthcare services and more care centres. Innovations of Information and Communication Technology has resulted in development of various types of assistive robots to fulfil elderly’s needs and independency, whilst carrying out daily routine tasks. This makes it vital to have a clear understanding of elderly’s needs and expectations from assistive robots. This paper addresses current ethical issues to understand elderly’s prime needs. Also, we consider other general ethics with the purpose of applying these theories to form a proper ethics framework. In the ethics framework, the ethical concerns of senior citizens will be prioritized to satisfy elderly’s needs and also to diminish related expenses to healthcare services.
Introduction
Ethnographic reports present that ageing population is growing significantly all over the world [1,2]. This increase gives rise to particular needs of elderly people [3][4][5]. Moreover, population increase leads to substantial issues such as shortage in medical centres, healthcare services, and medical professionals [6] and burdens of enormous amount of healthcare expenses [7]. Recently, there have been noticeable technological innovations in Information and Communication Technology (ICT). These developments have resulted in creation of various types of assistive medical robots such as RIBA, paro-robot, telerobot, and remote presence robot [7], assistive devices, home automation systems, and canes [8][9][10][11]. Assistive devices and robots are developed with the purpose of fulfilling elderly's needs and expectations, compensating their disabilities, boosting their life quality, providing assistance to carry out task(s), whilst maintaining their autonomy [7,12]. Research studies revealing the primary needs of older adults are listed in Table 1 [13,14]. Medical assistive robots including walking devices can be adopted by elderly if they prove to be useful, reliable, efficient, effective, and also easy to utilize [15,16].
Paper organization
This paper is organized as follows: "Theories of Ethics" section introduces a summary of literature of general ethics, human rights and values; "Ethical Issues of Assistive Medical Robots" section addresses existing ethical issues related to assistive medical robots; "Discussion and Conclusion", summarizes the important role of ethical framework on both assistive medical robots and walking devices
Theories of ethics
Under this section, general ethics theories as well as human rights and values are described.
theories can be applied in design and use of medical robots like walking devices. The prime example is the robot's program where the program's codes are written based on ethics general theories, whilst taking ethics concerns into consideration. This section of the paper describes three (3) relevant general theories of ethics and also bases of ethics concerns.
Deontology
The word "deontology" is taken from two (2) Greek words which are duty and study. Deontology ethics, which is established by Immanuel Kent, is known as non-consequentialist or duty based [17]. According to this theory, individuals are enforced morally to perform or take actions in accordance with series of principles as well as rules without considering the outputs of taken actions [18]. This theory mainly considers rightness and wrongness of an action itself rather than focusing on its consequences and outputs [19]. In accordance with Sullivan [20], this is the first ethics theory prioritizing decision-making to a person. Moreover, [21] stated that in a moral action of an individual, feelings and incentive refuse to play a significant role. Therefore, incentive for taking an action is based on obligation before the action takes place [17]. In other words, in accordance with this theory, in spite of destructive consequences, individuals are required to take right actions which are based on rules [22].
The prime example of applying duty-based theory in assistive medical robots is giving medicine to an older adult. If a senior citizen requests a painkiller from his/ her assistive medical robot, in spite of being allergic to the painkiller, the medical agent is required to follow rules and to provide the medication to older adult. It is evident that medical robot action triggers older adult's health condition. In contrast, in other general theories of ethics such as consequentialism, the action of medical robot endangering older adult's well-being declines to be accepted; therefore, assistive agent is required to provide another solution to relive older adult's discomfort.
Virtue
This ethics virtue is recognized as character-based ethics which is far towards individual based rather than action based. Virtue ethics is recognized a character-based ethics highlighting an individual's right action in all the same circumstances [23]. This theory emphasizes on virtue and moral character of a person carrying out an action rather than considering action's consequences or ethical rule [24]. The concern of this ethics theory is not only focusing on rightness or wrongness of an individual's action but also offers a number of behaviours.
Virtue theory is beneficial if an individual incline to assess another individual character rather than goodness or badness of a particular action. In this theory, individuals are required to have series of characteristics for virtuousness [25].
Character-based theory sporadically tends to deontology ethics theory, whilst it is contrary to consequentialism ethics theory. The prime example is helping needy: based on consequentialism theory, helping needy improves well-being. On the other hand, deontology theory says that helping needy is in accordance with moral rule, whilst virtue theory argues that this kind of assistance is a character of generosity.
Consequentialism
This ethics theory is known as result-based theory which highlights two (2) primary principles. The first one states that rightness or wrongness of an action is based on its result and potential consequences. The second concept indicates that when the result of an action has greater consequences, that action is considered as a more right action [26]. In accordance with consequentialism, an action is favourable if its consequences refuse to produce harmful consequences.
Hedonism and utilitarianism are two forms of consequentialism ethics theory. Hedonism indicates that it is necessary for individuals to ameliorate human, whilst utilitarianism states that it is essential for individuals to enhance human health. In addition, another form of consequentialism states that individuals are required to improve their preferences satisfaction and happiness.
It is stated by Cummiskey [27] that in result-based theory a murder is considered right if its consequences produce good result. In other words, if a murderer inclines to kill a group of innocent individuals, it is accepted based on consequentialism to kill the murderer to save the victim's lives. In contrast, based on both deontology and virtue theories, in spite of victims death, killing the murderer is wrong [28].
Human rights and values
Human rights related to senior citizens consist of the right to a standard of living which is sufficient for health Table 1 Primary needs of elderly
Primary needs of elderly
To stay in their own places, whilst keeping their independency and quality of their lives safely To grow medical professionals and doctors attention towards elderly's well-being To have control on their own life during course of emergency requiring assistance To be motivated to take part in community life with the purpose of alleviating negative feelings, namely social isolation and welfare, freedom from discrimination, inhuman and torture or humiliating treatment, and private and family life. A focus on human rights provides support to highlight that physical and psychological well-being of older adults is as significant as the well-being of other member of society. Therefore, it is substantial to make sure those assistive medical robots embedded into older adults lives aim at benefiting elderly, and not embedded to diminish care burden on the other people [29]. In addition, it is essential to consider twelve human values which are introduced to technological developments [30].
Ethical issues of assistive medical robots
The debates about the ethical actions of robots date back to 70 years ago [31]. From 1950, when Asimov presented his three laws, [33] there has been arguments about the potency of those particular rules to render robots capable of making ethical decisions independent of human interference. The key argument of Asimov's laws considered the self-directedness of robots. Being autonomous, robots were assumed to have the physical and intellectual capacity to make moral decisions, using the knowledge and rationality which they were equipped with [34]. Asimov's three laws discussed these notions: (1) A robot may not be a source of damage for a human being or, its inactivity expose a human being to harm. (2) A robot must follow human beings' orders except the ones which would confront the first law. (3) A robot should guard itself as long as such defence does not contrast with the other two laws.
Both researchers and science fiction writers have expressed their concerns about a number of ethical issues that daily use of robots has made them possible. However, the robots that we use daily are limited to vacuum cleaners, grass cutters, and robot toys. These are not same to the advanced science fiction robots that are the subject of the recent robotics ethics [32]. Consequently, the ethical concerns related to robots should not be based on empirical data and studies done by users. Instead, taking Asimov's laws as an opening point for ethical debates [35], they need to discuss ethics according to their potential, future application [36].
Robots-exclusive Concerns. Ryan Calo is a law professor who wrote the "Robots and Privacy" chapter in Robot Ethic. He points out that the debated on robots are currently paying attention to ubiquity, and, conceivably, this is not that good [37]. Calo detects three privacy dangers which robots can create: "surveillance", "access to living and working spaces", and "social impact". The anxiety about such an access is exacerbated by the research done by Denning et al. [38]. In this research, the authors explore vulnerable security measures in several toy robots [37].
Certainly, in areas such as robotics, producers need to be very innovative. Current world is witnessing a technological explosion with new possibilities. One argument is that ignoring speculation about future robots and their use can create ethical dilemmas. However, our argument is that it is necessary to adapt a perspective that is in agreement with the experiences resulted from empirical use of robots. This will help to complement the current debate on robot ethics.
A list of ethical issues related to the use of assistive medical robot from older adults' perspective are explained in details in this section and the following section. There are a noticeable number of ethical issues which are stated by senior citizens about the use of assistive medical robots. Amongst the stated issues, there are primary issues which are of significant concern not only to the older adults but also to elderly's family and caretakers, robot designers and developers.
Moreover, trust is a vital element for the formation and preservation of humans' dynamic relationship with assistive robots [39][40][41].
The lack of trust is the main reason that seniors do not wish, do not need, or do not consider robots.
Lack of trust results from some factors [42,40]: • Privacy: how can youth and the elderly leave their privacy in the hands of a robot? • Safety: If a robot is set to undertake physical responsibilities, the physical interaction of human-robot leads to serious challenges. Besides, upgraded methods are necessary to eliminate the failures raised by safety problems and confirm the absence of any unreliable behaviour. • Robustness: despite the circumstances, how the elderly can be convinced about the suitability of the behaviour of a robot? • Security: Affirming that the robot is not harmful for the elderly. • Data protection: how can the elderly be convinced of the safety of the significant data?
The ethical issues of assistive medical agents are listed in sections in below.
Privacy of older adults
Privacy of senior citizens is of paramount importance that it is well in line with other ethical issues such as data protection and security and safety. This ethical issue is of great concern to scholars [43][44][45][46][47][48][49][50][51][52]. This issue has substantial effect on older adults to lose their appeal to adopt smart home technology. The main process of smart home technology includes collection, transmit, distribution, and exchange of elderly's private information. This main process has impact on elderly to refuse smart home technology [43,46,53,54]. Take home healthcare robots as an example; this kind of robot enable medical specialists to keep a wary eye on their patients' well-being in remote places by means of various tools such as camera, ultrasound, and speaker [7].
The process of Ambient Intelligent Technology (AIT) consists of various procedures such as collecting, distributing, and storing full confidential data of user [55].
The key functions of this technology are to keep eyes on robot's user and to combine data from different resources by sensors to obtain the details of circumstance [45]. In the process of data collection, profound, medical, and confidential data of robot user are gathered. In addition, other parties might have access and control to the gathered data; therefore, user's privacy might be abused [55,56].
In addition, home automation system is one of the main ICT devices employed for fall prevention purpose. Home automation system is type of device which is wearable attached to the body of user by means of transparent film and neoprene belt. The primary function of this device is to detect fall incidents through video monitoring [9,10]. It is asserted from various studies that noticeable numbers of senior citizens are of critical concern about their privacy. Consequently, it is far favourable for them if the wearable device captures unclear photographs when they are at personal places such as bedroom. In contrast, it is accepted for elderly if the device takes clear images when they are at other rooms such as living room [57,10]. It is claim that privacy concern slackens older adults' interest towards this kind of devices especially visual surveillance or cameras [58].
Two-way visual contact is a way of communication and connection through webcams and television monitors, though it is not widely used despite its rather cheap price. This allows family members or employed carers to "look in" on older persons and their homes with no need to commute [59]. If older people feel at ease in working with computers, virtual visiting and communication is reasonable and easily established. It is not more difficult than installing and making a Skype account. Even there are virtual visiting systems which are more user-friendly than Skype and operate by connecting to local broadband networks.
Data protection
The ethical issue of data protection is well connected to privacy issue. In the process of home healthcare services, there should be a connection between both medical centre personnel and the place of robot user to provide not only safety services but also social care and daily basis services [49]. In multi-user cases, the intelligent system is in charge of distinguishing different data, namely robot user's private data, caretaker's data, as well as other relevant information to monitor well-being of user [60]. Consequently, it is essential to subject the collected data to act of data protection [61].
The primary function of assistive walking devices including fall detection devices such as home automation systems is to capture image or record video of older adults. The captured images or recorded videos might be inappropriate or unwanted; therefore, these images or video are unfavourable to elderly. Moreover, it is of importance concern to older adults if their personal data, namely images or videos, are accessed and viewed by third parties.
Some assistive robots are used to help in remote sensing and monitor the elderly in variety of locations. These robots are as assistant for those specialists who want to check their patients remotely, mainly in critical situations. They do this by making use of speakers, light, cameras, remote controls, ultrasound, and electronic medical recording accessories [62].
Security and safety
Safety and security ethical issue is well related to privacy concern [63]. It is strongly recommended that there should be a balance amongst the needs of elderly for safety, whilst preserving elderly's privacy and autonomous [29,56,64]. In addition, it is claimed that older adults, their families and caretakers have contrary point of view about privacy, safety and security concerns [56]. A conducted study reveals that family and caretakers of senior citizens are more concern about safety and security rather than privacy and independency concerns [56]. Moreover, although some scholars subscribe to the belief that there should be a balance between privacy and safety concern [29,64], other scholars believe that safety and security of elderly is of dramatic importance [43,[65][66][67].
Regarding security and safety of walking devices, over recent decades, one of the substantial and pricy public health issues is fall incidents and injuries happen to older adults [68][69][70]. It is found that one older adult out of three with the age of sixty-five or above falls yearly resulting in serious injuries which require treatment in medical centres [71][72][73]69]. Although there have been significant developments in fall prevention devices, fall incidents take place with severe consequences such as morbidity and mortality. Injuries resulted from fall incident are ranked number five in terms of causing mortality in ageing group with the age of sixty-five and above [71]. For this reason, safety and security of assistive walking devices are of dramatic concern to senior citizen.
In addition, it is imperative to ensure that walking devices especially ICT ones which function based on human-made programs do not pose fall incidents to elderly on account of negligible errors in their programs. Besides, fall incidents give rise to another ethical issue which is responsibility of fall incidents. It is evident that assistive walking devices including fall detection ones play important role not only in the well-being of older adults but also in occurrence of fall incidents. For this reason, it is dramatic to identify the responsibility of such incidents.
Various types of tasks are made possible by making use of the services offered by autonomous service robots. Samples are taking care of old people at home [74] or accompanying guests in multi-level buildings [81].
Robotic service solutions include the simplest telepresence to the most complex functions to back caregivers. Examples are the Giraff (www.giraff.org) advanced in the ExCITE project [75], AVA (www.irobot.com/ava) and Luna [76], assisting needy persons in their everyday movements (www.aal-domeo.eu), selfmanagement of long-lasting illness [77], comfort and safety as in the cases of Florence [78] and Robo M.D [79], and unification in an environment controlled by smart applications [80]. On the other hand, the number of robotic applications that are dedicated to social services in settings like smart office buildings is very few [81].
Error and safety
Safety of elderly using assistive medical robots is of significant concern to older adults, their family members and caregivers, and robot designers and programmers. The assistive medical agent carries out a task in accordance with program(s) which is written by a range of codes through robot developers. For this reason, a negligible error in robot's program might trigger older adults' wellbeing and might cause fatal and severe consequences [82].
Technological care giving is already realized in most of Western European counties, but the technology that is usually used in this case is not robotic. On the opposite, some of it is no doubt low-tech. The aiding technology that is mostly available for old people in the UK ranges from portable alarms for requesting help; smoke, CO 2 and flood sensors; pillboxes or containers that are designed in a way that let older people take their drug on time; fall sensors are another samples as well [83].
Responsibility
In Ambient Intelligent Technology (AIT), artificial robot and its user interact with each other directly. This interaction amongst them has led to several issues such as responsibility of tasks, designation of control, decrease in human force, and allocation of decision-making [43,45,46,82,84]. In today's world, artificial robots are increasingly and pervasively becoming autonomous which has resulted in diminishing human participation in some actions including decision-making. For this reason, liability of autonomous action is of critical concern [45,85].
Responsibility concerns about robots for older adults
Robots are capable of interacting with human being and the encompassing environment in very intricate ways. The traditional theories of moral responsibility are challenged by social robots. The production of robots results in various ethical questions: what are the possible harmful consequences of such production? What would be the end of key moral concepts such as autonomy and privacy in a time when robots are integrated with human life? Are these robots moral agents? Is it ethical to take them responsible? These ethical issues result from the developing sovereignty of the smart technical products the most remarkable representatives of which are the social robots. Can robots be assumed as socially autonomous responsible confidant agents that care and, meanwhile, perform their duty as technical gadgets?
Whilst most of these concerns are related to other fields of engineering, the capacity of robots to turn into ethical agents puts forth another set of moral questions, such as those related to the rights and responsibilities of robots [86].
People's ideas about the moral concerns introduced by autonomous products like robots very and address various notions such as the application of robots in, for instance, healthcare tasks. These views imply an understanding about the achievements of technology which depends on the ideas about the entity of technology and the relation of mind and matter in human and machine. The main focus of the usual approach of research in robot ethics deals with the robot and its entity and thoughts.
It helps to answer questions about the intelligence and rationality of robots, to see are they "moral agents". Or, it restricts ethical concerns to things that, interactions with robots, might go wrong. For most of philosophers of morality, ethics is related to feeling of responsibility, the appropriateness of some one's actions, and, then, the centrality of questions that consider moral status and action [87]. Usually, moral responsibility is only attributed to creatures that enjoy a tenable levee of moral agencywhat does it mean-and concentrates on the suitability of what that agent performs, has performed, or can perform [88]. To investigate the ethics of robot technology, Coeckelbergh [88] puts forth an approach which centralizes human or interaction. Instead of thinking of a mental philosophy which regards the real entity and thought of robots, it would be better to adopt a philosophy of interaction and seriously consider the ethical importance of exterior form [89].
One of the benefits of the Accompany focus group's discussions was the agreement that for monitoring the programming of robots, it is necessary to consider the communication of the older person who lived with a robot, with other organizations of formal and informal carers, instead of basically gratifying an aged person's desires. Still, the data also propose that, at least, one approach-the "let's do it together" strategy-may itself destabilize sovereignty by (unintentionally, perhaps) treating the older persons like children [83]. A robot would be considered as a social one when it takes responsibility, not when it is assigned with responsibility.
Human responsibility and robot responsibility
Robots have the power and ability of interacting with human being and human context in complicated ways. Robotics and making robots bring to the fore variety of applied ethical questions. Following introduces some of them: what are the potential risky consequences of making these robots? What autonomy and privacy concerns will be raised when robots turn to be an inseparable part of human life? Whilst most of these concerns are expressed in relation to other fields of engineering, the capability of robots to act as ethical entities introduces some other moral concerns, amongst them the right and responsibilities of robots?
The ethical issues have different layers that need to be discussed. The most central concerns deal with the responsibilities of robots [90][91][92] and human beings [92,93].
There is a question shared by many people who are worried about this matter: who is responsible for the mistakes committed by robots? In cases that a robot does not pass the limits of autonomous function, a minimum level of the product liability is assumable. Given that robots follow the plan and procedure decided by some persons or companies, those people or companies are clearly responsible for failure (barring misuse). In the cases that robots are equipped with the accessories to be programmed by customers, the realm of liability will be clear. Still, in semi-autonomous robots such as self-driving cars, the concept of liability would be complicated, particularly when an accident happens in the cases of cooperation between robot and human agent.
In the cases that robot is autonomous, responsibility will be considered entirely as that of robot. It means that the robot is not under the direct influence of programs, programmers, or operators [94].
Equal right for use of robot
It is found that one of the noticeable issues in robot ethics is having equal access to assistive medical robots. There have been a great number of debates surrounding this issue to consider whether it is affordable for every individual or particular group of individuals over the world to utilize and benefit from AIT or not [45,95]. It is stated that unequal access to robots and healthcare systems might result injustice [47].
One of the ethical chief issues is having unequal access to assistive walking devices. In other words, it is injustice that particular groups of older adults because of different factors such as being from third-world countries do not benefit from assistive walking tools. In addition, it is pointed out that a noticeable number of senior citizens are strongly concern about the cost and also maintenance expenses of assistive walking device. Consequently, this factor slackens their interest towards use of walking devices [10,58,96,97].
Social impact
In some cases, use of assistive medical robots instead of weakening negative impacts, it strengthens the adverse effects such as social isolation which results in reducing social interaction [29,82,98]. The result of conducted research studies reveals that assistive robots such as telecare decline social communication [99]. In addition, Chan et al. [49] believed that smart home technology affects human's relationship and communication with others owing to decreasing interaction between robot users and their caretakers.
It is found that albeit assistive walking devices such as wheeled walker compensate elderly's disabilities in moving, yet there is a gap for amelioration to diminish fall incidents, whilst improving elderly's appearance in public [8]. It is asserted that older adults encounter difficulties indoor and outdoor when they employ wheeled walkers. These issues take place when they move in curve, uphill, downhill, over obstacle(s), passing a door, on uneven ground, and carrying an object. In addition, mentioned issues might pose fall incidents to elderly. These issues might have negative effect on older adults' morality and make them to feel embraced to carry out outdoor activities such as visiting medical doctors, using public transportation, and visiting family members or friend [8].
Technology development
Over the past decades, there has been an abrupt development in technology. This has created hardship for technology users specifically older adults to learn and cope with new modern technology and systems. It is pointed out by Weiser and Brown [100] that it is significant for computer technology to be invisible when assisting users. In other words, technology users are not required to gain knowledge about technology. However, it is said that it is essential for technology users to be aware of advantages and disadvantages of technology's role in their lives [101].
Apart from the mentioned ethical issues, there is another significant issue which the authors of this paper believe that it is essential to take this ethical issue into consideration and embed it in ethical framework of assistive medical robots. This issue is related to robots users' feelings towards assistive robots. It is claimed that direct interaction between robots and individuals poses social isolation; therefore, this may influence robot users to have human feeling, namely love towards assistive robots. For this reason, it is important to consider appropriate standards in behaviours of robots to handle this issue.
Recently, there have been substantial technological developments in assistive walking devices. Some researchers believe that older adults are novice users; therefore, they prefer simple functions. Besides, older adults' behaviour is towards emergency situation is different; they refuse to ask for assistance from their caretakers or nurses [102]. On the other hand, it is stated that some older adults found utilization of technology easy and convenient [103] [104]. It can be learned from literature review that there are common ethics issues between assistive walking devices and robots. Therefore, proper framework can be formed to alleviate and solve the ethical issues with the purpose of satisfying elderly's needs.
Discussion and conclusion
It is evident that assistive walking devices and robots play imperative role in senior citizens' lives. These assistive agents and devices have embedded themselves into human's daily tasks pervasively. It is obvious that robots increasingly have been empowered; therefore, the action of robots might have either destructive effect or useful impact on older adults. In other words, the consequence of assistive robot including walking device is far of significant concern rather than its action. In this case, the concept of consequentialism ethics theory can be applied in assistive walking devices and autonomous agent framework. Moreover, the common ethical issues of both assistive walking devices should be taken into consideration to complete a proper ethics framework which can be applied globally. In addition, a proper ethics framework play beneficial role to promote elderly's standard of living, improve elderly's satisfaction, compensate elderly's disabilities, whilst reducing burdens of expenses related to healthcare services and centres. | 6,879.6 | 2017-11-15T00:00:00.000 | [
"Engineering",
"Medicine",
"Philosophy"
] |
The war on cryptococcosis: A Review of the antifungal arsenal
Cryptococcal meningitis is the most common central nervous system infection in the world today. It occurs primarily, but not exclusively, in immunocompromised individuals and despite substantial improvement in management of clinical events like AIDS, the numbers of cases of cryptococcosis remain very high. Unfortunately, despite several antifungal agents available for treatment, morbidity and mortality rates remain high with this fungal infection. In this Review, we will describe the treatments and strategies for success, identify the failures, and provide insights into the future developments / improvements for management. This sugar-coated yeast can play havoc within the human brain. Our goals must be to either prevent or diagnose disease early and treat aggressively with all our clinical tools when disease is detected.
Cryptococcosis is a global invasive mycosis that is associated with high morbidity and mortality. Patients with HIV infection are at a significantly increased risk of developing this fungal disease. With its profound propensity to locate within the central nervous system (CNS), Cryptococcus spp. frequently causes fungal meningitis. In fact, this encapsulated yeast remains the most common cause of meningitis in HIV-infected individuals living in sub-Saharan Africa. It is estimated that in 2014 there were over 220,000 new cases of cryptococcal meningitis globally resulting in more than 180,000 deaths and is responsible for 15% of all AIDS-related deaths. Although the annual rate of cryptococcal disease has decreased after the widespread use of highly active anti-retroviral therapy (HAART) in developed countries, the prevalence of cryptococcal infection remains at a high level in low and middle-income countries despite the availability of HAART (Tenforde et al. 2017). The one-year mortality after cryptococcal meningitis ranges from 10-30% in North America to up to 50-100% in low-income countries (Rajasingham et al. 2017, Williamson 2017. Furthermore, non-HIV patient populations are also at risk of cryptococcal infection, notably transplant recipients and patients on immunosuppressive therapies. For example, approximately 2-3% of solid organ recipients have been found to develop cryptococcal infection with most patients presenting with disseminated infection (Larsen et al. 1994, Mayanja-Kizza et al. 1998, Milefchik et al. 2008, Pappas et al. 2009). With over 33,000 solid organ transplants performed in the United States alone in 2016, the number of cryptococcal infection cases remains unacceptably high (HRSA 2017). With our increasing use of immune-modulators from corticosteroids, biological modifiers (i.e. anti-TNF and anti-CD54) to new anticancer agents such as ibrutinib (Messina et al. 2017), we can expect the number of patients with cryptococcosis to remain concerning (George et al. 2017).
Antifungal drug therapy remains the mainstay of treatment of these cryptococcal infections. This review aims at highlighting the drugs and strategies utilized for the management of this life-threatening infection, as well as the new developments in treatment.
Therapeutic principles for cryptococcal meningitis -Before examining the details of our arsenal, several therapeutic principles for management of cryptococcal meningitis should be listed: (1) early diagnosis is helpful to a successful outcome of treatment with a lower burden of yeasts and less destruction from a persistent, dysregulated immune system; (2) identification of new, old, and changing risk factors is necessary to properly utilize our outstanding biomarkers for disease; (3) a fungicidal anticryptococcal regimen that rapidly clears viable yeasts from the subarachnoid space is optimal management; (4) major complications of cryptococcal meningitis should be carefully identified and managed. These include (a) increased intracranial pressure and (b) development of the immune reconstitution inflammatory syndrome (IRIS); (5) further understanding of in vitro anticryptococcal yeast susceptibility testing for resistance (there are no validated drug break points) and strain evaluation genetically for identification of possible "bad actor" strains will be helpful. This area requires further research to become more clinically relevant and precise; (6) we must control the concomitant underlying diseases at all costs and this will likely demand attention to the "Goldilocks's Paradigm of Immunology, not too much and not too little but must get it just right"; (7) our goal is to reduce mortality but it must also be accompanied by a reduction in morbidity, which is less chronicled in present reviews.
Amphotericin B & flucytosine (5-FC) -The polyene, amphotericin B, has been the mainstay of treatment of cryptococcal meningitis in HIV-infected individuals and transplant recipients as well as non-HIV and non-transplant patients for several decades. Using a polyene-based regimen has been associated with significant reduction of the yeast burden within the CNS and is correlated with improved survival (Sloan & Parris 2014). Furthermore, the combination of amphotericin B with 5-FC has been shown to be the most fungicidal regimen at present in the clinics and is associated with improved survival among those with cryptococcal meningitis compared to amphotericin B treatment alone (Larsen et al. 1990, de Gans et al. 1992, Brouwer et al. 2004, Dromer et al. 2008, Day et al. 2013, Sloan & Parris 2014. Recently, a large randomized study (ACTA) with over 700 patients confirmed the superiority of this combination (Kanyama & Molloy 2017). In medically-developed countries, using the combination of amphotericin B and 5-FC for induction of HIV-infected patients with cryptococcal meningitis in a three-part strategy of induction-consolidationmaintenance is recommended at a dose of amphotericin B deoxycholate of 0.7-1 mg/kg/day intravenously with 5-FC 100 mg/kg/day orally for at least two weeks (Perfect et al. 2010). Studies that have assessed the use of the higher dose of amphotericin B (1 mg/kg/day vs. 0.7 mg/kg/day) with 5-FC found that despite the improved fungicidal activity, the number of serious adverse events related to higher doses increased (Bicanic et al. 2008).
Substituting lipid formulations of amphotericin B for amphotericin B deoxycholate is favored in most patients in medically developed countries and particularly in transplant recipients who are at significantly higher risk of nephrotoxicity due to their potential concomitant use of other nephrotoxic drugs such as calcineurin inhibitors (Coker et al. 1993, Sharkey et al. 1996, Leenders et al. 1997, Baddour et al. 2005, Singh et al. 2005, Hamill et al. 2010). Liposomal amphotericin B or amphotericin B lipid complex are recommended at a dose of 3-4 mg/kg/day or 5 mg/kg/day, respectively (Perfect et al. 2010). Extending the induction period depends on the prognosis and response of the individual at the time. In general, there is a switch toward primary use of lipid products of amphotericin B in resource-available health care systems and shorter course amphotericin B therapy (one week) in resource-limited healthcare systems for the induction period.
Azoles: fluconazole -Fluconazole is used in the three-part strategy of induction-consolidation-maintenance in combination with amphotericin B as a substitute for 5-FC in the induction phase when 5-FC is not available as well as alone in the consolidation and maintenance phases for cryptococcal meningitis (Perfect et al. 2010). Fluconazole has a fungistatic effect on cryptococcal meningitis and so the burden of yeast must be low or dose of fluconazole must be very high for it to have a significant therapeutic impact in the CNS (Martinez et al. 2000, Rollot et al. 2001, Aberg et al. 2002, Vibhagool et al. 2003, Mussini et al. 2004. For this reason, reduction of fungal burden with amphotericin B in combination with fluconazole is necessary for successful outcome in resource-limited settings (Saag et al. 1992, Haubrich et al. 1994, Menichetti et al. 1996, Robinson et al. 1999, Bicanic et al. 2007). Although amphotericin B and fluconazole in combination are not as potent as amphotericin B plus 5-FC, it clearly can effectively be used in induction therapy at higher doses of fluconazole of ≥ 800 mg/day (Pappas et al. 2009). The use of high dose fluconazole as monotherapy for induction therapy is still associated with significantly higher mortality than combination therapy and needs further investigation to implement successful monotherapy regimens in resource-limited settings (Nussbaum et al. 2010, Gaskell et al. 2014). However, the use of fluconazole with 5-FC in combination as an all oral regimen continues to be studied and most recent data suggest that it is not significantly inferior to the amphotericin B-containing regimens (Kanyama & Molloy 2017).
Following induction therapy, relatively high doses of fluconazole are used in the consolidation phase for a recommended period of eight weeks and this has been shown to be superior to other azoles such as itraconazole (Perfect et al. 2010, Bicanic et al. 2015. The last phase of the three-part strategy, suppression or maintenance, is also continued with fluconazole if the patient is stable after consolidation. The use of a suppression strategy has been validated by several studies, due to the high relapse rate of cryptococcal meningitis prior to HAART (15%) (Perfect 2016). Fluconazole was shown to be superior to itraconazole and even weekly amphotericin B for suppression (Perfect 2016). Suppression with fluconazole is generally continued for at least one year in HIV patients and for 6-12 months in non-HIV patients (Perfect et al. 2010). In HIV patients, after successful introduction of HAART, discontinuing suppressive therapy with fluconazole is recommended when CD4 ≥ 100 cell/µL and an undetectable HIV RNA level is sustained for ≥ 3 months (Perfect et al. 2010).
For other cryptococcal infections, such as pulmonary cryptococcosis, fluconazole monotherapy is recommended as treatment for mild to moderate disease (Perfect et al. 2010).
Itraconazole, voriconazole and posaconazole -The extended-spectrum azoles have a limited experience in treatment of cryptococcosis. Itraconazole has successfully treated cryptococcal meningitis but its poor absorption and limited CNS penetration makes it unreliable (Chotmongkol & Jitpimolmard 1992, de Gans et al. 1992, Pitisuttithum et al. 2005. Voriconazole, which does penetrate into the CNS well, has had some success in primary treatment, particularly in normal hosts (Yao et al. 2015). Posaconazole has excellent activity against Cryptococcus spp. but has limited CNS penetration. The largest series of cryptococcal meningitis treatment with posaconazole had 14/29 (48%) success (Pitisuttithum et al. 2005).
Isavuconazole -The VITAL study was an open label phase III trial to assess the use of oral and intravenous isavuconazole for primary or salvage therapy of Cryptococcus spp. and dimorphic mycoses infections. Of the nine patients with cryptococcal infections receiving isavuconazole, 67% had either success, partial success or stable disease with an all-cause mortality of 11%. Furthermore, preliminary data suggests that isavuconazole has adequate CNS penetration and might likely have a role in treating cryptococcal meningitis and CNS infections, but further studies are necessary. Adverse events occurred at around 37% with this azole but were more favorable than amphotericin B and none of the patients discontinued therapy due to adverse events (Thompson 3rd et al. 2016).
HAART -HAART during management of cryptococcal meningitis is a double-edged sword. On the one hand, it could improve the immune response to the infection but on the other, it might contribute to IRIS as well as possess drug interactions and toxicity to curtail the management plan. Deciding when to initiate HAART is critical. Delaying HAART for five weeks after diagnosis and initiation of antifungal therapy improved survival compared to only one-two weeks (Zolopa et al. 2009) and clearly, early use of HAART in first two weeks of cryptococcal meningitis treatment increased deaths (Boulware et al. 2014). Current recommendations suggest delaying HAART for two-10 weeks after initiation of antifungal therapy (Perfect et al. 2010). Furthermore, in resource-limited countries with difficulty implementing wide spread HAART for HIV patients, initiating HAART within four-five weeks might be required to adequately control the severe underlying disease.
Combination screening & pre-emptive therapy -Cryptococcal antigen (CrAg) can be detected in serum more than three weeks prior to onset of symptoms of cryptococcal meningitis. Global cryptococcal antigenemia in 2014 was estimated at 6% of HIV-infected patients with CD4 counts less than 100 cells/µL, translating to approximately 280,000 individuals (Rajasingham et al. 2017). Asymptomatic cryptococcal antigenemia is well known to occur in patients with HIV infection (Longley et al. 2016). Furthermore, HIV-infected patients with low CD4 counts who have a positive CrAg have significantly higher mortality (Desmet et al. 1989, Nelson et al. 1990, Tassie et al. 2003, Liechty et al. 2007, Micol et al. 2007). As a result, the World Health Organization (WHO) recommended that HIV patients with CD4 ≤ 100 cells/µL should be tested for cryptococcal antigenemia (McKenney et al. 2015). If tested positive, asymptomatic patients would be given oral fluconazole for pre-emptive antifungal therapy unless titer was high (≥ 1:160) in which case CSF should be examined for presence of meningitis (Letang et al. 2015, McKenney et al. 2015. This strategy was shown to not only decrease cryptococcal disease and improve survival, but has proven cost-effective in resource-limited countries with high incidences of disease (Meya et al. 2010, Kaplan et al. 2015, McKenney et al. 2015. The development of cheaper lateral flow assays can be utilized with an accuracy of nearly 100% to diagnose cryptococcal disease and even suggest prognosis, with each 2-fold increase in titers associated with higher mortality at two and 10 weeks (Kabanda et al. 2014). Therefore, in resource-limited areas with high antiretroviral drug resistance and significant prevalence of cryptococcal disease, a CrAg screening and preemptive therapy strategy should be considered (Ssekitoleko et al. 2013).
Immunotherapy -Despite the abundance of in-vitro and in-vivo studies demonstrating the significant contribution of immune modulation to the course of cryptococcal infections, there is a lack of sufficient clinical data to make strong recommendations for the use of immunotherapy in treating these infections (Antachopoulos & Walsh 2012). In murine models of cryptococcal infections, administering IL-12 and IL-18 have been shown to significantly reduce the fungal cell burden in many organs and enhance elimination of Cryptococcus spp.. These murine models have also shown an increase in IFN-γ after administering IL-12 or IL-18, with anti IFN-γ antibodies eliminating the protective effect of the interleukins (Kawakami et al. 1996b(Kawakami et al. , 1997. Direct administration of IFN-γ prolonged survival in murine models of cryptococcosis (Kawakami et al. 1996a). Administration of TNF-α in-vitro increased the anticryptococcal activity of macrophages and combining TNF-α with granulocyte-macrophage colony stimulating factor enhanced phagocytosis by murine macrophages (Collins & Bancroft 1992, Kawakami et al. 1999. Murine models using TNF-α related antibodies decreased fungal cell burden, mediated by an increase in IFN-γ (Zhou et al. 2006). A study of 62 HIV patients receiving antifungal therapy for cryptococcal meningitis showed that survivors compared to non-survivors had higher CSF concentrations of IFN-γ, TNF-α, IL-6 and 8 (Siddiqui et al. 2005). Furthermore, some non-HIV, non-transplant and supposedly immunocompetent patients with cryptococcal disease have been shown to possess autoantibodies to GM-CSF (Saijo et al. 2014).
With this very supportive background for immunotherapy, several human studies have been performed. First a phase II trial evaluating the use of adjuvant recombinant IFN (rIFN)-γlb in HIV patients with cryptococcal meningitis showed accelerated clearance of CSF cultures but results did not achieve statistical significance (Pappas et al. 2004). Second, another recombinant interferon-gamma study showed a positive effect on the reduction of yeasts in the CSF without adverse effects (Jarvis et al. 2012). Despite these two positive studies for adjunct recombinant gamma interferon current guidelines have a low level recommendation for the addition of rIFN-γ to the antifungal regimen of patients with persistent cryptococcal infection (Perfect et al. 2010). This hesitancy to elevate immunomodulation therapy as first line therapy in the induction regimen may be due to concerns about monitoring for appearance of IRIS in CNS infections and the general lack of robust clinical trials.
Other immunotherapeutic options being studied involve the use of monoclonal antibodies targeting virulence factors of Cryptococcus spp. such as capsule, as well as radiolabelled monoclonal antibodies to deliver radiation to cryptococcal cells and induce an apoptosis-like cell death (Larsen et al. 2005, Antachopoulos & Walsh 2012. Adjuvant steroid use -Given that adjuvant steroid use in patients with other types of meningitis like tuberculosis and some types of bacterial meningitis has been shown to reduce morbidity and mortality, a large clinical trial was performed to evaluate the combination of dexamethasone with amphotericin B and fluconazole antifungal therapy. The trial was stopped early due to the increased mortality at 10 weeks and six months as well as more adverse events in the group receiving dexa-methasone compared to placebo. The adverse events included progression of infections, nephrotoxicity and cardiac toxicity. Furthermore, the addition of corticosteroids reduced the fungicidal activity of the drugs (Beardsley et al. 2016). However, it is important to note that this study evaluated the routine administration of dexamethasone at the beginning of therapy and not its use in management of IRIS. There have been some expert opinions on the use of steroids to curb intrathecal inflammation after achieving microbiological control (Panackal et al. 2016). A study involving patients with cryptococcal spinal arachnoiditis showed that excessive inflammation prolonged symptoms that later improved with administration of corticosteroids . Therefore, despite the disadvantages of routine use of dexamethasone in treatment of cryptococcal meningitis, it is important to recognize that steroid therapy may be life-saving during IRIS. A taper of corticosteroids can be considered over two-six weeks, based mainly on expert opinions.
Lumbar puncture (LP) -Studies have shown that raised intracranial pressure (ICP) in cryptococcal meningitis is associated with increased mortality (Graybill et al. 2000). The withdrawal of CSF appears to be effective in reducing intracranial pressure caused by an outflow obstruction of the arachnoid villi by clumping yeasts (Denning et al. 1991). A lumbar puncture to control ICP and symptoms has been recommended and a study evaluating the impact of therapeutic LPs on survival showed a 69% relative improvement associated with therapeutic LPs irrespective of initial ICP, suggesting that any patient with cryptococcal meningitis may benefit (Perfect et al. 2010. Hence, utilizing therapeutic LPs in the management of cryptococcal meningitis must be considered at times but its precise use and frequency are not yet defined. New strategies: advancing cryptococcal meningitis treatment for Africa (ACTA) -ACTA is a phase III trial aimed at finding more feasibly implemented regimens for resource-limited healthcare systems and to determine if 5-FC or fluconazole is better adjuvant treatment. Three treatment strategies were compared: (1) an all oral strategy, of high-dose fluconazole and 5-FC for two weeks; (2) amphotericin B plus either high-dose oral fluconazole/5-FC for one week only; (3) amphotericin B plus high dose fluconazole / 5-FC for two weeks (control). An early presentation from the trial revealed that the short course arms of amphotericin B and the all-oral fluconazole/5-FC were non-inferior to the control group. Furthermore, using 5-FC as adjuvant therapy with amphotericin B led to lower mortality compared with fluconazole. The one-week amphotericin B plus 5-FC arm had the lowest mortality among the treatment arms (Kanyama & Molloy 2017).
AmBisome plus high dose fluconazole for treatment of HIV-associated cryptococcal meningitis (AMBITION-cm)
-AMBITION-cm is a phase II randomized, controlled, non-inferiority trial that has recently shed some light on the use of a few large doses of liposomal amphotericin B with high dose fluconazole for the treatment of cryptococcal meningitis in HIV patients. The short-course highdose polyene arms received one, two or three doses of liposomal amphotericin B while being given a high-dose fluconazole backbone regimen. Data has shown the rate of clearance of CSF cultures in all the short-course, highdose arms to be non-inferior to the standard two-week course of daily polyene, with none of the participants necessitating any treatment interruptions. The single dose (10 mg/kg/day) liposomal amphotericin B regimen is being taken to a phase III trial (Jarvis et al. 2017). These new strategies have the potential to reduce cost and toxicity.
Pipeline drugs -APX001 is a first-in-class antifungal compound inhibiting fungal protein Gwt1 with no cross-reaction with the human protein. It has recently been shown that combining APX001 with fluconazole will reduce the fungal burden in mice with cryptococcal meningitis significantly compared to either agent alone. It also has the added advantage of oral bioavailability and even more potent anticryptococcal compounds in this class have been discovered (Schell et al. 2017). AR-12, a celecoxib derivative, repurposed for antifungal therapy, inhibits fungal acetyl-CoA synthase I and down regulates host chaperone proteins that reduce host immune response. It has shown antifungal activity against Cryptococcus spp. as well as candida and moulds (Perfect 2017). T-2307, an allylamine that inhibits fungal mitochondrial membrane potential has been shown to have extremely potent antifungal activity in vitro and against cryptococcosis in animal models. The azoles have further evolved and with new technology to reduce CYP450 interactions, several compounds, VT1129 and VT1598, have shown outstanding fungicidal activity in murine models of cryptococcosis (Perfect 2017).
In the area of repurposing available drugs, both sertraline and tamoxifen have the potential to be used in combination with existing antifungal drugs for treatment of cryptococcal meningitis and cryptococcosis. In fact, sertraline has been successfully used in a pilot study and a more definitive trial is heading toward completion (Rhein et al. 2016). The calcineurin pathway has also been shown to be an important factor in cryptococcal virulence and utilizing calcineurin inhibitors such as tacrolimus / cyclosporine in combination with antifungal drugs could have potential in the future. However, it is likely that it will be necessary to discover congeners of these landmark drugs that possess more potent antifungal activity but also reduced immunosuppressive activities (Perfect 2017).
Neurapheresis -Neurapheresis is a new, potential technology for managing cryptococcal meningitis that is currently in development. Cryptococcal meningitis causes an increase in CSF pressure by mechanical occlusion of the arachnoid villi and thus rapid killing and removing yeasts from the subarachnoid space may change our management of cryptococcal meningitis. Utilizing the size of the encapsulated yeast, neurapheresis involves mechanically filtering the CSF from the subarachnoid space (SAS) using a membrane with pores ≤ 5 μm in diameter to trap the fungus. A peristaltic pump circulates the CSF of rabbits with cryptococcal meningitis through this filter and reintroduces it back into the SAS. After six hours of cycling through the filter, the fungal colony forming units in the CSF were reduced by up to 99%. Further studies may aim at combining neurapheresis with direct antifungal drug delivery such as amphotericin B intrathecally to create high efficacy regimens along with systemic antifungal agents (Cutshaw et al. 2016).
In conclusion -Mortality and morbidity still remain high for cryptococcal meningitis despite several strategies for management. Infections in the CNS can be unforgiving and therefore, all tools available must be utilized. Complications such as increased intracranial pressure and IRIS must be recognized and treated despite unclear guidelines. Early diagnosis, expert care ) and rapidly fungicidal regimens will allow more optimal management. However, clearly, the discovery of more potent and less toxic antifungal agents will be helpful to reduce the negative impact of this CNS fungal infection. We are poised today to make an improvement in our antifungal arsenal tomorrow.
AUTHORS' CONTRIBUTION
Dr Mourad and Dr Perfect contributed to the article by conducting the appropriate literature review, drafting the article and revising important content; Dr Perfect gave final approval of the version to be submitted. | 5,286.4 | 2018-02-19T00:00:00.000 | [
"Medicine",
"Biology"
] |
Quantum gravity stability of isotropy in homogeneous cosmology
It has been shown that anisotropy of homogeneous spacetime described by the general Kasner metric can be damped by quantum fluctuations coming from perturbative quantum gravity in one-loop approximation. Also, a formal argument, not limited to one-loop approximation, is put forward in favor of stability of isotropy in the exactly isotropic case.
Introduction
Standard [1] and loop [2,3,4] quantum cosmology heavily depends on the implicit assumption of (quantum) stability of general form of the metric. As a principal starting point in quantum cosmology, one usually chooses a metric of a particular (more or less symmetric) form. In the simplest, homogeneous and isotropic case, the metric chosen is the (flat) Friedmann-Lemaï¿oetre-Robertson-Walker (FLRW) one. Consequently, (field theory) quantum gravity reduces to a much more tractable quantum mechanical system with a finite number of degrees of freedom. It is obvious that such an approach greatly simplifies quantum analysis of cosmological evolution, but under no circumstances is it obvious to what extent is such an approach reliable. The quantum cosmology approach could be considered unreliable when (for example) the assumed symmetry of the metric would be unstable due to quantum fluctuations. More precisely, in the context of the stability, one can put forward the two, to some extent complementary, issues (questions): (1) assuming a small anisotropy in the almost isotropic cosmological model, have quantum fluctuations a tendency to increase the anisotropy or, just the opposite, to reduce it? (2) assuming we start quantum evolution from an exactly isotropic metric should be we sure that no quantum fluctuations are able to perturb the isotropy? In this Letter, we are going to address the both issues of the quantum stability of spacetime metric in the framework of standard covariant quantum gravity. Namely, in Section 2, we address the first stability issue for an anisotropic (homogeneous) metric of the Kasner type, to one loop in perturbative expansion. In Section 3, we give a simple, formal argument, not limited to one loop, concerning the second issue.
One-loop stability
The approach applied in this section is a generalization of our approach used in [5] in the context of FLRW geometry. In our present work, the starting point is an anisotropic (homogeneous) metric, of the Kasner type, i.e.
where k i are the Kasner exponents. One should stress that we ignore any assumptions concerning matter content, and consequently, no prior bounds are imposed on k i .
In the perturbative approach then is small, as expected, closely to the expansion (reference) point t 0 . Using the gauge freedom to satisfy the harmonic gauge condition (see, the second formula in (11)), we gauge transform the gravitational field h µν as follows, where the gauge parameter Then, and, skipping the prime for simplicity, we have where spacetime indices are being manipulated with the Minkowski metric η µν . Now, we should switch from our present h µν to standard perturbative gravitational variables, i.e. to the "barred" fieldh µν defined bȳ andh The Fourier transform ofh µν is where, for the h i of the explicit form (4), we have (from now on, we denote classical gravitational fields with the superscript "c") According to (A.8) a one-loop quantum contribution corresponding to the classical metric (12) equals Defining the auxiliary functioñ we have Its Fourier reverse is where ψ is the digamma function, and according to (14) (17)). For the Kasner exponents k ∈ 1 4 , 1 , δh A (k) is evidently decreasing function thus supporting the damping of cosmological anisotropy.
Performing the gauge transformation in the spirit of (7) we can remove the first (time) component in (18), and (once more, skipping the prime for simplicity) we get a quantum contribution to the Kasner metric Only the "anisotropic" part of (19), i.e.
can influence the anisotropy of the evolution of the Universe. Since the dependence of δh A i on k j is purely "diagonal" (δh A i depends only on k j with j = i, see (17)), we have the following simple rule governing (de)stabilization of the isotropy: the increasing function δh A (k) implies destabilization (there is a greater contribution of quantum origin to the metric in the direction of a greater classical expansion), whereas the decreasing function implies stabilization. Unfortunately, δh A (k) is not a monotonic function because the digamma function ψ oscillates, and moreover (17) is (in general 1 ) a Λcutoff dependent function. Nevertheless, if we assume the point of view that it is not necessary to expect or require the stability of the isotropy in the whole domain of the Kasner exponents k i , but only for some subset of them, considered physically preferred, a definite answer emerges. Since k = 1 2 corresponds to radiation, and k = 2 3 corresponds to matter, we could be fully satisfied knowing that δh A (k) is monotonic in the interval k ∈ 1 4 , 1 (⊃ 1 2 , 2 3 ). Furthermore, since α 1 > 0 for any spin (see , Table A.1), δh A (k) is a decreasing function in this interval, implying (quantum) damping of the anisotropy (see, Fig. 1).
Above one loop and final remarks
Section 2 has been limited to one-loop perturbative analysis of the stability of isotropy of cosmological evolution. But one can give a simple, formal argument ensuring stability of the "exactly isotropic" expansion, i.e. for which is perturbative but not limited to one loop, making use of (A.1), where now D and Π is the full propagator and the full vacuum polarization, respectively. Since no spatial coordinate x i is singled out in (22), and consequently no spatial coordinate can be singled out on LHS of (A.1). This argument is only of purely formal interest as any other fluctuations can destabilize the isotropy. Recapitulating, as far as perturbative quantum gravity in one-loop approximation is concerned we have observed that in a (hopefully) physically preferred region of the Kasner exponents k i ∈ 1 4 , 1 we should expect damping of anisotropy by quantum fluctuations, thus supporting reliability of the approach of quantum cosmology in this regime. One should point out that this result is subjected to several limitations. First of all, since κh i (t) should be small, t should be close to t 0 because of (4). But for t ≈ t 0 , by virtue of (18), we have i.e. for pure radiation (see, [5]). Intuitively, it could be explained by the fact that a scale-independent classical source, the photon field, implies vanishing of scale-dependent logarithms (no quantum "anomaly"). The coefficients entering (A.8) taken from [8,9,10,11] (see, also [12]. In particular, α 1 enters (20), and its positivity for any spin supports the damping of cosmological anisotropy.
Therefore, to stay in the perturbative regime, t 0 should be greater than t Planck , and we have to be away from the (primordial) classical singularity. Instead, for t 0 many orders greater than t Planck , according to (23) the quantum contribution becomes small, and moreover classical matter (including radiation) is expected to begin to play a role. In particular, it was shown in [13] 2 that the effects of viscosity in the radiation and pressure from collisionless radiation ensure isotropization (and stability) of cosmological evolution at late times. is the free graviton propagator in the harmonic gauge, and Π αβ µν (p) is the (one-loop) graviton vacuum polarization (self-energy) tensor operator. Here with the coefficients α 1 and α 2 given in Table A Then the final formula assumes the form: h q µν (p) = iκ 2 p 2 I(p 2 )(2α 1 E + 4α 2 P)h c µν (p) . (A.8) | 1,796.2 | 2011-07-18T00:00:00.000 | [
"Physics"
] |
Tailoring surface phase transition and magnetic behaviors in BiFeO3 via doping engineering
The charge-spin interactions in multiferroic materials (e.g., BiFeO3) have attracted enormous attention due to their high potential for next generation information electronics. However, the weak and deficient manipulation of charge-spin coupling notoriously limits their commercial applications. To tailor the spontaneous charge and the spin orientation synergistically in BiFeO3 (BFO), in this report, the 3d element of Mn doping engineering is employed and unveils the variation of surface phase transition and magnetic behaviors by introducing chemical strain. The spontaneous ferroelectric response and the corresponding domain structures, magnetic behaviors and spin dynamics in Mn-doped BFO ceramics have been investigated systematically. Both the surface phase transition and magnetization were enhanced in BFO via Mn doping. The interaction between the spontaneous polarization charge and magnetic spin reorientation in Mn-doped BFO are discussed in detail. Moreover, our extensive electron paramagnetic resonance (EPR) results demonstrate that the 3d dopant plays a paramount role in the surface phase transition, which provides an alternative route to tune the charge-spin interactions in multiferroic materials.
microscopy (PFM), superconducting quantum interference device (SQUID) magnetometer, differential scanning calorimetry (DSC) and electron paramagnetic resonance (EPR) spectroscopy can be found in Methods. We name the undoped BiFeO 3 , 0.5 at.% Mn and 2.0 at.% Mn doped BiFeO 3 as BFO, 0.5% Mn-BFO and 2.0% Mn-BFO, respectively. Figures 1(a) and 1(b) display the atomic configuration of BFO unit cell with Mn substitution and 2.0% Mn-doped BFO supercell, respectively. Notably, BFO has a rhombohedrally distorted perovskite-type structure of space group R3c, and its unit cell has the lattice parameter a 5 5.57874 Å and c 5 13.8688 Å [ Fig. (a)]. The distorted FeO 6 /MnO 6 octahedra are formed with Fe 31 ions surrounded by six neighboring oxygen anions, and two octahedra are connected by sharing their oxygen. The introduction of the Mn ions into the Fe ion site significantly destabilizes the pole symmetry due to chemical strain effects, which stem from the size mismatch between the two Bsite cations (Mn and Fe ions) 11 . Additionally, the multi-valence states of Mn ions provide additional force to drive the charge compensation. Figure 1(c) shows the XRD patterns of the undoped and Mndoped BFO samples with various Mn doping concentrations. All XRD patterns can be identified as a rhombohedral phase (JCPDS File No. 71-2494) with space group of R3c. Clearly the crystallographic symmetry has not been modified by the Mn substitution, since the Mn solubility in BFO is close to 30% at Fe substitution sites 12 . A small amount of Mn-related secondary phase of Bi 25 Fe 1-y Mn y O 39 is observed within the detection limits; and yet this phase is a paramagnetic phase so that it will not affect our analysis discussed later 13 . As shown in the inset of Fig. 1(c), the cell parameters calculated from the (001) c peak based on the pseudocubic lattice type are 3.969, 3.955 and 3.941 Å for BFO, 0.5% Mn-BFO and 2.0% Mn-BFO, respectively, confirming that the different ionic radii of Mn and Fe ions generate the chemical strain.
Figures 2(a), 2(b), and 2(c) show the high-resolution TEM (HRTEM) images of the undoped and Mn-doped BFO samples, respectively, along with the corresponding selected area electron diffraction patterns (SAED) shown in the insets. These results further confirmed that the BFO samples are well crystallized with perovskite structure, which are consistent with our XRD results. The resolved crystalline domain with a uniform interplanar spacing of 3.96, 3.95 and 3.94 Å are corresponding to the (001) c d-spacing of the pseudocubic structure. The electron diffraction (ED) patterns in Figs. 2(d), 2(e), and 2(f) reveal the details of octahedral tilting in the samples 14 . The superstructure patterns are associated with the antiphase rota-tions of FeO 6 octahedra at 1 2 hkl f g c positions around six out of 12 ,110. c zone axes 15 . The intensity of the superstructure reflections is enhanced with increasing Mn content, indicating that the FeO 6 octahedral symmetry has been significantly tilted due to the Mn substitution at the Fe-sites. The tilted octahedral structure will further influence the spin orientation of BFO host as a consequence of the change of ''-Fe-O-Fe(Mn)-'' bond angle and the perturbation of the spatial spin modulation. Meanwhile, the atomic (polar) displacements and the chemical strain in the bulk surface and inside are not the same, which can result in different charge release 16 . The piezoresponse force microscope (PFM) is used to evaluate the piezoelectric response of samples 17,18 . The surface morphologies of the polished samples are shown in Figs. 3(a), 3(d) and 3(g). The AFM images were recorded in an AC scanning mode with a deflection set point of 0.2 V and at a scanning rate of 0.5 Hz. The spring constant of the tip is about 2 N/m. It is clear that the surfaces are smooth enough for PFM scanning. The PFM experimental setup was described in detail previously 19 The majority of the domains in undoped BFO are oriented with the polarization upward (yellow bright), and only small portion of the domains exhibits the polarization oriented downward (corresponding to a smaller concentration of the 180u domain walls). In contrast, Mn-doped BFO phase images show a ratio between upward and downward polarized domains close to 654 for 0.5% Mn-BFO and 456 for 2.0% Mn-BFO, respectively. A number of areas in different samples were scanned via PFM to confirm the domain distribution is uniform in the sample. Furthermore, the Mn-doped BFO specimens exhibit a higher volume density of domain walls than that of undoped one, suggesting that the Mn ion can effectively reduce the domain size in BFO 20 . This modification of ferroelectric domain structure via Mn doping enables us to utilize the chemical strain to control and tune the spontaneous charge state. Meanwhile, the electronic transition between the domain and domain wall opens a door for us to incorporate magnetic degree of freedom into multiferroic BFO. Before delving into the magnetic behaviors, the macroscopic ferroelectric and dielectric properties were investigated first at room temperature. Figure 4(a) shows the ferroelectric polarization versus electric field (P-E) curves of the undoped and Mn-doped BFO samples measured at ,1 kHz frequency and at room temperature. Noted that the loops are not saturated, which can be ascribed to the leakage current due to the defects (e.g. oxygen vacancies), and the relatively low applied electric field cannot switch the ferroelectric domain effectively for thick samples. Nevertheless, with Mn additions, an enhancement in polarization is achieved. The enhanced polarization can be attributed to the elevated density of ferroelectric domains, considering that the domain structure obtained via PFM in Mn doped BFO sample, Additionally, the Mn substitutions increase the distortions in the FeO 6 octahedral and Fe-O-Fe bond angles, and thus the tetragonality in the crystal structure. The resultant chemical strain from the structure variation can augment the polar displacement of Bi 31 ions and the 6s 2 lone pair electrons of Bi 31 ion 21 , as a result, the increased polarization is as expected. The leakage current density (J) versus the applied electric field (E) is shown in the inset of Fig. 4(a). Undoped BFO shows the lowest leakage current, while the leakage of Mn-doped BFO samples are nearly one order of magnitude higher than that of undoped BFO. This is consistent with the increased density of more conductive ferroelectric domain walls in Mn-doped BFO. As for the Mn-doped BFO, the leakage current increases with increasing Mn concentration. This can be ascribed to the fact that the substitution of Mn on the host Fe site provide more electrons due to Mn ion multivalent compared with Fe ions, which agrees with the previous results 22 . Figure 4(b) illustrates the dielectric constant, e r , and dielectric loss, tand, as a function of frequency for pure and Mn-doped BFO. At frequency (10 2 , 10 6 Hz), e r and tand of the samples varies slightly. The Mn doping significantly enhance the dielectric constant, whereas only limited influence is noticeable on the dielectric loss. This is due to the charge compensation of the charged defects, such as At high frequency (.10 5 Hz), a slight increase of tand is observed, indicating that the charge carriers cannot catch up with the frequency of the applied field. It is noteworthy that the increase of dielectric loss at low frequency is highly likely originated from the space charges effects 23 . Magnetization versus magnetic field (M-H) curves of all samples are measured at a maximum magnetic field of 5000 Oe at 300 K as shown in Fig. 5 clearly verify the ferromagnetic nature. The enhanced magnetization is ascribed to the fact that the Mn ions change the canting of antiferromagnetically ordered spins 24 . The net magnetization might increase with the variation in canting angle 25 . In addition, Mn dopants can break the superexchange between Fe ions. According to the Goodengough-Kanamori rules 26 The increase in the Mn content results in less antiferromagnetic superexchange interactions but induces more frustrated antiferromagnetic superexchange interaction 12 . This reduces the Néel temperature, T N and enhances macroscopic magnetization, which is in good agreement with our DSC measurements. The DSC curves for undoped and Mn-doped BFO are displayed in Fig. 5(b). The inset of Fig. 5(b) shows the Néel temperature (T N ), which is determined by the specific heat measurement, as a function of Mn-concentration. The T N value decreases with increasing Mn content at a rate of 0.63 K per 1% mole Mn, which is calculated through a linear fitting of the experimental data. Then we apply a simple rule of mixture and calculate T N as the weighted average between the transition temperature of BFO and BiMnO 3 , where x is the molar concentration of Mn in the mixture. Given that T BiMnO3 N eel~1 05 K 27 and T BiFeO3 N eel~6 43 K, the estimated decreasing rate of T N is 0.54 K per 1% mole Mn, which is smaller than that obtained from our experimental results. Such discrepancy can be attributed to the oxygen vacancies (V .. O ) which act as the charge compensation associated with the reduction of Mn 41 to Mn 31 ion states 28 .
To determine the interaction between the electronic properties and the magnetic spin behaviors, we perform a series of measurements of EPR spectroscopy. These measurements are very sensitive to the surface phase transition involving the electronic charges and magnetic spin dynamics 16 . Here, the X-band EPR spectra were measured from 120 to 300 K for Mn-doped BFO. As shown in Fig. 6(a), Mn doping remarkably affects the EPR spectra. A typical ferromagnetic (FM) spin-wave resonance can be divided into two resonances segments, i.e., low field (LF) and high field (HF) resonance shoulders, which are related with distinct defect types and magnetic anisotropy. The LF resonance with g-factor near 4 has a characteristic of magnetically isolated high spin Fe 31 (S 5 5/2) in a low symmetric environment, which corresponds to the defect complexes, e.g., Fe 2z defect dipoles. On the other hands, the HF resonance with g-factor close to 2 can be ascribed to the Fe ions and is correlated with the resonant absorption in the cycloidal spin structure and the defect induced free spins 29,30 . Furthermore, a striking absorption splits into two peaks in the Mn-doped BFO spectra, indicating that the magnetic environment for the unpaired electrons in Fe ions has significantly changed due to the Mn ion substitution and the existence of Fe 31exchange-coupled magnetic secondary phase 31,32 . The theoretical g factor can be expressed in terms of spin Hamiltonian 33,34 where the first term describes the electronic Zeeman interaction with g 5 2.0023, B and E are the axial and rhombic zero-field splitting (ZFS) parameters, S x, S y and S z are components of spin along three mutually perpendicular axes x, y and z. In a polycrystalline sample, the axis of symmetry of the different magnetic centers is randomly oriented with respect to the magnetic field, therefore, only one resonance line corresponding to the higher field with an isotropic gfactor value , 4 can be observed 35 . The g-factor value exhibits a sharp increase at the HF resonance whilst it decreases at the LF resonance with increasing temperature, as shown in Fig. 6(b). The g-factor value is strongly correlated with the Mn concentration at higher temperature in LF resonance range, suggesting the existence of a high con- and Fe defect dipoles in a lower Mn concentration doped BFO. A sharp variation at HF below 190 K can be attributed to magnetic fluctuations as the establishment of long range order 36 . The spin-reorientation stems from the charge localization at low temperature and thermally activated hopping-induced ferromagnetic interactions 37 . The surface phase transition at 140 K and spin-glass behaviors at 200 K were reported for BFO, which is due to interaction between the structural strain (i.e., the atomic displacements and oxygen octahedral tilts) and the spin 16 . Our results also reveal these anomalies in pure BFO, as shown in the Fig. 6(b), where g-factor for BFO shows transitions at 140 and 200 K in HF and LF, respectively. In addition, the transition at , 270 K is ascribed to the polar H 2 O molecules 38,39 melting from the left condensed water during cooling, which is beyond scope of the present work. The surface phase transitions were also observed in Mn doped-BFO samples, reflecting that the Mn doping can tailor the charge release and the onset of glassy behaviors. Clearly, with increasing Mn ion concentration, the phase transition occurs at higher temperatures, indicating that the more the Mn ions the more energy is required to drive the transition. Moreover, the degree of canting of spins is also related with the g-value and a large g value results in a severe spin canting. With increasing Mn ion concentration, the short-range antiferromagnetic superexchange interaction transfers to a long-range antiferromagnetic coupling due to the distorted crystal structure 35 . The linewidth is related with the spin-spin relaxation time and the spin-phonon relaxation time 34 . The peak to peak linewidth, DH pp is extracted in order to clearly distinguish the temperature dependence linewidth, in particular, the ln(DH pp T) is plotted as a function of 1/T, as shown in Fig. 6(c). The linewidth for the Mn doped BFO is narrower than that of the pure BFO in the temperature range, suggesting that the spin-lattice interaction is larger in Mn doped BFO than that of the pure BFO. The narrowing of the EPR signals can be ascribed to the hopping of the e g electrons, which averages out the spin-spin interactions between Mn and Fe magnetic moments, such as Dzyloshinsky Moriya (DM) exchange interaction and the anisotropic crystal-field (CF) 40 . The increased spin-lattice interaction in Mn-doped BFO can contribute to the charge injection during the phase transition at 150 K. Figure 6(d) shows the integrated intensity I EPR as a function of temperature, where the integrated intensity of EPR spectra are proportional to the concentration of unpaired electrons in the samples. It is found that the I EPR for BFO and 2% Mn-BFO decreases abruptly at 140 K, implying that the existence of internal fields caused by charge release. This behavior is associated with the long-range antiferromagnetic superexchange interaction between Fe-O-Fe for BFO and Fe-O-Mn for 2.0% Mn-BFO, respectively. In contrast, for 0.5% Mn-BFO, an abrupt increase occurs at about 200 K, which is corresponding to the spin rearrangement. The long range antiferromagnetic ordering of magnetic moments does not happen at low temperature. As temperature increases to 200 K, the long-range antiferromagnetic super exchange interaction between Fe-O-Mn commences. The Mn ion EPR signal is contributed by both the tetragonal Jahn-Teller (J-T) distorted Mn 31 (3d 4 , S 5 2) and Mn 41 (3d 3 , S 5 3/2) ions.
Discussion
Bismuth ferrite, i.e., BiFeO 3 (BFO) has become an extremely exciting material system nowadays due to its unique properties of possessing room-temperature ferroelectric and magnetic order. Up to now, despite the intense research activities, however, there remain a number of open questions concerning the structure, phase diagrams, ferroelectric and magnetic characteristics in BFO. Doping engineering is widely used to tailor the band structures of bulk and nanoscale materials, promising and facilitating the construction of various multifunctional materials and devices [41][42][43][44][45][46][47][48][49][50][51] . To stabilize the perovskite state and to induce ferromagnetism at room temperature in BFO, 3d element doping engineering was adopted as an effective strategy in our experiments. This report reveals that, substituting Fe site with Mn ions in BFO induces ferroelectric domain structure modulation, surface phase transition and manipulated magnetic behaviors. Homogeneous samples from microstructural point of view were obtained for all the compositions analyzed. The spontaneous ferroelectric response and corresponding domain structures, magnetic behaviors and spin dynamics in Mn-doped BFO have been investigated systematically. Both the surface phase transition and magnetization are boosted in BFO via Mn doping engineering. The interaction between the spontaneous polarization charge and magnetic spin in Mn-doped BFO are discussed in detail. Our temperature dependent EPR results further elucidate that the 3d dopant engineering plays a paramount important role in the surface phase transition and provides an alternative dimension to tune the spin-charge and spin-lattice interactions in multiferroic materials. The extrinsic properties are impossible to be satisfactorily controlled by normal ceramics processing. By tailoring the extrinsic contributions to the dielectric and magnetic properties, the ceramic BFO system might be a valuable multiferroic materials for magnetoelectric/magnetoelastic applications, in particular at room temperature.
In summary, the Mn dopants effect on the ferroelectric domain structure and magnetic phase transitions has been systematically investigated via PFM and EPR. The magnetic phase transitions associated with the spin reorientation are observed in the Mn-doped BFO and the transition temperatures are increased by the substitution of Mn in Fe-site of the BFO. In addition, the electrical conductivity is increased significantly through the charge compensation of electrons and variation of the density of domain walls in the Mndoped BFO.
Methods
Materials synthesis. The pure and Mn doped BFO powders are fabricated using the conventional solid solution methods with starting materials at high purity Bi 2 O 3 , Fe 2 O 3 and MnO 2 (Sigma Aldrich, .99.99%) at 830uC for 2 hours. The starting materials are Bi 2 O 3 , Fe 2 O 3 and MnO 2 (Sigma Aldrich, 99.99%) powders. These are mixed in an agate mortar and pestle for ,30 min, with acetone added periodically to form a paste. Pellets are pressed uniaxially and placed on sacrificial powder of the same composition in alumina boats. Initial firing was at 830uC for 30 min after which the pellets were ground, repressed, fired again at 830uC for 30 min, ground, and repressed isostatically at 300 MPa, given a final firing at 830uC for 2 hours in air, and then cooled naturally to the room temperature.
Characterization and instrumentation. The crystalline structure of the BFO powders was characterized using X-ray diffractometer (Shimadzu XRD-7000) with a Cu Ka radiation. Differential scanning calorimetry (DSC) measurements were performed in a Netzsch DSC404C (Selb, Germany) from room temperature to 850uC with heating/cooling rates 10uC min 21 . TEM was performed by Philips CM200 FEG TEM operated at 200 kV and a FEI Nova G2 TEM operated at 200 kV. TEM specimens were prepared using an FEI Nova 200 Nanolab focused ion beam (FIB) and Zeiss Auriga focused ion beam (FIB). This was done followed by a ''lift-out'' technique. Magnetization measurements were carried out using a superconducting quantum interference device (SQUID) magnetometer (Quantum Design). The surface topographies, behaviors of polarization, and nanoscale ferroelectric www.nature.com/scientificreports SCIENTIFIC REPORTS | 5 : 9128 | DOI: 10.1038/srep09128 polarization switching of the pure and Mn-doped BFO pellets were characterized using a commercial atomic force microscope (AFM) (Asylum Research MFP-3D) with combination techniques of piezoresponse force microscope (PFM) for the local polarization detection. A Pt-coated cantilever (Olympus AC240, nominal spring constant ,2 N/m, resonant frequency ,70 kHz) was used with a scanning rate of 0.5 Hz. Both surfaces of the sample pellets were polished and coated with silver paste as electrode in order to measure the electrical properties. The macroscopic currentvoltage characteristics and ferroelectric measurements were performed using an electrometer (Keithley 6487) and ferroelectric tester (Radian, USA). The dielectric properties were measured at room temperature using an impedance analyzer (Wayne Kerr Electronics 6500B Series) at zero bias voltage over the frequency range from 10 2 to 10 6 Hz. The X-band (9.5 GHz) Electron Paramagnetic Response (EPR) measurements were performed on EPR spectrometer with a flow nitrogen cryostat (120 , 300 K), and as a standard field marker, DPPH with a g value equal to 2.0036 were used for the determination of the resonance magnetic field values. | 4,866.4 | 2015-03-16T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Defined astrocytic expression of human amyloid precursor protein in Tg2576 mouse brain
Abstract Transgenic Tg2576 mice expressing human amyloid precursor protein (hAPP) with the Swedish mutation are among the most frequently used animal models to study the amyloid pathology related to Alzheimer's disease (AD). The transgene expression in this model is considered to be neuron‐specific. Using a novel hAPP‐specific antibody in combination with cell type‐specific markers for double immunofluorescent labelings and laser scanning microscopy, we here report that—in addition to neurons throughout the brain—astrocytes in the corpus callosum and to a lesser extent in neocortex express hAPP. This astrocytic hAPP expression is already detectable in young Tg2576 mice before the onset of amyloid pathology and still present in aged Tg2576 mice with robust amyloid pathology in neocortex, hippocampus, and corpus callosum. Surprisingly, hAPP immunoreactivity in cortex is restricted to resting astrocytes distant from amyloid plaques but absent from reactive astrocytes in close proximity to amyloid plaques. In contrast, neither microglial cells nor oligodendrocytes of young or aged Tg2576 mice display hAPP labeling. The astrocytic expression of hAPP is substantiated by the analyses of hAPP mRNA and protein expression in primary cultures derived from Tg2576 offspring. We conclude that astrocytes, in particular in corpus callosum, may contribute to amyloid pathology in Tg2576 mice and thus mimic this aspect of AD pathology.
| INTRODUCTION
Transgenic animal models are essential experimental tools to mimic aspects of Alzheimer's disease (AD) in order to study pathogenic mechanisms and to test therapeutic strategies. Typically, the transgenes are overexpressed and encode mutant forms of disease-related human tau and amyloid precursor (APP) proteins or their processing enzymes (Bodendorf et al., 2002;Hsiao et al., 1996;Oakley et al., 2006;Oddo et al., 2003;Sturchler-Pierrat et al., 1997). With regard to the amyloid pathology, a large magnitude of human APP (hAPP) constructs with mutations within the Abeta sequence or close to the beta cleavage site of the APP was used to generate transgenic mice that develop AD-typical histopathology. These constructs differ with regard to the hAPP isoform used (APP695, APP751, APP770), the mutations introduced (Swedish, London, Dutch, Florida, Iberian, Arctic, Indiana, Iowa) and the promoter-driving transgene expression (PDGF-β, Thy1, CMV/β-actin, CaMKII-α, prion protein, neuronspecific enolase) . As a result, the onset and degree of amyloid pathology, neuronal and synaptic loss as well as changes in long-term potentiation, neuronal network activity, and the degree of cognitive impairment differ substantially between the animal models established. Thus, it is indispensable to thoroughly characterize each model with regard to cell type and brain region-specific transgene expression patterns. The Tg2576 mouse model was established two decades ago (Hsiao et al., 1996) and had been made broadly available to the scientific community. These mice overexpress hAPP695 carrying the Swedish double mutation KM670/671NL under control of the hamster prion protein promoter and develop amyloid plaques at 11-13 months of age predominantly in neocortex and hippocampus (Hartlage-Rübsamen et al., 2011;Hsiao et al., 1996;Kawarabayashi et al., 2001). Interestingly, synaptic spine loss in the hippocampal CA1 region (Lanz, Carter, & Merchant, 2003), changes in hippocampal long-term potentiation (Jacobsen et al., 2006), hypersynchrony of neuronal network activity (Shah et al., 2016), and cognitive impairment (King & Arendash, 2002) were reported at 4-6 months of age, indicating the importance of soluble neurotoxic Abeta assemblies or of intracellular APP C-terminal fragments (Xu, Fitzgerald, Nixon, Levy, & Wilson, 2015) present before amyloid plaque pathology.
Another important aspect to explain variability in neuropathology and behavioral disturbances among APP-transgenic mouse lines is the cell type and neuron type-specific expression of hAPP. Both, the hamster prion protein promoter-driving hAPP expression in FVB/N and Tg2576 mice (Hsiao et al., 1995;Hsiao et al., 1996) and transgenic hAPP expression itself were reported to be neuronspecific (Irizarry, Locascio, & Hyman, 2001;Irizarry, McNamara, Fedorchak, Hsiao, & Hyman, 1997). More specifically in brain, the endogenous hamster prion protein was found to be expressed by neurons of the hippocampus, septum, caudate putamen, thalamic nuclei, and dorsal root ganglia cells (Bendheim et al., 1992). In general, similar patterns of hAPP transgene expression and amyloid pathology are observed in Tg2576 mice (Hartlage-Rübsamen et al., Hsiao et al., 1996;Irizarry et al., 2001;Kawarabayashi et al., 2001). So far, to the best of our knowledge, no glial expression of hAPP in Tg2576 brain was demonstrated. However, using a novel antibody that differentiates between endogenous mouse and hAPP , we here report the presence of nonneuronal cells immunoreactive for hAPP in a brain region-specific manner. To identify the glial cell types expressing the transgene, double immunofluorescent labelings of hAPP with cell type-specific marker proteins were performed in brains of 3-and 18-month-old Tg2576 mice and analyzed by confocal laser scanning microscopy. In addition, primary neurons and glial cells derived from Tg2576 mice were analyzed for hAPP mRNA and protein expression by RT-qPCR and immunocytochemistry, respectively.
Our data indicate that not only neurons, but also astrocytes may contribute to amyloid pathology in the Tg2576 mouse model of AD.
| Experimental animals
Breeding pairs of the hAPP-transgenic mouse line Tg2576 were kindly provided by Dr. Karen Hsiao (University of Minnesota). Tg2576 mice were maintained on C57BL/6xSJL background in which transgene expression is driven by the hamster prion protein promoter (Hsiao et al., 1996). For the characterization of hAPP expression, transgenic Tg2576 mice and their wild type littermates were examined at the postnatal age of 3 and 18 months. Animals were housed in groups of 3-5 animals per cage and separated by sex with food and water ad libitum at 23 C for 12 hr day/12 hr night cycles in cages that contained red plastic houses (Tecniplast, Hohenpeißenberg, Germany) and shredded paper flakes to allow nest building. At the age of 6 weeks, the transgenicity of the animals was tested by polymerase chain reaction of tail DNA, as described elsewhere (Hsiao et al., 1996).
All experimental protocols were approved by Landesdirektion Sachsen, license T28/16 and all methods were carried out in accordance with the relevant guidelines and regulations.
| Tissue preparation
For immunohistochemistry, adult mice (3 and 18 months of age) were sacrificed by CO 2 inhalation and transcardially perfused with 50 ml of 0.9% saline followed by perfusion with 50 ml of 4% paraformaldehyde in PB (0.1 M; pH 7.4). The brains were removed from the skull and postfixed by immersion in the same fixative overnight at 4 C. After cryoprotection in 30% sucrose in 0.1 M PB for 3 days, coronal sections (30 μm) were cut at the level of basal forebrain (Bregma 1.10 mm) and hippocampus (−1.80 mm) on a sliding microtome and collected in 0.1 M PB containing 0.025% sodium azide.
The level of the basal forebrain was selected because of the prominent existence of nonneuronal cells, for example, oligodendrocytes and astrocytes in the corpus callosum white matter. The hippocampal coronal cutting level was chosen according to previously described findings of senile Abeta plaques in the hippocampus and neocortex of Tg2576 mice.
2.3 | Immunohistochemistry 2.3.1 | Single labeling hAPP immunohistochemistry All immunohistochemical procedures were performed on free-floating brain sections. Sections were pre-treated with 1% H 2 O 2 in 60% methanol for 1 hr to abolish endogenous peroxidase activity. Unspecific staining was blocked in TBS containing 5% normal donkey serum and 0.3% Triton-X100 before incubating the brain sections with the primary antibodies against hAPP (rat anti-hAPP, clone 1D1, 1:4) at 4 C overnight. The following day, sections were subsequently incubated with secondary, biotinylated donkey antibodies directed against rat IgG (Dianova; 1:1,000) for 60 min at room temperature, followed by the ABC method, which composed incubation with complexed streptavidin-biotinylated horseradish peroxidase. Incubations were separated by washing steps (3 × 5 min in TBS). Binding of peroxidase was visualized by incubation with 4 mg 3,3 0 -diaminobenzidine (DAB) and 2.5 μl H 2 O 2 per 5 ml Tris buffer (0.05 M; pH 7.6) for 3-5 min, resulting in brown labeling.
The specificity and applicability of the 1D1 antibody has been extensively characterized recently . It allows differentiation between endogenous mouse and transgenic human APP and binds an extracellular N-terminal hAPP epitope between amino acids 40 and 64. Thus, it does not detect Abeta peptides, but rather the hAPP and can be applied for immunocytochemistry, immunohistochemistry, immunoprecipitation and Western blot as well as FACS analyses . Importantly, using Western blot analyses, robust hAPP bands at approximately 100 kDa were present in brain tissue homogenates of APP-transgenic Tg2576, 3xTg and I5 mice, whereas bands were neither detected at molecular weights of Abeta peptides or C-terminal APP fragments nor in wild type mouse brain tissue (Supporting Information Figure S1).
| Confocal laser scanning microscopy
Confocal laser scanning microscopy (LSM 510, Zeiss, Oberkochen, Germany) was performed to allow allocation of hAPP to neurons, astrocytes, microglia, and oligodendrocytes, respectively in Tg2576 mouse brain. For Cy2-labeled cell type marker proteins (green fluorescence), an argon laser with 488 nm excitation was used and emission from Cy2 was recorded at 510 nm applying a low-range band pass (505-550 nm). For Cy3-labeled hAPP (red fluorescence), a heliumneon-laser with 543 nm excitation was applied and emission from Cy3 at 570 nm was detected applying high-range band pass (560-615 nm). Photoshop CS2 (Adobe Systems, Mountain View, CA) was used to process the images obtained by light and confocal laser scanning microscopy with minimal alterations to brightness, sharpness, color saturation, and contrast.
| Cultivation of Tg2576 primary neuronal and glial cells 2.5.1 | Primary neurons
The preparation and cultivation of neural primary cells was conducted according to a modified method from Löffner, Lohmann, Walckhoff, Walter, and Hamprecht (1986) as described in Hartlage-Rübsamen et al. (2015). Briefly, fetuses of Tg2576 mice rom gestation day 16 of Tg2576 mice were prepared, individually genotyped and cultured.
Typically, 50% of the offspring was hAPP-transgenic and 50% wild type. Neurons were dissociated into single cells by triturating the brains by means of a pipette and passing the cell suspensions through sterile nylon meshes (20 μm). Suspensions were then grown in seeding medium (DMEM/Ham's F-12 supplemented with 5% fetal horse serum (FHS) and 1% penicillin-streptomycin-neomycin (PSN) antibiotic mixture) in T25 cell culture flasks (for mRNA analyses) and on poly-L-lysine-coated glass coverslips in 24-well culture plates (for immunocytochemistry), respectively. The cells were cultured at 37 C in a humidified atmosphere containing 5% CO 2 . On the following day, the seeding medium was exchanged by neuronal medium (DMEM/ Ham's F-12 supplemented with 5% FHS, 1% PSN, 1% N-2 supplement, and 30% astrocyte conditioned medium). On day 3, cells in the T25 cell culture flasks were rinsed with ice-cold, sterile PBS, covered with 2.5 ml TRIzol reagent (Invitrogen, Carlsbad, CA) and stored at −20 C for cell lysis and later RNA extraction. The cells on coverslips in the 24-well plates were fixed with 4% PFA for 10 min and stored in TBS (0.1 M, pH 7.4) at 4 C for immunofluorescent labeling.
| Primary glial cells
For the cultivation of glia-rich primary cell cultures, newborn Tg2576 mice were sacrificed by decapitation and brains as well as tail tips for genotyping were collected. The brains were dissociated into single cells by triturating through pipettes of decreasing width and passed through sterile nylon meshes (150 μm). After resuspending the cells with 5 ml astrocyte medium (DMEM supplemented with 10% fetal bovine serum [FBS], and 1% PSN), 5 ml of each cell suspension were transferred into a T25 cell culture flask and 4 × 500 μl into a 24-well plate containing sterile glass coverslips, respectively. The cells were cultured at 37 C in a humidified atmosphere of 5% CO 2 . The medium was renewed on the third day after seeding and henceforth, once a week.
Between Culture Days 12 and 20, primary microglial cells were separated from the primary astrocytes by subjecting the suspensions to vibrations in a shaking incubator (SI500, Stuart) at 260 rpm and 37 C for 30 min. The cell suspensions were then transferred to new T25 cell culture flasks (5 ml) and 24-well plates (4 × 500 μl) containing sterile glass coverslips, respectively, and were cultured for 3 days under the conditions mentioned above.
For the separation of primary oligodendrocytes, the glia-rich primary cultures were incubated in 5 ml oligodendroglia medium (DMEM supplemented with 1% PSN and 1% N-2 supplement) in a shaking incubator at 220 rpm and 37 C for 18 hr. Afterward, the cell suspensions were split and filled in precoated T25 cell culture flasks (5 ml) and 24-well cell culture plates (4 × 500 μl) containing precoated, sterile glass coverslips, respectively. Primary oligodendrocytes were cultured for 3 days under the conditions mentioned above.
Subsequently, all primary glial cell cultures in the T25 cell culture flasks were rinsed with ice-cold, sterile PBS, covered with 2.5 ml TRIzol reagent (Invitrogen) and stored at −20 C for cell lysis and RNA extraction at a later time point. The primary glia cultures in the 24-well cell culture plates were fixed with 4% PFA for 10 min and stored in TBS (0.1 M, pH 7.4) at 4 C awaiting immunofluorescent labeling.
| Immunocytochemistry
To determine the cell type-specific expression of transgenic hAPP in primary neuronal and glial cells using the fixed primary cells grown on glass coverslips, double immunofluorescent labelings with 1D1 and cell type-specific antibodies were performed as described for immunohistochemistry on brain sections. Finally, the coverslips were airdried, embedded in entellan/toluene on microscopic slides and stored at 4 C in the dark.
In addition, double labeling with all cell type-specific antibodies was performed in each subculture of neuronal and glial primary cells, to examine the identity and purity of the respective primary cultures (see Supporting Information Figure S2).
| RNA isolation and RT-qPCR
To analyze the respective mRNA expression of transgenic hAPP in neuronal and glial primary cell cultures of transgenic Tg2576 mice, reverse transcription quantitative polymerase chain reaction (RT-qPCR) was performed. RNA of cultivated wild type and Tg2576 neurons, astrocytes, microglia, and oligodendroglia was isolated using the TRIzol RNA isolation protocol (Chomczynski & Mackey, 1995). The specificity of the PCR products was evaluated by melting curves which were generated by a rise in temperature to 84 C. Furthermore, the specificity was tested by separating all samples in agarose gel electrophoresis (2% agarose in TAE [1×] containing 0.01% GelRed) to control correct product length.
| RESULTS
The inspection of Tg2576 mouse brain slices labeled with the antibody 1D1 revealed the expression of hAPP by numerous neurons as described previously . To allow for a comprehensive evaluation of the hAPP immunoreactivity in Tg2576 mouse brain, we provide a link to access a serial hAPP staining, where every fourth coronal brain section was stained and scanned (http://cmbnnavigator.uio.no/navigator/filmstrip_viewer.html?publicOnly=true& entityType=block&entityId=4247#). This link to the virtual microscopy viewer Navigator3 allows interactive zooming and panning of high-resolution images. The series has been registered to the Allen brain mouse atlas available from: http://connectivity.brain-map.org/) using the QuickNII software tool (Puchades, Csucs, Ledergerber, Leergaard, & Bjaalie, 2017). This link shows the same sections with their custom atlas overlay, adjusted for angle deviations.
(http://cmbn-navigator.uio.no/navigator/filmstripzoom/filmstripzoom _lite.html?atlas=300000&series=4247&preview=ABAMousev2Previ The typical appearance of hAPP immunoreactivity in parietal cortex and corpus callosum of 3-and 18-month-old Tg2576 mouse brain is shown in Figure 1. Note the membranous labeling of cells with neuronal size and shape in young and aged Tg2576 mouse brain in parietal cortex and corpus callosum and the complete absence of hAPP immunoreactivity in wild type mouse brain sections. In addition to the neuronal labeling, hAPP was detected in smaller, branched nonneuronal cells as shown at high-magnification images in Figure 1 (arrows).
This glial-like hAPP immunoreactivity is more prominent in corpus callosum compared with parietal cortex, which might be due to the higher density of neuronal cell bodies in cortex. There are no obvious differences in neuronal and glial-like hAPP immunoreactivity between 3-and 18-month-old Tg2576 mice. However, hAPP immunoreactivity was found to be also present in amyloid plaque-associated dystrophic neurites of 18-month-old transgenic mice (Figure 1).
To identify the origin of the hAPP immunoreactive glial processes, double immunofluorescent labelings with cell type-specific marker proteins were performed and evaluated by laser scanning microscopy.
In corpus callosum and neocortex of 3-month-old Tg2576 mice without amyloid pathology, hAPP frequently co-localized with the neuronal marker NeuN, but neither with the microglial marker Iba1 nor with the oligodendrocyte marker GSTπ (Figure 2). The neuronal expression of hAPP was validated by double labeling of hAPP with the neuronal marker MAP2 (Supporting Information Figure S3A). However, a substantial proportion of GFAP-immunoreactive astrocytes in corpus callosum and some astrocytes in cortex were found to be hAPPimmunoreactive ( Figure 2). Also in 18 month-old Tg2576 mouse brain, hAPP was localized to neurons and to astrocytes in corpus callosum and cortex, but not to microglia and oligodendroglia ( Figure 3). Interestingly, in corpus callosum both resting astrocytes distant from amyloid plaques and reactive astrocytes in proximity to amyloid plaques displayed hAPP immunoreactivity, whereas in cortex only nonreactive astrocytes close to or distant from amyloid plaques expressed hAPP immunoreactivity ( Figure 3). The APP expression by astrocytes was also confirmed by using two alternative antibodies (6E10 and 22C11) in combination with GFAP antibodies (Supporting Information Figure S3B). However, it should be noted that endogenous mouse APP might have contributed to these signals as 6E10 and 22C11 do not differentiate as well between mouse and transgenic hAPP as does the antibody 1D1.
The robust astrocytic hAPP immunoreactivity raises the question whether hAPP produced by and released from neurons is taken up by astrocytes or whether it is expressed by these glial cells. To address this issue, primary neuronal and glial cell cultures from Tg2576 mice were established and analyzed for hAPP mRNA and protein expression. Brains of newborn mice were prepared, genotyped, and individually cultured. With regard to genomic DNA, approximately 50% of the offspring was hAPP transgenic and 50% wild type. Figure 4a shows representative examples of hAPP PCR products from five different neuronal, astrocytic, microglial, and oligodendroglial cultures of wild type and Tg2576 mice separated on agarose gels (left) and the CycAnormalized quantification of hAPP PCR products in the respective cell types (right). In neurons and astrocytes derived from transgenic mice, significant hAPP mRNA expression was detected by RT-qPCR. In contrast, in microglial and oligodendroglial cultures of hAPP transgenic mice, very low hAPP transcript levels were detected. In neuronal and glial cultures established from wild type mice, no hAPP mRNA was detected, demonstrating the specificity of the primers for hAPP ( Figure 4a).
Immunocytochemical labeling of neuronal and glial cultures of wild type origin with the hAPP-specific antibody did not result in any labeling (not shown). Neurons and astrocytes of transgenic mice were found to be immunoreactive for hAPP (Figure 4b). This is in agreement with immunohistochemical labelings of brain slices, and FIGURE 1 Immunohistochemical detection of hAPP in corpus callosum and cortex of young and aged Tg2576 mice (tg) and wild type littermates (wt) as indicated. Immunoreactivity for hAPP is absent in wild type brain sections demonstrating the specificity of the 1D1 antibody. Although the majority of the 1D1 labeling arises from neurons, glial structures (arrows) are also hAPP-immunoreactive. This is displayed at higher magnification in the bottom images (arrows) supportive of de novo synthesis of hAPP by astrocytes, rather than uptake of neuron-derived hAPP. In microglial and oligodendroglial cultures, no hAPP immunoreactive cells were detected. The minor hAPP mRNA levels detected in these cultures may arise from the presence of some astrocytes (Supporting Information).
| DISCUSSION
The Tg2576 mouse model developed by Hsiao et al. (1996) is one of the most frequently used animal models to investigate aspects of amyloid pathology, accompanying gliosis as well as functional and behavioral consequences of the formation of pathogenic Abeta assemblies. To completely understand pathogenic mechanisms in this mouse model, a thorough analysis of the brain region and cell type-specific transgene expression is indispensable. A recently developed rat monoclonal antibody differentiating between mouse and human APP allowed for such an analysis and had already been used by us to demonstrate alterations in transgene expression patterns of different hAPP transgenic mouse lines and rats ).
Subsequently, we were able to reveal a spatial correlation between transgene expression and the formation of amyloid plaques (Hartlage-Rübsamen et al., 2018).
Here, in serial sections immunohistochemically labeled for hAPP, we observed stained cellular structures that did not display neuronal morphology. Glial hAPP expression was already mentioned for FVB/N mice expressing the same promoter or hAPP695s that were constructed, but the glial cell type and brain region-specific occurrence were not reported (Hsiao et al., 1995). Therefore, we here analyzed the cell type-specific hAPP expression in young and aged Tg2576 FIGURE 2 Cell type-specific expression of hAPP in brains of 3-month-old Tg2576 mice. The hAPP in corpus callosum and cortex was visualized using the antibody 1D1 and detection with secondary Cy3-conjugated antibodies (red fluorescence) in combination with marker proteins for neurons (NeuN), astrocytes (GFAP), microglia (Iba1), and oligodendrocytes (GSTπ) detected with Cy2-conjugated secondary antibodies (green fluorescence). Note the frequent co-localization of hAPP with neurons and astrocytes (arrows) mouse brain. By immunohistochemical double labelings of hAPP and cell type-specific marker proteins, we demonstrate the expression of hAPP by astrocytes in the corpus callosum and to a much lesser extent in neocortex of young Tg2576 mice without amyloid plaque pathology. In contrast, neither microglial cells nor oligodendrocytes were found to express hAPP.
To rule out that glial immunoreactivity arises from uptake of hAPP produced by neurons, primary glial cultures from Tg2576 offspring were analyzed for hAPP mRNA and protein expression. These experiments confirmed gene expression of hAPP in astrocytes but not in microglial cells and oligodendrocytes.
Our results are corroborated by another study which recently demonstrated that primary cultured astrocytes from Tg2576 mice express transgenic hAPP and secrete Abeta peptides into the culture medium decreasing the number of readily releasable synaptic vesicles and excitatory synaptic transmission in co-cultured neurons (Katsurabayashi et al., 2016). Moreover, the expression of hAPP in astrocytic Tg2576 cultures and its processing into amyloidogenic fragments were found to be stimulated by addition of oligomeric and fibrillary Abeta preparations, most likely through the stimulation of astrocytic BACE1 expression (Zhao, O'Connor, & Vassar, 2011). However, in brain tissue of aged Tg2576 mice with robust amyloid pathology, we did not observe an induction of hAPP expression in reactive astrocytes in proximity to Abeta plaques. The reason for this discrepancy is not known but could be based on the application of defined Abeta preparations in cell culture experiments, whereas, in brain tissue, astrocytes are exposed to a broad spectrum of hAPP cleavage products and Abeta peptide variants. Cell type-specific expression of hAPP in brains of 18-month-old Tg2576 mice. The hAPP in corpus callosum and cortex was visualized using the antibody 1D1 and detection with secondary Cy3-conjugated antibodies (red fluorescence) in combination with marker proteins for neurons (NeuN), astrocytes (GFAP), microglia (Iba1), and oligodendrocytes (GSTπ) detected with Cy2-conjugated secondary antibodies (green fluorescence). Note the frequent co-localization of hAPP with neurons and astrocytes (arrows) and the absence of hAPP immunoreactivity in amyloid plaque-associated reactive astrocytes in cortex Nevertheless, the stimulus-dependent capacity of reactive astrocytes to express APP (Avila-Muñoz & Arias, 2015;Siman, Card, Nelson, & Davis, 1989) as well as the APP processing enzyme BACE1 (Hartlage-Rübsamen et al., 2003) and components of the γ-secretase complex (Nadler et al., 2008) indicate a potential astrocytic contribution to amyloid pathology. Indeed, intracellular Abeta was detected in astrocytes of AD cortex (Akiyama et al., 1999).
As hAPP expression in Tg2576 mouse brain is driven by the hamster prion protein promoter, its expression pattern reflects to a large extent on this promoter activity and may differ drastically from that of the endogenous mouse APP promoter. Hamster prion protein mRNA and protein were reported to be predominantly expressed by neurons, but also by astrocytes, particularly in corpus callosum and optical nerve of hamster and rat (Moser, Colello, Pott, & Oesch, 1995) as well as in cultured mouse primary astrocytes (Lima et al., 2007). The translation of the endogenous prion protein mRNA in astrocytes was shown to be robust, but not upregulated during reactive astrogliosis in mouse brain (Jackson, Krost, Borkowski, & Kaczmarczyk, 2014), which might partly explain the lack of hAPP induction in reactive astrocytes in proximity to Abeta plaques of Tg2576 mice. In neuronal and astrocytic and to a much lesser extent in microglial and oligodendroglial cultures of Tg2576 mice hAPP mRNA is detected, whereas in the corresponding cultures of wild type mice no hAPP mRNA was present. This is consistent with the specificity of primer pairs used for hAPP versus mouse APP. On the left hAPP PCR products separated on agarose gels from different cell types are shown. The diagram on the right shows the CycA-normalized quantification of hAPP mRNA by RT-qPCR in the respective cell types. (B) Primary neuronal and astrocytic cultures of Tg2576 mice display hAPP immunoreactivity (arrows), which is absent from microglial and oligodendroglial Tg2576 cultures Also in human brain, nonneuronal expression of APP mRNA has been demonstrated (Golde, Estus, Usiak, Younkin, & Younkin, 1990) and multiple pro-inflammatory cytokines have been shown to upregulate APP expression and Abeta secretion in human astrocyte cultures (Blasko et al., 2000;Brugg et al., 1995). Astrocytic cell lines and human astrocytes respond with increased APP expression when exposed to TGFβ (Amara, Junaid, Clough, & Liang, 1999;Burton, Liang, Dibrov, & Amara, 2002;Gray & Patel, 1993), a cytokine which is associated with AD pathogenesis (Luedecking, DeKosky, Mehdi, Ganguli, & Kamboh, 2000). This implies that TGFβ increases Abeta levels in the AD brain by inducing APP upregulation in astrocytes (Frost & Li, 2017). In the neuroinflammatory context of AD, reactive astrocytes express higher levels of APP than resting astrocytes and, therefore, could produce more Abeta and contribute to amyloid pathology to a greater extent. Thus, with regard to the induction of hAPP expression by reactive astrocytes, the Tg2576 mouse model clearly differs from the human condition in AD.
Another important aspect of astrocytic hAPP expression in Tg2576 brain is the more prominent hAPP immunoreactivity in callosal compared with neocortical astrocytes (see also filmstrip images at Section 3). It is well-known that astrocytes compose the most abundant and diverse type of glial cells in the brain (Lundgaard, Osório, Kress, Sanggaard, & Nedergaard, 2014). During development astrocytes adapt during development to the needs of the surrounding tissue which could be a reason for the different density of astrocytes in different brain regions (Emsley & Macklis, 2006;Wang & Bordey, 2008). It is unknown which factors are decisive for the adaption of the specific morphology, but there are distinct morphological differences between astrocytes in grey and white matter which result in two prominent types: protoplasmic and fibrous astrocytes, respectively. White matter astrocytes have smaller cell bodies and their processes are aligned with myelinated fibers. This diversity of morphology is accompanied by different protein expression profiles.
Astrocytes also express glutamate transporters for clearing the extracellular space of this neurotransmitter. The synapse densitynormalized glutamate transporter activity is significantly higher in white matter than in grey matter (Hassel, Boldingh, Narvesen, Iversen, & Skrede, 2003). In addition, the capacity for glutamate metabolism to glutamine is higher in white matter than in grey matter astrocytes (Goursaud, Kozlova, Maloteaux, & Hermans, 2009) supporting the hypothesis that glutamate clearance might be more important in white matter to avoid excitotoxicity due to glutamate overload. In the Tg2576 mouse model, an age-dependent elongation of magnetic resonance transverse relaxation time (T2) values in the corpus callosum but a significant T2 decrease in grey matter cortex and hippocampus was reported (Kara et al., 2015). This noninvasive measure is indicative of loss of corpus callosum integrity and was confirmed by histological analyses of demyelination, gliosis and amyloid-plaque deposition in the corpus callosum (Kara et al., 2015).
In addition to neurons and glial cells, cultured vascular muscle cells from brains of Tg2576 mice were shown to produce hAPP and to deposit Abeta intracellularly, indicating that cerebrovascular amyloid in Tg2576 mice is at least partially of nonneuronal origin and that vascular smooth muscle cells are a source of these amyloid deposits (Frackowiak, Miller, Potempska, Sukontasup, & Mazur-Kolecka, 2003). This is also consistent with the expression of endogenous prion protein by muscle cells (Bendheim et al., 1992).
Together, we here demonstrate that hAPP expression in brains of Tg2576 mice is not restricted to neurons but has a significant astrocytic component, particularly in corpus callosum white matter. These astrocytes may, therefore, contribute to amyloid pathology in Tg2576 mouse brain making them targets for experimental pharmacological intervention studies. Interestingly, astrocytes in the APP/PS1 AD mouse model were also shown to be implicated in disease mechanisms by contributing to neuronal-glial network dysfunction, which can be ameliorated by P2Y1 receptor antagonists (Reichenbach et al., 2018). Our data underline the importance of investigating cell typespecific transgene expression in hAPP transgenic mouse lines. | 6,648.6 | 2018-11-28T00:00:00.000 | [
"Biology"
] |
A magneto-gravitational trap for studies of gravitational quantum states
Observation time is the key parameter for improving the precision of measurements of gravitational quantum states of particles levitating above a reflecting surface. We propose a new method of long confinement in such states of atoms, anti-atoms, neutrons and other particles possessing a magnetic moment. The earth gravitational field and a reflecting mirror confine particles in the vertical direction. The magnetic field originating from electric current passing through a vertical wire confines particles in the radial direction. Under appropriate conditions, motions along these two directions are decoupled to a high degree. We estimate characteristic parameters of the problem, and list possible systematic effects that limit storage times due to the coupling of the two motions.
Observation time is the key parameter that controls the precision. We propose a new method of long confinement of neutral particles possessing a magnetic moment in GQSs in a Magneto-Gravitational Trap (MGT). A key feature of the new method is the combination of the vertical confinement by gravity and quantum reflection from a mirror and the radial confinement by the magnetic field of a vertical linear current. Both confinement principles are well established [3,[49][50][51][52][53]. However, the combination of the two approaches seems to be challenging as the magnetic field might produce large false effects thus making impossible any precision studies of GQSs. We show that one can achieve small mixing of radial and vertical motions and thus can control false effects to an acceptable degree.
The most typical trap for cold atoms is the Magneto-Optical Trap (MOT) [54]. It combines magnetic trapping and optical cooling. Such a trap was used in the groundbreaking experiments on Bose-Einstein Condensate (BEC) in a gas of ultracold atoms [55,56]. Since there is no maximum magnetic field in 3D, low-field-seeking (lfs) atoms are trapped in MOTs in the field minimum. Since they are not in the lowest internal state, any disturbance (collisions of atoms, magnetic field inhomogeneities) can flip their spin. This magnetic relaxation to the untrapped state is the main mechanism of losses from MOTs. However, it is absent for traps for high-field-seeking (hfs) atoms, which may therefore allow the trapping of clouds of atoms of much higher density. Dynamic magnetic traps for hfs atoms based on rapidly varying electromagnetic fields have been proposed [79] but they are typically shallow.
In contrast, the MGT provides a deep trapping potential and is especially suitable for the lightest atoms: H and D.
For H , a trap barrier height of ∼ 0.5 K can be easily realized, which allows trapping of a large number of atoms at temperatures of ∼ 100 mK. Optical cooling methods based on the 1S − 2P or two-photon 1S − 2S transitions can be used down to the recoil limit of ∼ 2 mK. Further cooling of the trapped gas can be done using evaporation over the trap barrier. Atomic collisions in the high-density regime provide high equilibration rate leading to substantially lower temperatures.
Since the general principles of operation of the MGT are the same for different particles, we first describe them in a general way in Sect. 2, and coupling of vertical and radial motions of the particles in the MGT is analyzed in Sect. 3. However, a specific implementation of the MGT and experimental methods (type of mirror, electric current, particle loading/unloading and storage time, size, temperature, specific interferometry and spectroscopy method, etc) depends on the type of particle (neutron, atom, antiatom, etc) as well as their velocity spectrum. In specific examples, we will indicate the reason why the respective method was chosen. The feasibility of loading/unloading the MGT as well as examples of precision measurements of GQSs in the MGT are presented in Sect. 4.
We focus on the properties of the MGT and leave the crucial topics of specific implementations of loading/unloading the trap and spectroscopy/ interferometry of GQSs for later publications. We will only show the feasibility of their implementation and their compatibility with the operation of the MGT. Note that general methods of spectroscopy/interferometry of GQSs have been developed in detail [40,43,46,57,59], and that fast changes of the electric current can be used to load/unload the MGT in case of atoms and anti-atoms, while super-fluid 4 He in the trap exposed to the flux of cold neutrons allows producing ultra-cold neutrons (UCNs) directly in the trap [65]. Figure 1 shows a scheme of the MGT. The mirror and gravitational field confine particles vertically. The interaction of a particle's magnetic moment with the vertical electric current and the centrifugal acceleration confine particles radially.
Description of the trap
Under appropriate conditions that will be discussed in the paper, in particular for a sufficiently long wire, forces Fig. 1 On the right side, a schematic representation of the Magneto-Gravitational trap (MGT) for neutral particles with a magnetic moment is shown. The linear gravity potential and quantum reflection from the horizontal mirror (blue) confine particles in the vertical direction. The particle's wave functions in four lowest GQSs are shown in the insert on the left as a function of the height above the mirror. The magnetic field of a vertical linear wire carrying an electric current I and the centrifugal acceleration confine particles in the horizontal plane. The magnetic field is inversely proportional to the distance to the wire. The particle adiabatically moves along closed elliptical trajectories (red dashed lines around the wire), similar to the orbital motion of planets around the sun acting on particles in the vertical and horizontal directions are almost orthogonal so that the corresponding motions are decoupled to a high degree. The vertical motion of particles with lowest vertical energies is governed by quantum mechanics while the horizontal motion can be considered classical in realistic conditions.
Motion of a magnetic dipole in the field of a linear current
The quantum motion of a magnetic dipole in the magnetic field of linear current can be described analytically [66,67].
Here, an adiabatic approximation, which allows classical treatment, is sufficient for our purpose. It relies on the hierarchy of characteristic times associated with the fast spin motion and the slowly varying magnetic field in a frame comoving with the particle in the plane perpendicular to the current. In the MGT, the particle trajectories follow Keplerlike orbits around the wire. For the simplest case of a circular orbit with radius r , the rotation frequency Ω of a magnetic dipole μ is given by: where I is the electric current, μ 0 the magnetic permeability of vacuum, m the particle mass, and r the radial distance from a given point to the wire carrying the current. To meet the adiabaticity condition, the angular velocity of rotation has to be much smaller than the Larmor frequency ω L of the spin precession, Ω ω L = γ B(r ), with γ being the gyromagnetic ratio for the magnetic dipole under consideration.
Radial confinement will occur in the Coulomb-like potential: with the Bohr energy and the Bohr radius of the particle The Coulomb potential depth reaches a maximum at the wire surface. The value of the potential strength depends on the type of particle; it is roughly three orders of magnitude larger for electron spins than for nuclear spins, or for the neutron spin. Therefore, one needs much stronger magnetic fields and electric currents for trapping n or 3 He. Let's evaluate the feasibility of achieving magnetic trapping for n with their small magnetic moments. The current density in the wire is i 0 = I /π R 2 , where the wire radius R is a free parameter. A typical value for the most common superconductors based on NbTi is i 0 ≈ 10 5 A/cm 2 in a magnetic field of 3 T and a temperature of 4.2 K. With this current density constraint, the potential depth as a function of the wire radius is It looks that increasing the wire radius is beneficial for making a stronger magnetic trap. This is, however, only true until we reach another constraint associated with the maximum critical current density. Then the current density has to be reduced. This effect depends on the type and manufacture of the wire and the temperature. For the values given above and a wire radius of 0.5 cm, the field near the wire surface is B max ∼ 3 T, which is close to the critical value. Therefore, the specified maximum field and the trap depth are reached at a wire radius of 0.5 cm; further increase of the wire radius would not help. The field strength of 3 T corresponds to the trap depth of ∼ 2 mK ∼ 1.5 × 10 −7 eV for n, values typical for ultra-cold neutrons [70,71]. Further improvement can be obtained by a decrease of temperature of the wire to 1.5-1.7 K or the use of a superconductive wire with a larger ratio of NbTi/Cu. This may increase the trap depth by a factor of 2-3, thus making magnetic trapping of the full UCN energy range quite realistic.
Using the MGT for atoms with unpaired electron spin (H and D) in a high field seeking state should be much easier because of their much larger magnetic moments. We can reduce the current by three orders of magnitude, or, keeping the same current density, decrease the wire radius, or trap atoms at higher temperatures. Reducing the wire radius, one should take care of not violating the adiabaticity condition for the atomic motion at the smallest distances to its surface Table 1 Typical parameters (orbit radius r , energy E, rotation frequency Ω, velocity v) of a particle (hydrogen atom, deuterium atom, neutron) state bound in the magnetic field of a linear current I Table 2 Eigenvalues λ i (roots of Airy function), gravitational energies E i , characteristic transition frequencies ν i and classical turning points z i for neutrons, hydrogen and anti-hydrogen atoms in the Earth's gravitational field above a mirror. Index i stands for the quantum state number when r ≈ R. Keeping the same constraint of the fixed current density, the Larmor precession frequency scales as ω L ∼ r , while the orbital rotation frequency scales as Ω ∼ 1/ √ r . However, even having a micrometer radius wire still does not violate adiabaticity, and such a trap could be realized for H and D. Table 1 presents typical parameters of various particles in the MGT.
Gravitational quantum states
The particle is confined vertically by the gravitational field and a mirror. This motion is quantized and described by GQSs. Such states were predicted [1] and discovered [3] for n, and predicted forH and H atoms [41]. All details about the physical properties of such states can be found in the cited papers; here, we give only a summary of the main properties of these states in Table 2 for the reader's convenience.
The characteristic energy, spatial and time scales of such states are given by: The surface of the mirror should be flat enough, and the roughness should be small enough to provide specular reflection of the particles. The material of the mirror for n must have a positive neutron-optical potential and a low loss coefficient. Neutron-optical potential arises due to the coherent interaction of the neutron with nuclei in matter and was introduced by Enrico Fermi in [80]. Most materials satisfy these conditions. Quantum reflection of (anti)atoms from the surface can be provided by their interaction with the van der Waals/Casimir-Polder potential of the surface. The mirror materials for (anti)atoms is chosen so as to increase the probability of quantum reflection and/or provide high control of this interaction. Since the quantum reflection of (anti)atoms occurs without their direct contact with the surface, the requirements for the mirror material are the same for atoms and antiatoms. High reflection is provided by the surface of liquid He, and this process is studied for instance in Ref. [68].
Coupling of vertical and radial motions
Vertical and horizontal motions of particles are decoupled only to a finite precision. Below we consider phenomena which can mix them.
The effect of wire non-verticality and mirror non-horizontality
Although the precision of setting the mirror and wire directions can be high, the magnetic field would slightly deviate from Eq. (2), in particular due to the environment. A vertical field gradient could result in false effects. To estimate these, we derive an expression for the magnetic field in the vicinity of a particle's circular trajectory.
Here ρ is the distance between the wire and a particle, ϕ is the particle angle in the horizontal plane, z is the vertical coordinate of a particle above the mirror plane, α is the angle between the wire and vertical direction (in x-z plane). Below, we show that the ratio z/r 1 is small for all states of interest. Taking into account the smallness of the deviation of α from the vertical direction, the potential energy is: Extra acceleration and the gravitation transition frequency shift in the magnetic field of a linear current due to the non-verticality of wire alignment (hΔω 21 The corresponding vertical component of acceleration due to the wire non-verticality is: Due to the periodicity of the function cos(ϕ(t)), a gravitational energy correction due to the extra acceleration in the gradient magnetic field vanishes in the first order of the small parameter a 0 /g: The first non-vanishing correction to the unperturbed gravitation energy level E i = mgz i appears in the second order: In Table 3, we present typical values of a 0 and the correction to the frequency shift between second and first GQSs for different orbits of a trapped particle.
As one can see in Table 1, the particle rotation frequencies on orbits in the trap are small compared to the transition frequencies between low GQSs. Thus, no resonance effects could be found.
An even smaller systematic effect is associated with the broadening of the peak of the resonance transitions between GQSs. It would be caused by transitions between GQSs due to the non-verticality of the wire, which would decrease storage times in GQSs. An additional suppression factor arises from the fact that under the adiabaticity condition, which is valid for all practically interesting cases, there are no transitions between GQSs caused by the wire non-verticality.
Effect of a non-vanishing vertical gradient of the magnetic field
A non-vanishing vertical gradient of the magnetic field could be due to a finite trap size. It might produce sizable effects which have to be compensated to a maximum degree. A residual gradient would result in a transition frequency shift proportional to the current. Thus, it could be extracted from experimental data by extrapolating the frequency shift to zero current.
Effect of vibration of the wire and mirror
The wire vibration effect can be estimated by assuming that the angle between the wire and the vertical direction is a periodic function of time, which is changing with an oscillation frequency: Then the perturbing potential is: Though its amplitude is small, as established above, oscillations of the perturbing potential could appear to be in resonance with the transition frequency between GQSs. In this case, the transition probability P ik between initial (i) and final (k) GQSs is given by the Rabi-type expression [69]: Here, the transition rate is: The time T ik needed for the complete transition from state i to state k is Characteristic times T 12 are given in Table 4. A similar effect of parasitic transitions between GQSs caused by vibrations of the mirror was studied theoretically and experimentally in Ref. [64]. The main conclusion of this work, as well as the estimations in this paper, is that one has to design the experimental setup in such a way as to suppress the vibrations of its components with frequencies close to the frequencies of the resonant transitions between GQSs. Moreover, one should measure the vibration spectrum of the wire and the mirror and make sure that there are no dangerous frequencies in the spectrum. Otherwise, the probability of parasitic transitions can be significant.
Effect of Earth's rotation
The effect of Earth's rotation results in an additional Coriolis acceleration that a moving particle acquires in the noninertial frame. The vertical component of such an acceleration a c of a particle trapped on a circular orbit is given by: Here, Ω E = 7.27 × 10 −5 rad/s is the Earth's rotation frequency around its axis, Θ is a latitude of geographic position (45 o in Grenoble). In case of an H atom trapped in a circular orbit with radius r = 0.5 cm and current I = 10 A, the acceleration is a c = 7.6 × 10 −5 m/s 2 . After averaging over the trajectory, the first order Coriolis effect is canceled due to the periodic cos(ϕ) factor. The second order Coriolis effect is well below the accuracy of our experiment and can be neglected.
Feasibility of precision studies of GQS in the MGT
The goal of this chapter is to show the principle feasibility of precision studies of GQSs in the MGT. Concrete measuring schemes would be developed in other papers. The prove of feasibility can be achieved by fulfilling the following conditions: (a) long storage of neutral particles in GQSs, (b) the ability to load and unload the trap, (c) the possibility of spectroscopy and interferometry of particles in the MGT.
By long storage times of particles in GQSs we mean, in this context, the times much longer than the characteristic time of formation of GQSs as defined in Eq. (5) that is equal to ∼ 1 ms. In this paper, we limit the analysis of storage times to only systematic effects associated with the MGT, namely, all effects mixing horizontal and vertical motion of particles in the MGT. They are presented in Sect. 3. The other systematic effects are not specific to the presented method of storing particles in the MGT and are analyzed in detail in our previous works, as well as in works of other groups working in this field. Among those systematic effects can be noted the frequency shifts associated with the van der Waals/ Casimir-Polder interaction [58] or with the excitation of resonant transitions [59,60], finite resolution of GQSs associated with the effect of absorber/scatterer [6,61,62], quenching of GQSs by surface charges [63], parasitic transitions between GQSs induced by vibrations of the mirror [64], various effects associated with waviness and roughness of the mirror surface, parasitic magnetic fields, thermal effects and so on.
Convenient options for loading and unloading the MGT are presented in Sect. 4.1. They are different for H /H and n due to the large difference of their magnetic moments and the methods of their production.
In Sect. 4.2, we extend the method of spectroscopy of GQSs to H atoms: it is compatible with the MGT operation.
In Sect. 4.3, we propose a method of interferometry with GQSs ofH compatible to the MGT operation.
Loading/unloading the MGT
The MGT can be loaded with H ,H , for instance, by rapidly switching the wire current. This is technically feasible due to the relatively large magnetic moment (compared to the neutron magnetic moment), thus, the relatively small electric currents needed to trap sufficiently slow H ,H . For an atom velocity of ∼ 1 m/s and a trap size of ∼ 10 −2 m, the ∼ 10 A current switching time should be significantly shorter than ∼ 10 ms (see Table 1). The trap phase-space volume is estimated by assuming that all atoms with a velocity v lower than the escape velocity v c for a given atom-wire distance would be captured. Then, the number of captured atoms is: Here, r 2 , r 1 are maximum and minimum radius of the trap, f 0 is the average density of atoms in a phase volume which is characterized by the maximum velocity v max = √ μ 0 μI /(π mr 1 ) and the spatial size 2π(r 2 2 − r 2 1 ). In the case of n, rapid switching of the wire current is not feasible because of very large current values. In contrast, n offer another particularly elegant method of loading the MGT. It consists of producing UCNs directly in the trap filled in with superfluid 4 He [65]. This method provides the highest phase-space densities of > 10 3 n/cm 3 and allows avoiding UCN losses associated with their extraction and transportation. A few dozen small mirrors can be superimposed in one experimental setup (within the limits of the height of a cold neutron beam). For the typical parameters of the existing intense cold neutron beams [76,77], a conservative estimation of the number of UCNs that can be trapped simultaneously in one GQS is 10 −2 − 10 −1 , which roughly corresponds to count rates in the existing GQS experiments. The main difference of the proposed method is that it can provide a much longer time of observation of n GQSs, comparable with the neutron lifetime.
Other methods include adiabatically changing the wire current or an additional uniform magnetic field in certain mirror geometries, spin-flip by a radio-frequency magnetic field, and various mechanical devices.
Resonance spectroscopy
Methods of spectroscopy of GQSs [35] have been developed in detail theoretically and experimentally. In the case of n, it was realized by the qBounce collaboration using excitation by mechanical vibrations of the bottom mirror [37,38,78]. Non-resonant transitions to a set of GQSs were used by the Tokyo collaboration [39]. GRANIT currently measures resonant transitions induced by a periodically changing magnetic field gradient [57,59]. ForH , resonant spectroscopy of GQSs was discussed in [59]. The theoretical formalism and experimental methods can be easily extended to H . Taking into account the large magnetic moment of the H atoms and the need to measure at cryogenic temperatures, we consider the method of magnetic excitation of resonant transitions between GQSs of H atoms to be most appropriate.
Due to the large spatial extension of GQSs and the magnetic moment of H , one can observe in the MGT the resonant changes in spatial density of the particles localized in GQSs above a mirror as a function of the oscillating frequency of an additional vertical magnetic field gradient. The resonant transitions result from the interaction of the magnetic dipoles of the trapped particles with the field gradient. The changes in spatial density are enhanced when the oscillating frequency coincides with the transition frequency ω ik = (E k − E i )/h. The additional field is assumed to have the form: Here, B 0 is the amplitude of the static field component, β is the oscillating magnetic field gradient; this equation is valid for z values much smaller that the size of the magnetic system. The oscillation frequency is determined by the transition frequency between the lowest GQSs; its typical value is ω ∼ 10 3 rad/s. For simplicity we omitted any radial components of the magnetic field, which do not influence the dynamics in the system. The static axial component B 0 provides a nonzero z component of the atomic magnetic moment inducing the force in the oscillating field gradient. The strength of this component should not exceed the characteristic strength of the trapping field. Such a field configuration can be provided with a pair of coils in the anti-Helmholtz configuration arranged around the trap. First, H are prepared in the ground GQS. This is achieved by absorbing highly excited states using a scatterer/absorber plate lowered down to a certain height H a above the mirror surface. Adjusting the H a value close to the characteristic delocalization height of the ground state l 0 λ 1 < H a (Eq. 5) will effectively remove all other GQSs from the trap. This technique was used for in-beam spectroscopy of GQSs of n [6,43,57,74]. Vertical motion of the absorber over ∼ 10 μm can be achieved with piezo-actuators.
At the second stage, the absorber is lifted up by 20-30 μm to allow trapping a few exited GQSs. An oscillating vertical gradient of the magnetic field is applied at a frequency ω. The field induces transitions from the ground to excited GQSs with a probability, which depends resonantly on the oscillating frequency Third, the number of H remaining in the ground GQS can be measured by placing the absorber down again and eliminating H in excited states. The numbers of H in the ground state before the excitation and after can be measured by releasing them from the trap to a detector.
The probability to excite H , initially prepared in the ground state, is given by the following expression: Here, t is the time of interaction of H in the ground state with the oscillating magnetic field gradient, and C k (t) is the population of quantum state k at time t. Assuming that the field frequency is close to the resonance ω 1k and the time is sufficiently large, t h/ω1k, one can get a simplified expression for the probability averaged over a periodh/ω 1k : with Γ = ω g Im λ i the width of a quasi-stationary state.
The resonance frequency value corresponds to a maximum loss of H from the ground state as a function of the applied magnetic field frequency. Similarly, it is possible to measure transitions between any other pairs of low GQSs.
Interferometry of quasi-stationary states
Interferometric methods of observation of GQSs are based on the use of position-sensitive or time-resolving detectors. Such detectors are routinely used for n, and respective exper-imental methods have been developed in detail in studies of neutron whispering gallery [40]; we do not reproduce these details here. For H , efficient detectors of this type still have to be developed; therefore, we cannot yet propose a reliable experimental scheme for H . Here, we propose a new interferometric method of measurement of simultaneously several GQSs ofH . It combines the efficient use ofH produced, easy implementation and high precision. The method is based on the observation of a time distribution of detection events at a given z-location, or a vertical position distribution of detection events at a given time. An interference pattern can be observed if a pure initial state or a superposition of states is shaped.
Here, we analyze an example of the time evolution of an initially prepared wave-packet ofH with a well-defined initial location z 0 above the mirror: σ is the spatial size of the initial state.
We study the vertical motion and ignore the classical radial motion in the following.
Evolution of the wave-function Ψ (z, t) is given by the following expression: Here K (z, z, t) is the particle propagator in GQSs: ω g is a characteristic gravitational frequency: λ i is an eigenvalue (complex) of GQS, ξ i = z/l g − λ i . In the limit σ l g , the following simplified expression for the wave function is valid: Here ξ 0 i = z 0 /l g − λ i . Due to the weak annihilation ofH on the surface, the mirror plays the role of a detector. The corresponding rate of disappearance ofH is given by the following expression [41]: The interference terms in expression (28) are controlled by the frequencies ω i j of transitions between GQSs. Measurements of the transition frequencies give access to the characteristic gravitational energy value ε g (5).
Sudden mirror drop
The vertical momentum (velocity) distribution F( p) provides information about spatial properties of GQSs. A method to measure it forH can consist in a prompt downward shift of the mirror. The mirror acceleration a should significantly exceed the free fall acceleration g. The vertical shift should significantly exceed the characteristic gravitational length scale l g (5) and can be achieved using a piezoactuator. Then, the initial superposition of GQSs starts falling freely at time t 0 down to the mirror (detector) installed at a distance H d below. One can use the sudden approximation to describe this motion. A fraction of theH annihilate; the remainingH are reflected from the surface of the mirror due to quantum reflection and continue bouncing until their full annihilation.
The initial momentum distribution F( p) is mapped into the time distribution of free fall events [45,46]: Here Φ is the flux ofH falling on the annihilation surface, t f = √ 2H d /g is the classical free fall time. The momentum distribution F( p, t 0 ) (t 0 is the moment of the sudden mirror drop) can be evaluated by Fourier transform of the distribution (27) taken at time t 0 . By measuring the time distribution of free fall events, one obtains the momentum distribution of GQS, which gives access to the characteristic spatial scale of GQSs l g (5).
Prompt kick
One could also use a prompt kick to make allH acquire the same upward momentum p 0 . Such a kick could be achieved either by absorbing a photon from a laser beam, or by prompt switching of a gradient magnetic field W (t, z) = f (t − t 0 )μz∂ B/∂z. f (t − t 0 ) is a function, which characterizes the time dependence of the gradient magnetic field and is localized around t 0 with a typical dispersion τ 1/ω g . In the limit τ → 0, the wave-function change after promptly switching the field is: Here t + 0 and t − 0 are the moments just after and before the kick, and The momentum distribution in an upstream detector is: Evaluating the energy and spatial gravitational scales from interferometry experiments and comparing these values with theory, one can conclude on a presence of extra interactions between atom and mirror at a micrometer scale.
Conclusion
We have proposed in this paper a new method for producing long confinement times in gravitational quantum states (GQSs) and a magneto-gravitational trap (MGT) for atoms, anti-atoms, neutrons and other particles possessing a magnetic moment. The Earth gravitational field and a reflecting mirror confine the particles in the vertical direction. The interaction of the particle's magnetic moment and the magnetic field originating from an electric current passing through a vertically installed wire confines particles in the radial direction, combined with the centrifugal acceleration of the particles. We underline that the observation time is the key parameter that defines the precision of measurements of such states. In case of anti-hydrogen atoms, one can achieve the observation time limited only by their annihilations in the surface [75]. In case of hydrogen atoms, storage times will be significantly longer due to a contribution from their specular reflection from the surface in the case of direct contact. In case of neutrons, storage times can approach the neutron lifetime. Our analysis shows that the mixing of vertical and horizontal motions of particles can be controlled to an acceptable level not prohibiting precision measurements of GQSs in the MGT. We give examples, which prove the feasibility of precision studies of GQS in the MGT.
In the limit of low particle velocities and magnetic fields, precise control of the particle motion and long storage times in the MGT can provide ideal conditions for gravitational spectroscopy: for the sensitive verification of the equivalence principle forH ; for improving constraints on extra fundamental interactions from experiments with n, atoms andH . These ideas can be applied in particular to the GRASIAN (co-authors of the present article), GBAR [72] and GRANIT [73] projects. S.V. thanks Academy of Finland for support (Grant N.317141).
Data Availability Statement
This manuscript has no associated data or the data will not be deposited. [Authors' comment: This work proposes and develops a new method and does not contain experimental data.] Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecomm ons.org/licenses/by/4.0/. Funded by SCOAP 3 . | 7,592.2 | 2020-06-01T00:00:00.000 | [
"Physics",
"Geology"
] |
Research on intelligent interactive music information based on visualization technology
: Combining images with music is a music visualization to deepen the knowledge and under - standing of music information. This study brie fl y introduced the concept of music visualization and used a convolutional neural network and long short - term memory to pair music and images for music visualiza - tion. Then, an emotion classi fi cation loss function was added to the loss function to make full use of the emotional information in music and images. Finally, simulation experiments were performed. The results showed that the improved deep learning - based music visualization algorithm had the highest matching accuracy when the weight of the emotion classi fi cation loss function was 0.2; compared with the traditional keyword matching method and the nonimproved deep learning music visualization algorithm, the improved algorithm matched more suitable images.
Introduction
Music is an acoustic way of expressing emotional thoughts, and it is also an art form. Music uses changes in the rhythm and pitch of sounds to convey information. When people receive such information, they can not only appreciate the rhythm and melody, but also feel the change of emotion [1]. With the improvement of living standards, people's demands for spiritual culture are also getting greater. People are no longer satisfied with listening to music, but want to "see" the emotional changes in music while listening. Music visualization can connect people's auditory and visual senses, so that people can feel the information contained in music more intuitively [2]. In addition to enhancing the appreciation of music, music visualization can also show the sound characteristics that are not intuitive enough and need to be subjectively perceived by people in a more accurate and straightforward way, and visually observing the sound characteristics of music can assist in music teaching or composition [3]. Plewa and Kostek [4] proposed a graphical representation method of song emotions based on self-organizing mapping and created a map in which music excerpts with similar moods were organized next to each other on the two-dimensional display. Li and Li [5] have constructed a music visualization model based on graphic images and mathematical statistics by combining mathematical, statistical methods such as K-mean clustering and fusion decision trees based on music graphic images to address the current shortcomings in the field of music visualization. The actual case analysis and performance test results showed the superiority of the music visualization method based on graphic images and mathematical statistics. Lopez-Rincon and Starostenko [6] proposed a method to normalize data in musical instrument digital interface files by 12-dimensional vector descriptors extracted from tonality and a novel technique for dimensionality reduction and visualization of extracted music data by three-dimensional projections. They found through experiments that the method retained 90% of the original data in the dimensionality reduction projection. The literature mentioned above show that feature extraction of music data is important for music visualization. In this article, convolutional neural network (CNN) and long short-term memory (LSTM) were used to extract features from images and music more accurately, and the images were combined with music. This study provides an effective reference for music visualization. This article studied the combination of music and images to make audience understand the information contained in music more comprehensively. The novelties of this study are that music was paired with images using CNN and LSTM, and the sentiment classification loss function was added to the traditional loss function to make the algorithm consider the sentiment contained in music and images during training and achieve the proper matching of music and images.
Music visualization
Music visualization, in a narrow sense, is the visualization of the sound characteristics of music, such as the time-frequency diagram of sound, but in a broader sense, it is the interpretation of the information contained in music through pictures or videos, providing an intuitive visual presentation to users. This study focuses on the visual representation of the emotional information contained in music in music visualization. Broadly speaking, music visualization uses images to visually interpret the content of music, and its principle lies in synesthesia [7]. The literary rhetoric of synesthesia refers to the fact that a stimulus to one sense evokes the perception of that sense and the perception of another sense. Music visualization mentioned in this article is audio-visual synesthesia, which refers to the visual association caused by auditory perception or auditory association caused by visual perception. Among the various human senses, vision is arguably the most important channel for receiving information, and when visual and auditory senses produce a synesthesia effect, auditory perception is enhanced so that more accurate judgments can be made when receiving external information [8].
Music visualization technology is also a classification of interaction technology, where "interaction" is defined as an action between plural objects that affect each other. Interaction can be divided into humanhuman interaction, human-computer interaction, and computer-computer interaction. Music visualization technology is human-computer interaction. The user inputs music into the computer, and the computer processes the music through the corresponding algorithm and outputs it to the user after stitching it together with the corresponding images, which affects the user from both auditory and visual aspects [9].
3 Deep learning-based music visualization
Music and image matching based on a deep learning algorithm
Traditionally, music is visualized by means of a weak emotional label that corresponds to some of the characteristics of the music. However, on the one hand, the emotion weak tags of the images have the wrong emotion descriptions, and on the other hand, the emotional information of music often needs to be fully expressed through the whole, which means that the emotion expressed by music needs to be contextualized [10]. Due to the above two reasons, traditional music-image matching methods are not effective for music visualization. Therefore, this study uses a deep learning algorithm to pair music signals and images to visualize the sentiment information in music signals. For the candidate images, CNN is used for feature extraction; for the music signal, as its emotional information features need to be related to the context, that is, the order of information affects the feature expression, a variant of recurrent neural network (RNN) -LSTM is used to extract the music signal before feature-based pairing. Figure 1 shows the basic flow of the above deep learning-based music emotion visualization.
1. After preprocessing the candidate images, feature extraction is performed using CNN [11]. The convolution and pooling layers are interleaved, that is, one convolution layer and one pooling layer, or multiple convolutional layers and one pooling layer. The image features after multiple convolutional kernel pooling operations are used for subsequent matching classification. 2. The music signal is preprocessed, and then the fast Fourier transform [12] is performed on the signal to obtain the spectrogram. The emotional semantic features of the spectrogram are extracted using LSTM. 3. The candidate image features extracted by CNN and the sentimental features of the music signal extracted by LSTM are stitched together to obtain the fused features. 4. The fused features of the candidate image and music signal are input into the fully connected layer for classification to judge whether the music signal matches the candidate image. If they do not match, the candidate image is replaced, and feature extraction and judgment are performed again; if they match, the music and image are stitched together according to the timeline of the music [13].
The above steps are the basic steps for the practical application of the music visualization algorithm after training. Samples with corresponding pairing labels are input into the algorithm during training, and the same steps 1-3 are followed for feature extraction and feature fusion of the samples. Then, the fused sample features are classified to determine whether they match. The classification results are compared with the actual classification labels of the training samples. For the error calculation of the classification results, cross-entropy is often used to calculate the loss function, and then the calculated loss function is used to reversely adjust the structural parameters.
To further improve the accuracy of matching and enhance the emotional embodiment of matched images to deepen listeners' understanding of musical emotions, this study improves the loss function by introducing emotional information labels to supervise the training. The formula of the improved loss function is as follows: where Loss, Loss 1 , and Loss 2 are the overall matching loss, music and image matching loss, and music and image sentiment classification loss, respectively, λ is the weight of sentiment classification loss, c i and c i ′ are the true labels of music and image matching and sentiment matching, respectively, and p i and p i ′ are the music and image matching probability and sentiment matching probability, respectively.
Synthesis of music and images
In the previous subsection, the matching algorithm of music segments and pictures was introduced and improved. After the matching of musical phrases and pictures is completed, it is necessary to synthesize the musical phrases and pictures into music videos to visualize the music, and the basic flow is shown in Figure 2.
1. The period in the music file is marked with a timestamp label.
2. The duration of the music is calculated based on the timestamp label of the period, and the duration that the matched image corresponding to the period can last is calculated. 3. The images corresponding to the period are clipped into a video stream in the order of the timestamp of the period. The duration of every image in the video stream is calculated by step 2. 4. The periods corresponding to the images are synchronized with the video stream to obtain a music video to realize music visualization.
Experimental analysis 4.1 Experimental environment
In this study, simulation experiments were conducted in a laboratory server using MATLAB software (The MathWorks, Inc., Natick, Massachusetts, USA) [14] with the following relevant configurations: Windows 7 operating system, Core i7 processor, and 32 GB memory.
Experimental data
The deep learning-based music emotion information visualization algorithm can realize intelligent human-computer interaction between music, images, and people. After the user input music into the algorithm, it matched with the candidate images, and the matched music and images were stitched together and fed back to the user, so that the user could more deeply experience the emotion expressed in the music. Before using this visualization algorithm, the algorithm was trained by training samples in order to match the music with images accurately. A crawler written in Python crawled 1,500 English songs with lyrics from music platforms. The periods of these songs were divided according to the numbered musical notation. Then, images were retrieved by the Baidu search engine with the aid of the lyrics of the periods. The top 20 images retrieved according to the lyrics of each period were taken as the candidate images. Then, period and candidate image pairings were initially screened using voting and finally constructed by manual scoring and voting, and the pairs were marked with sentiment labels. Finally, constructed period-image pairs with sentiment labels were used as positive samples, and the period-image pairs composed of period and other candidate images without sentiment labels were used as negative samples. The ratio of positive samples to negative samples was 1:3. The basic data formats in the samples were jpg format for the image and mp3 format for the period. The names of the images and periods paired in the positive samples were the same. Every pair had a json file in which the sentimental type and song name of the pair were recorded, and the name of this file was the same as the name of the pair.
Experimental setup
In the deep learning-based algorithm for visualizing music emotional information, the parameters of the CNN used to extract image features are as follows. There were 13 convolutional layers and 5 pooling layers. The structural distribution of the two kinds of layers was two convolutional layersone pooling layertwo convolutional layersone pooling layerthree convolutional layersone pooling layerthree convolutional layersone pooling layerthree convolutional layersone pooling layer. The convolutional layers had 64 convolutional kernels in a size of 5 × 5. The Relu function was used as the activation function. The pooling layer adopted max-pooling. The size of the pooling frame was 2 × 2. The sliding step length was 2. As to LSTM used for extracting music features, a four-layer bidirectional RNN was used as the hidden layer, which contained long and short-term memory units such as forgetting gates, recurrent gates, and output gates as described earlier.
In the improved loss function used for supervising algorithm training, the weight λ of the sentiment classification loss would affect the matching accuracy of the whole algorithm. The value of λ was set as 0, 0.2, 0.4, and 0.8, respectively, in this study. The accuracy of the trained algorithm under the four weights was tested.
To further verify the matching accuracy of the algorithm, it was compared with the traditional music matching method. The traditional music matching method matched images and music based on the weak tags of the images and the keywords contained in the lyrics of the period. During the comparison, the sentiment classification loss weight λ of the algorithm was set as 0 and 0.2, respectively; the former represented the visualization algorithm before the improvement of the loss function, and the latter represented the visualization algorithm after the improvement of the loss function. The reason for selecting 0.2 was that 0.2 was the optimal weight obtained in the previous weight test.
Evaluation indicators
Accuracy under R@K [15] was used as the evaluation indicator of the music visualization algorithm, and the candidate images matched with the periods were ranked according to the matching probability calculated by the algorithm from the largest to the smallest. The matching was considered as successful when the first K images contained the correct images. The value of R@K meant the proportion of the successfully matched periods under the K value. The value of K was set as 1, 5, and 10. The reason for using accuracy under R@K as an evaluation criterion is that the information contained in the music and the information conveyed by the images only partially overlapped unless a specific figure was drawn for the music. The difference was the degree of overlapping. Therefore, there was more than one matching result when matching a music with the image bank. In addition, as the sentimental information of music images was strongly subjective and difficult to be quantified, different values of K meant the visualization algorithm could give K candidate images that were closest to the period for the user to select, improving human-machine interaction.
Experimental results
In this study, the loss function used for supervising training in the deep learning-based music visualization algorithm was improved to make full use of emotional information in the periods and images. Sentiment classification loss was added to the loss function, and the proportion of the sentiment classification loss was adjusted by the weight. The accuracy of the improved sentiment visualization algorithm with different weight proportions is shown in Figure 3. Figure 3 shows that the matching accuracy of the visualization algorithm increased with the increase of the value of K in R@Kwhen the loss weight of sentiment classification was the same. The reason for the above result is that the user's feeling on the music and image was subjective; the smaller the value of K was, the smaller the available range for matching was, the more difficult the matching was. In addition, comparing the matching accuracy of the visualization algorithm with different loss weights of sentiment classification under the same R@K. Figure 3 showed that the visualization algorithm had the highest accuracy when the weight was 0.2. The reason for the above result is that the sentiment information in the periods and images could assist in matching the periods with the images more accurately, thus making the music visualization more accurate, but the sentiment labeling of the sample set used in the algorithm training was determined by using the voting score, which had errors. The larger the loss weight of sentiment classification was, the larger the influence on the matching results was.
Limited by space, this article only showed partial matching results of the three music visualization algorithms under R@1, as shown in Figure 4. Figure 4 provides the time-domain diagram of the period, but the sentiment information cannot be directly felt from the time-domain diagram alone due to the text format; therefore, the lyric of the period was used to assist in the illustration. It was seen from the lyric shown in Figure 4 that the period described a person walking alone on a snowy road, and the peak in the time-domain diagram of the period was gradually decreasing to reflect the emotion of loneliness and desolation. The image given by the traditional keyword and music matching method only showed a few snow-covered trees on the snow, and there was no significant intersection except for the overlap between the element of "snow" and the lyrics of the period; the image given by the nonimproved music visualization algorithm had a larger area of snow and trees, and the various marks on the snow also reflected the element of "road"; the improved music visualization algorithm gave an image of a snowy road, with snow on both sides of the road, visible ruts in the snow, dead branches on both sides of the road, and small houses with snow on one side, which not only reflected the "road" but also embodied a sense of loneliness.
To further verify the matching performance of the improved deep learning-based music visualization algorithm for periods and images, it was compared with the nonimproved deep learning-based music visualization and the traditional keyword matching method, in which the improved music visualization algorithm used a loss weight of 0.2 for sentiment classification. The comparison results are shown in Figure 5. Figure 5 shows that the improved deep learning-based music visualization algorithm had the highest matching accuracy under the same R@K, the nonimproved deep learning-based music visualization algorithm had the second highest matching accuracy, and the traditional keyword matching method had the lowest. In addition, with the increase of the value of K in R@K, the matching accuracy increased no matter what kind of music visualization algorithm was used.
Conclusion
This study introduced the concept of music visualization, paired music and images with CNN and LSTM for music visualization, and added the sentiment classification loss function to the loss function to make full use of the emotional information in music and images, and finally conducted simulation experiments. The results are as follows: (1) as the value of K in R@K increased, the candidate range of matching increased, the matching difficulty decreased, and the accuracy of the improved deep learning-based music visualization algorithm increased accordingly. (2) The improved deep learning-based music visualization algorithm had the highest accuracy when the loss weight of sentiment classification was 0.2. (3) The improved deep learning-based music visualization algorithm matched images more closely to music than the traditional keyword matching method and the nonimproved deep learning music visualization algorithm. (4) The improved deep learning-based music visualization algorithm had the highest matching accuracy under the same R@K; the matching accuracy increased as the value of K in R@K increased no matter what kind of music visualization algorithm was used.
The future research direction is to expand the range of training samples to improve the accuracy of music and image matching and offer an effective reference for music visualization technology.
Conflict of interest:
Author states no conflict of interest. | 4,565.4 | 2022-01-01T00:00:00.000 | [
"Computer Science"
] |
Chorioamnionitis accelerates granule cell and oligodendrocyte maturation in the cerebellum of preterm nonhuman primates
Background Preterm birth is often associated with chorioamnionitis and leads to increased risk of neurodevelopmental disorders, such as autism. Preterm birth can lead to cerebellar underdevelopment, but the mechanisms of disrupted cerebellar development in preterm infants are not well understood. The cerebellum is consistently affected in people with autism spectrum disorders, showing reduction of Purkinje cells, decreased cerebellar grey matter, and altered connectivity. Methods Preterm rhesus macaque fetuses were exposed to intra-amniotic LPS (1 mg, E. coli O55:B5) at 127 days (80%) gestation and delivered by c-section 5 days after injections. Maternal and fetal plasma were sampled for cytokine measurements. Chorio-decidua was analyzed for immune cell populations by flow cytometry. Fetal cerebellum was sampled for histology and molecular analysis by single-nuclei RNA-sequencing (snRNA-seq) on a 10× chromium platform. snRNA-seq data were analyzed for differences in cell populations, cell-type specific gene expression, and inferred cellular communications. Results We leveraged snRNA-seq of the cerebellum in a clinically relevant rhesus macaque model of chorioamnionitis and preterm birth, to show that chorioamnionitis leads to Purkinje cell loss and disrupted maturation of granule cells and oligodendrocytes in the fetal cerebellum at late gestation. Purkinje cell loss is accompanied by decreased sonic hedgehog signaling from Purkinje cells to granule cells, which show an accelerated maturation, and to oligodendrocytes, which show accelerated maturation from pre-oligodendrocytes into myelinating oligodendrocytes. Conclusion These findings suggest a role of chorioamnionitis on disrupted cerebellar maturation associated with preterm birth and on the pathogenesis of neurodevelopmental disorders among preterm infants. Supplementary Information The online version contains supplementary material available at 10.1186/s12974-024-03012-y.
Background
The incidence of neurodevelopmental disorders is increasing among children, in part due to improved survival of at-risk newborns, such as extreme preterm infants [1,2].Preterm infants who are exposed to inflammation prenatally, most commonly due to inflammation of the amniotic membranes [3], or chorioamnionitis, are at particularly high risk of a wide range of neurological impairment and neurodevelopmental disorders, including cerebral palsy, attention deficit and hyperactivity disorder, cognitive impairment, and autism spectrum disorders (ASD) [4,5].While much of the research on brain injury and neurodevelopmental outcomes in preterm infants has focused on motor disabilities associated with supratentorial injury, disruption of cerebellar development is often seen in preterm infants, particularly underdevelopment [6,7].Our understanding of the cerebellar function has shifted from a largely motor and coordination role to a broader understanding of its importance in the regulation of cognitive function, including attention, memory, and executive functioning [8,9].In fact, the cerebellum is one of the most consistently abnormal brain areas in individuals with autism [10].
The cerebellum undergoes rapid expansion in the third trimester of gestation with peak proliferation of the granule cell precursors (GCP) in the external granule layer.These differentiate into granule cells (GC) and migrate to the internal granule layer (IGL) [11].Insults associated with preterm birth take place during the critical time of neuronal expansion of the cerebellum.Reduction of cerebellar volume among preterm infants has been shown to persist at least into adolescence and occurs even in the absence of prior injury being detected on brain imaging [12][13][14].These findings suggest that events associated with prematurity and inflammation can lead to impaired development of the cerebellum.Considering the importance of the cerebellum and of prenatal inflammation to the pathogenesis of neurodevelopmental disorders [15], such as ASD, and their association with preterm birth [16,17], our knowledge of the effects of chorioamnionitis on the cerebellum in this critical stage of development and their potential contribution to the pathogenesis of neurodevelopmental disorders remains limited.
To fill this gap in knowledge, we performed singlenucleus RNA-sequencing (snRNA-seq) of the cerebellum in a nonhuman primate model of preterm chorioamnionitis.We demonstrate that exposure to chorioamnionitis decreases the number of Purkinje cells in the cerebellum and impairs proliferation signaling from Purkinje cells to GCs in the EGL by decreased sonic hedgehog (SHH) signaling.Chorioamnionitis also led to early maturation and myelination of oligodendrocytes in the cerebellum.These findings are consistent with histopathological findings of individuals with autism and of autopsy reports of preterm infants [10,18,19].Our findings provide new insight into the mechanisms through which inflammation contributes to the disruption of brain development in preterm infants.
Animal experiments
Animal protocols were reviewed and approved by the institutional IACUC at the University of California Davis.Time-mated pregnant rhesus macaques at 127 days of gestation (80% of term gestation) were treated with intra-amniotic (IA) lipopolysaccharide (LPS) 1 mg (E. coli O55:B5, Sigma-Aldrich) diluted in 1 ml of sterile saline (LPS).Controls received no intervention.Animals were delivered en cul by cesarean section at 5 days after intra-amniotic LPS (Fig. 1a).No spontaneous labor or fetal losses were observed in the intervention or control groups.Fetuses were humanely euthanized with pentobarbital.After euthanasia, the cerebellum was dissected, and the hemispheres were separated along the midline of the vermis.The left hemisphere with the left side of the vermis was frozen for molecular analysis and the (See figure on next page.)Fig. 1 Intra-amniotic (IA) LPS induces chorio-decidual and fetal inflammation and leads to altered cerebellar cell composition.a Experimental design.Rhesus macaques' fetuses at 127 days of gestation (80% gestation) received ultrasound-guided intra-amniotic injections (IA) of 1 mg of LPS from E. coli O55:B5 as a model of chorioamnionitis; n = 7/group.b Representative images of low power magnification of cerebellum (tile imaging, 5 ×), showing similar foliation pattern between groups.c Flow cytometry of the chorio-decidua cells showed increased neutrophils, T cells and NKT cells at 5 days in LPS animals (n = 5/group).d Multiplex ELISA for cytokines in the maternal and fetal plasma from controls and LPS animals shows increased IL-6 as well as increased IL-17 and IL-1ra in the fetal plasma, with maternal increase in IL-17 only.(n = 5/group).e UMAP visualization of single-nuclei RNA-seq from 30,711 cerebellar nuclei.Clustering using the Seurat package revealed 24 unique cell clusters in the developing cerebellum; n = 2/group.f Dotplot of top cell type markers based on the top differentially expressed genes in each cluster and on cell-specific and cell-enriched markers reported in the literature.g Proportion of each identified cell cluster by condition showing predominance of GCs in control and LPS animals.h Differential proportion of cell populations in each cluster between control and LPS-exposed fetuses by permutation test.LPS decreased number of Purkinje cells and increased number of oligodendrocytes, unipolar brush border cells (UBC), and GCP/GC clusters 9 and 6 right hemisphere was fixed in 10% formalin and embedded in paraffin for histological analyses.Data of animals included in the study are shown in Table 1.
Chorion-amnion-decidua dissection and flow cytometry of chorio-decidua cells
At delivery, extra-placental membranes were collected and dissected away from the placenta as previously described [20] .The cells of the chorio-decidua were scraped and the amnion and remaining chorion tissue were separated with a forceps.The chorio-decidua cells were washed and digested with Dispase II (Life Technologies, Gran Island, NY, USA) plus collagenase A (Roche, Indianapolis, IN, USA) followed by DNase I (Roche) treatment.Cells suspensions were filtered, red blood cells lysed, and the suspension was prepared for flow cytometry.Cell viability was > 90% by trypan blue exclusion test.Multiparameter flow cytometry and gating strategy of the difference leukocyte subpopulations was done as previously described [20].Monoclonal antibodies used are listed in Table 2. Cells were treated with 20 μg/ml of human immunoglobulin G (IgG) to block Fc receptors, stained for surface markers for 30 min at 4 °C in PBS, washed, and fixed in fixative stabilizing buffer (BD Bioscience).All antibodies were titrated for optimal detection of positive populations and similar mean fluorescence intensity.At least 500,000 events were recorded for each sample.Doublets were excluded based on forward scatter properties, and dead cells were excluded using LIVE/ DEAD Fixable Aqua dead cell stain (Life Technologies).Unstained and negative biological population were used to determine positive staining for each marker.Data were analyzed using FlowJo version 9.5.2 software (TreeStar Inc., Ashland, OR, USA).
Cytokine concentration measurement
We measured cytokine concentration in maternal and fetal plasma by Luminex technology using multiplex kits for nonhuman primate (Millipore, Burlington, MA, USA).Each 96-well filter plate was blocked with 100 μl of blocking buffer for 30 min, followed by vacuum filtration at 2 psi.The 25 × bead mix was vortexed for 1 min and sonicated for 30 s. prior to diluting.The bead mix was diluted in wash buffer, and 50 μl added to each well followed by vacuum filtration.The standard was dissolved in the supplied medium.Samples and standards were added to the plates in triplicate (plasma samples were diluted 1:2).The plates were placed on a shaker at 4 °C overnight.The following day, the medium was vacuumfiltered, and 50 μl of detection antibody was added to each well.The wells were washed four times with 100 μl wash buffer and the plates were incubated on a shaker at room temperature for 1 h in the dark.The plates were washed again four times with 100 μl wash buffer.Streptavidin-PE (50 μl) was added to each well and the plate was placed on a shaker for 15 min at room temperature in the dark.The wells were washed four times with 100 μl wash buffer and the beads were resuspended in 150 μl wash buffer for analysis.Immediately prior to analysis, the
Immunofluorescence and immunohistochemistry
Paraffin-embedded tissue sections underwent heatassisted antigen retrieval with citrate buffer (pH 6.0).For immunohistochemistry endogenous peroxidase activity was reduced with H 2 O 2 treatment.Nonspecific binding sites were blocked with 4% bovine serum diluted in PBS followed by incubation with primary antibodies overnight at 4 °C (Table 3).The following day, tissue sections were incubated with the appropriate species-specific biotinylated or Alexa Fluor-conjugated secondary antibody diluted in 1:200 in 4% bovine serum for 2 h at room temperature.For immunohistochemistry with DAB, tissue sections incubated with biotinylated secondary antibody were washed and antigen/antibody complexes were visualized using Vectastain ABS peroxidase kit (Vector Laboratories, Burlingame, CA, USA) followed by counterstaining with Harris hematoxylin.For immunofluorescence, sections incubated with Alexa Fluor-conjugated secondary antibody were washed and mounted with VectaShield Hardset Mounting Media (Vector Laboratories).Histological analyses and cell counts were performed in the entire cerebellum.
Western blot analyses
Protein concentrations of tissue homogenates or EVs were measured by BCA protein assay using a commercial kit (Pierce Biotechnology Inc., Rockford, IL, USA).Total proteins (20 µg/sample) were fractionated by SDS-PAGE on 4-12% Tris-glycine precast gradient gels (ThermoFisher, Waltham, MA, USA) and then transferred to nitrocellulose membranes (Amersham, Piscataway, NJ, USA).The membranes were incubated overnight at 4 °C with the respective primary antibodies and then incubated for 1 h at room temperature with HRP-conjugated secondary antibodies.Antibody bound proteins were detected using ECL chemiluminescence methodology (Amersham, Piscataway, NJ, USA).The intensities of protein bands were quantified by ImageJ [16].Band density was then corrected for beta-actin band density in the same lane.Values were then divided for the average control value to represent fold change relative to control.
Statistical analyses
GraphPad Prism (GraphPad Software, La Jolla, CA, USA) was used to graph and analyze data.Statistical differences between groups were analyzed using Mann-Whitney U-tests for cytokine concentration measurements and cellular composition from flow cytometry results.Results were considered significantly different for p values ≤ 0.05.However, due to the limited number of samples per group, we also report trends (p values between 0.05 and 0.1).
Single-nucleus RNA isolation, sequencing, and mapping
Nuclei isolation and single-nuclei RNA sequencing (sn-RNAseq) were performed by Singulomics Corporation (Singulomics.com).Flash frozen cerebellar hemispheres and half of the cerebellar vermis from 2 rhesus macaque fetuses per group were homogenized and lysed with Triton X-100 in RNase-free water for nuclei isolation.Isolated nuclei were purified, centrifuged, and resuspended in PBS with RNase inhibitor, and diluted to 700 nuclei/ µl for standardized 10 × capture and library preparation protocol using 10 × Genomics Chromium Next GEM 3' Single Cell Reagent kits v3.1 (10 × Genomics, Pleasanton, CA, USA).The libraries were sequenced with an Illumina NovaSeq 6000 (Illumina, San Diego, CA, USA).Raw sequencing files were processed with CellRanger 5.0 (10 × Genomics) and mapped to the rhesus macaque
SnRNA-seq processing, clustering, and differential expression
Downstream analyses were performed on the Seurat package version 3.1.0for R [21].Each sample was processed separately and then integrated for cell type identification and comparison analyses.After importing the CellRanger data using the Read10x function and creating a Seurat object for each sample, samples were normalized, scaled, and the top 2000 variable features were identified followed by nuclei clustering.After clustering, ambient RNA was removed with SoupX [22].Cleaned samples were then filtered removing nuclei with < 500 genes or > 5% mitochondrial RNA.The number of nuclei and genes per sample is shown in Table 4.After filtering, samples were integrated using the FindIntegrationAnchors function with the parameter dims = 1:30 followed by IntegrateData function with dims = 1:30.The integrated data were scaled, principal component analysis (PCA) was performed with npcs = 30, and nuclei were clustered with resolution = 0.5.Conserved cell types for each cluster were identified using the FindAllMarkers function with default parameters.Cell types were identified based on known cell markers and previously published datasets of cerebellum single-cell sequencing [23][24][25].Differences in the proportion of cells in clusters between control and IA LPS was performed using the permutation test on the scProportionTest package for R [26].Global and cluster subset differential expression analyses between Control and IA LPS were performed using the FindMarkers function with default parameters.Overrepresentation analysis of differentially expressed genes was performed for Gene Ontology terms and KEGG Pathways on ToppCluster [27] and using the DEenrichRPlot function on Seurat [28,29].
Pseudotime analyses
To determine lineage trajectories and determine differences across conditions, we performed pseudotime analyses with Monocle3 [30][31][32] and Slingshot [33] followed by condiments [34].For analysis on Monocle3 [30], celldataset objects were created from the individual Seurat objects and combined using the combine_cds function and preprocessed.We then performed dimensionality reduction using UMAP [35] followed by cell clustering.Specific cell types of interest were subset, clustered and a principal graph was fit within each partition using the learn_graph function.Cells were then ordered in pseudotime using the plot_cells function.For analysis with Slingshot [33] a Seurat object for each condition was created and converted to a single cell experiment object followed by conversion to slingshot object using the slingshot function with reducedDim = "PCA".Objects were then filtered and normalized following the standard protocol.We then performed the topologyTest on condiments [34] to determine differences in lineage trajectories between conditions.
Cell communication analyses
Cell-cell interactions were inferred with CellChat [36] based on known ligand-receptor pairs in difference cell types.To identify perturbed cell-cell communication networks in chorioamnionitis we loaded the condition specific Seurat objects into cell chat using the createCell-Chat function, followed by the preprocessing functions indentifyOverEpressedGenes, identifyOverExpressedinteractions, and projectData with default settings.For comparison between Control and IA LPS, we applied the computeCommunProb, computeCommunProbPathway, and aggregateNet functions using standard parameters and fixed randomization seeds.To determine signal senders and receivers we used the netAnalysis_signaling-Role function on the netP data slot.
Intra-amniotic LPS induces persistent fetal inflammation after 5 days
To model chorioamnionitis in a preterm nonhuman primate we performed ultrasound-guided IA injection of 1 mg of LPS (E. coli O55:B5) on gestational day 127 (80% gestation, term is 165 days) (Fig. 1a, Table 1).This model has been previously shown to induce inflammation at the maternal-fetal interface, fetal systemic inflammatory response, and fetal neuroinflammation up to 48 h after injection [37,38].Flow cytometry of the chorio-decidua showed that, at 5 days after IA LPS injection, there was a predominantly neutrophilic infiltrate in the choriodecidua (Fig. 1c, Additional file 1: Fig. S1).Inflammatory infiltrate in the membranes was associated with elevation of IL-6, IL-17, and IL-1ra in the fetal plasma by multiplex ELISA, without elevations of cytokines in the maternal plasma at 5 days (Additional file 1: Fig. S1d, Additional file 2: Fig. S2).There was no alteration in foliation pattern between groups (Additional file 1: Fig. S1b).
snRNA-seq reveals major developing cell types in the cerebellum at late gestation
To determine the effects of chorioamnionitis on the developing cerebellum we used unbiased high-throughput snRNA-seq to examine transcriptional populations of the cerebellum from preterm rhesus macaque fetuses exposed to IA LPS or placebo.We then analyzed the transcriptome from 30,711 single nuclei (17,614 nuclei from 2 controls, 13,097 nuclei from 2 chorioamnionitis) to an average depth of 25,192 to 45,124 reads per nuclei (Table 2).Nuclei were clustered based on their expression profile, and we identified 24 cell clusters that were annotated based on published cell markers [23][24][25] (Fig. 1e, f; Additional file 6: Table S1).We then analyzed the cell type composition in the cerebellum of controls and LPSexposed fetuses (Fig. 1g).Permutation test [26] demonstrated a decrease in the proportion of Purkinje cells and an increase in oligodendrocytes, unipolar brush border cells (UBC), choroid cells, and GCs clusters 9 and 6 in LPS-exposed fetuses (Fig. 1h), showing that chorioamnionitis alters the cell composition of the fetal cerebellum.
Chorioamnionitis accelerates cerebellar GC maturation
Cerebellar GCs are the most abundant neurons in the brain and were the most abundant cell type identified in our dataset.Given the increased proportion of two GC clusters in LPS-exposed fetuses, we re-clustered the GC populations for further analyses.Re-clustering of the GC identified seven distinct clusters based on their expression profile (Fig. 2a, b).Analysis of the top cluster gene markers and established markers for neurons at different developmental stages identified GCs at five stages of development in our dataset: proliferating GCPs, committing GCs, migrating GCs, maturing GCs, and mature GCs (Fig. 2c, d, Additional file 7: Table S2).Differential expression analysis of gene expression between LPS-exposed and control cerebellum identified 317 differentially expressed genes using a threshold of log foldchange > 0.25 and FDR < 0.1 (Fig. 2e, Additional file 8: Table S3).Overrepresentation analysis of differentially expressed genes was performed on ToppCluster [27] for genes induced and suppressed in chorioamnionitis.Genes induced in GCs of LPS-exposed fetuses were associated with synapse transmission and hippo signaling pathways, while suppressed genes were associated with cerebellum development, and Purkinje cell-GCP signaling involved in GCP proliferation (Fig. 2f ).To further explore the effect of chorioamnionitis on GCP proliferation and GC maturation, we performed pseudotime analyses on Monocle3 [30] and Slingshot [33] followed by differential topology between conditions [34] (Fig. 2g).Both pseudotime analyses methods showed an increased proportion of the most mature GCs, suggesting that chorioamnionitis accelerates cerebellar GC maturation.
Chorioamnionitis decreases the relative abundance of Purkinje cells in the developing cerebellum
Since IA LPS decreased the proportion of Purkinje cells, we sought to further determine the effect of chorioamnionitis on Purkinje cell numbers and gene expression.Sub-setting and re-clustering of the Purkinje cells revealed 2 clusters (Fig. 3a) with distinct cell markers (Fig. 3b, Additional file 6: Table S1).Overrepresentation analysis for KEGG Pathways of genes differentially expressed in cluster 1 compared to cluster 0 of Purkinje cells showed that genes in cluster 0 were associated with Hippo signaling, cellular senescence, and SHH signaling while genes in cluster 1 were associated with glutamatergic synapse, Rap1 signaling pathway, and Ras signaling pathway, suggesting distinct developmental functions of these subpopulations (Fig. 3c).Differential gene expression of genes expressed in Purkinje cells in LPS-exposed compared to control fetuses resulted in 302 regulated genes (Fig. 3d, Additional file 9: Table S4).Pathway overrepresentation analysis of genes differentially expressed in LPS-exposed fetuses compared to control showed that genes suppressed by chorioamnionitis were associated with cholinergic synapse, dopaminergic synapse, and Wnt signaling, and genes induced by LPS were associated with ErbB signaling pathway and glutamatergic synapse suggesting that LPS suppressed developmental pathways and synapse formation in Purkinje cells (Fig. 3e).We then performed immunofluorescence for calbindin-1 and measured the Purkinje cell linear density by dividing the number of Purkinje cells by the length of the Purkinje cell folia in 3 cerebellar folia per animal [39].There was a nonsignificant trend towards decreased Purkinje cell linear density in LPS-exposed fetuses consistent with the finding of decrease proportion of Purkinje cells in the singlenuclei analysis (Fig. 3e).
Chorioamnionitis accelerates oligodendrocyte maturation in the cerebellum
Given the increased proportion of oligodendrocytes in LPS-exposed fetuses relative to control on the singlenuclei data, we sought to further determine the effect of chorioamnionitis on oligodendrocyte proliferation and maturation.Sub-setting and re-clustering of oligodendrocytes revealed four distinct clusters (Fig. 4a, b) which were identified based on the top cell markers (Additional file 6: Table S1) and genes known to be differentially expressed during oligodendrocyte maturation, specifically: topoisomerase IIα (TOP2A) in proliferating oligodendrocytes, platelet-derived growth factor receptor α (PDGFRA) in oligodendrocyte progenitor cell (OPC), and myelin basic protein (MBP) and myelin-associated glycoprotein (MAG) in premyelinating oligodendrocytes and mature oligodendrocytes (Fig. 4a, c).We then performed differential gene expression analysis in the oligodendrocyte cluster of LPS compared to control fetuses.Overrepresentation analysis of genes differentially expressed between LPS and control revealed that genes suppressed by LPS were associated with oligodendrocyte differentiation and genes induced by chorioamnionitis were associated with myelination (Fig. 4d, Additional file 10: Table S5).We then analyzed cerebellar myelination by Western blotting for MBP, which showed increased MBP in the cerebellum of LPS-exposed fetuses (Fig. 4e).To further validate those findings we performed pseudotime analysis on Monocle3 [30] and Slingshot [33] followed by differential topology between conditions [34].On both analyses we observed an increased proportion of mature oligodendrocytes in the chorioamnionitis animals (Fig. 4f ).
Chorioamnionitis induces formation of a new cluster of UBCs
Following the initial analysis demonstrating an increase in UBCs induced by chorioamnionitis, we performed a subset analysis of these cells.Clustering of UBCs revealed two clusters (Fig. 5a), with top identified based on specific cell markers differentially expressed between clusters (Fig. 5b, Additional file 6: Table S1).Interestingly, while cluster 0 was comprised both from controls and LPS-exposed fetuses, cluster 1 only had cells from LPSexposed fetuses (Fig. 5c).Overrepresentation analysis of genes differentially expressed between the two UBS clusters (Fig. 5d) showed that genes differentially expressed in cluster 0 were positively associated with anterograde trans-synaptic signaling, chemical synaptic transmission, and regulation of neurotransmitter secretion (Fig. 5e, left), and negatively associated with cell differentiation, engulfment of apoptotic cell, and contact inhibition (Fig. 5e, right).Considering the observed reduction in Purkinje cells in LPS-exposed animals, UBCs in those animals would have decreased synaptic connections and cell-cell contact which could explain the suppression of terms associated with neurotransmission and induction of genes associated with contact inhibition and negative regulation of cell junction in cluster 1 comprised LPSexposed fetuses only compared to cluster 0. These findings, however, do not explain the increase in UBC as we did not identify signaling.
sn-RNAseq reveals microglial and astrocyte diversity in the developing cerebellum
To determine the effect of chorioamnionitis on microglial and astrocyte activation in the cerebellum, we individually analyzed the microglia and astrocyte clusters.We found 3 distinct microglia clusters based on expression profile (Additional file 3: Fig. S3a, b).There was no difference in cluster distribution between control and LPS (Additional file 3: Fig. S3c).Top cell markers for microglial clusters (Additional file 6: Table S1) revealed canonical microglia markers triggering receptor expressed on myeloid cells 2 (TREM2) and C-X3-C Motif Chemokine Receptor (CX3CR1) and cluster-specific microglial markers, Plexin domain containing 2 (PLXDC2), disable 2 (DAB2), and cell adhesion molecule 2 (CADM2) (Additional file 3: Fig. S3d).Overrepresentation analysis of the top differentially expressed genes in each cluster revealed specific microglial functions.Genes differentially regulated in cluster 0 were associated with mononuclear cell differentiation, microglia migration, and regulation of cell adhesion (Additional file 3: Fig. S3e), cluster 1 was associated with ERK1 and ERK2 cascade, and receptor mediated endocytosis (Additional file 3: Fig. S3f ), and cluster 2 was associated with cell adhesion, channel activity, and synapse organization (Additional file 3: Fig. S3g), suggesting cluster-specific microglia roles in the developing cerebellum.
Sub-setting and re-clustering of astrocytes revealed 4 distinct cell clusters based on gene expression pattern (Additional file 4: Fig. S4a, b) that showed similar distribution between control and LPS-exposed fetuses (Additional file 4: Fig. S4c).Top cell cluster markers for the four astrocyte clusters identified revealed cluster-specific and cluster enriched gene markers (Additional file 4: Fig. S4d; Additional file 6: Table S1), which were associated with specific astrocyte biological functions on overrepresentation analysis.Cluster 0 was associated with epidermal growth factor signaling which is crucial for astrocyte transition from quiescent to activated state (Additional file 4: Fig. S4e).Cluster 1 was associated with inositol lipid-mediated signaling, regulation of gene expression, and endothelial proliferation (Additional file 4: Fig. S4f ).Cluster 2 was associated with extracellular matrix organization, of synapse potentiation, and axonogenesis (Additional file 4: Fig. S4g).Cluster 3 was associated with synaptic transmission and signaling, dendrite selfavoidance, and glutamate signaling (Additional file 4: Fig. S4h).These findings support the presence of functionally diverse population of astrocytes in the late gestation cerebellum.
Cell-cell communication in the cerebellum is disturbed by chorioamnionitis
Initial gene differential expression analysis suggested suppression of Purkinje cell-GCP signaling.To further determine the effect of chorioamnionitis on the intercellular communication networks in the developing cerebellum, we analyzed our data on CellChat [36].CellChat uses established ligand-receptor knowledge for quantitative inference and quantitation of intercellular communication networks.Quantification of the global number of interactions and interaction strength among the cell clusters identified showed that LPS increased the number and strength of inferred interactions among the cell clusters (Fig. 6a, b).LPS particularly decreased the incoming interaction strength for oligodendrocytes, Purkinje cells, and endothelial mural cells, and increased the incoming interaction strength for proliferating GCs, Bergman glia, and Golgi cells (Fig. 6c).Moreover, LPS decreased the number and strength of interaction between UBCs and Purkinje cells (Fig. 6b).
We then investigated how the cell-cell communication architecture changed in chorioamnionitis by projecting the inferred cell communication networks from control and LPS in a shared two-dimensional space based on their functional similarity (Fig. 6d).We identified 5 major signaling groups based on functional similarity of which some were unique to control such as Hedgehog (HH, cluster 5), and others unique to LPS such as EphrinB (EPHB) and Desmosome in cluster 5, and periostin and MAG in cluster 4. In addition, other pathways such as WNT, bone morphogenetic protein (BMP), and netrin-F ligand (NGL) were classified into different groups in control and LPS, suggesting that these pathways changed their cell-cell communication architecture in chorioamnionitis (Fig. 6d, Additional file 5: Fig. S5).
Next, we determined the conserved and contextspecific signaling pathways in control and LPS by comparing the information flow for each signaling pathway (Fig. 6e).We identified HH as a control-specific pathway, while desmosome, periostin, MAG, and EPHB as LPS-specific pathways, albeit with small absolute information flow.Moreover, we identified significant differences in information with both increases in LPS (such as WNT and PDGF) as well as decreases in LPS (such as NOTCH, BMP, and FGF) relative to control.We then analyzed the cell-specific incoming and outgoing signaling patterns for the pathways identified above (Fig. 6f, g).Purkinje cells were the unique source of HH signaling in controls targeting multiple cell types and HH signaling was not present in LPS.Oligodendrocytes were the unique source and target of MAG in LPS, and MAG signaling was not present in controls.
Chorioamnionitis impairs SHH signaling from Purkinje cells to proliferating GCs
Given the finding of suppressed HH signaling from Purkinje cells in LPS-exposed fetuses, we further analyzed the differential interactions from Purkinje cells to proliferating GCPs/GCs between LPS and control.We found several increased inferred interactions in LPS, such as NRG2-ERBB4 which is involved in the formation and maturation of synapses, neural cell adhesion molecule (NCAM) interactions that are involved in the final stages of axonal growth and synaptic stabilization, and FGF-FGFR interactions that inhibit SHHmediated proliferation (Fig. 7a).Furthermore, we found decreased SHH-PTCH signaling in LPS, which is necessary for maintenance of GCP proliferation in the external granule layer (Fig. 7a).Analysis of the expression of the different molecules in the HH pathways revealed decreased expression of SHH in Purkinje cells (Fig. 7b).Detailed analysis of senders, receivers, and modulators of HH signaling in control animals showed Purkinje cells as the sole sender and proliferating GCPs as the major receiver.We then confirmed localization and expression of SHH at the protein level in the cerebellum (Fig. 7d).Immunohistochemistry showed SHH localized to Purkinje cell and GCPs/GCs with overall decreased signal in LPS.Western blot confirmed decreased SHH in LPS animals relative to control, corroborating that SHH is secreted by Purkinje cells and that SHH production and secretion of SHH is decreased in LPS (Fig. 7d).
Chorioamnionitis promote maturation signaling in oligodendrocytes
We then delved into the signaling pathways involved in oligodendrocyte differentiation.During transition from oligodendrocyte precursor cell to myelinating oligodendrocyte there is decreased expression of PDG-FRA and increased expression of myelin associated genes MBP, MAG, and myelin oligodendrocyte protein (MOG).We found that LPS increased MAG signaling and decreased PDGFC-PDGFRA in oligodendrocytes (Fig. 8a).Analysis of senders, receivers, and modulators showed that PDGF signaling from proliferating glia, endothelial stalk cells, and UBC was decreased in LPS compared to controls; while MAG signaling was exclusive to LPS-exposed fetuses (Fig. 8b).At the gene level, chorioamnionitis decreased the expression of PDGFC and PDGFRA and increased the expression of MBP, MAG, and MOG (Fig. 8c), consistent with our finding of decreased MBP at the protein level (Fig. 4d).
Discussion
In this study we demonstrate, in a clinically relevant model of prematurity and chorioamnionitis using preterm Rhesus macaques, that antenatal inflammation disrupts cerebellar development by impairing hedgehog signaling from Purkinje cells.Impaired hedgehog signaling was associated with accelerated maturation of GCss and oligodendrocytes on single-nuclei analyses confirmed by translational analyses we show that chorioamnionitis leads to a decrease in the number of Purkinje cells with accelerated maturation of GCss and oligodendrocytes in the developing cerebellum.We also determine that decreased Purkinje cell-derived SHH signaling to GCs and pre-oligodendrocytes is driving the disrupted cerebellar development.These findings are consistent with autopsy of individuals with ASD [10].
The improved survival of extreme preterm infants is obviating the association of preterm birth and ASD.A large cohort study of over 4 million singleton births showed that prevalence of autism in extremely preterm births was as high as 6.1% [40].A recent metanalysis including 18 studies examining the prevalence of ASD in preterm infants found a similar prevalence rate of 7% [2].Development of ASD is also linked to prenatal inflammation.The presence of histological chorioamnionitis carries an increased risk of development of ASD [4].Follow-up from a large cohort of preterm infants showed an increased risk for ASD when histological chorioamnionitis was present [41].The clinical link between antenatal inflammation and neurodevelopmental disorders with ASD-like phenotype is corroborated by animal studies.In mice, maternal immune activation disrupts the integrated stress response in the fetal brain leading to neurobehavioral abnormalities [15].In pregnant rats injected with intraperitoneal Group B Streptococcus there was histological evidence of chorioamnionitis and the offspring was more likely to show abnormal social interaction, impaired communication, and hyperactivity [42].
In the preterm rhesus macaque model IA LPS has been shown to induce robust cell neutrophilic recruitment and the chorio-decidual interface, which is associated with systemic fetal inflammatory response [20,38].In this model, we have observed early neuroinflammation of the cerebellum with increased expression of cytokines and increased IL-6 in the CSF [37].At 5 days we found a more subtle but persistent neutrophilic infiltrate and inflammation of the amniotic membranes.The fetal systemic inflammation induced by IA LPS was associated with decreased number of Purkinje cells by snRNA-seq and a trend of decreased density of Purkinje cells on histology.Loss of cerebellar Purkinje cells is among the most observed histopathological findings in autism [10,43,44].Experimental data from Lurcher mice, which are genetically programmed to have variable degrees of Purkinje cell death after birth, show a correlation between loss of Purkinje cells and deficits on serial reversal-learning task, which measures low order behavioral flexibility [43].The mechanisms of Purkinje cell loss need to be further explored at earlier timepoints.
Purkinje cells are master regulators of cerebellar development [45].In rodents, there is a critical window of cerebellar development during the first 2 weeks of life which corresponds to third trimester cerebellar development in primates.During this period, different perinatal insults can lead to Purkinje cell loss and dysfunction.In a rodent model of neonatal brain injury induced by hypoxia, there is delayed Purkinje cell arborization and reduction in firing associated with long-term cerebellar learning deficits that can be partially restored by GABA reuptake inhibitors [46,47].Exposure to systemic LPS in the second week of life of rodents induces prostaglandin E2 production in the cerebellum and administration of either LPS or prostaglandin E2 at this impairs growth of the Purkinje cell dendritic tree and reduces social play behavior in males, suggesting a developmental-specific role of prostaglandins in cerebellar injury.In a similar model, IA LPS increases cerebellar COX-2 mRNA in the cerebellum of rhesus macaques at 130 days of gestation [37].Moreover, genes that regulate Purkinje cell development have been implicated in the pathogenesis of ASD.The autism susceptibility candidate 2 (AUTS2) was identified as a risk gene for ASD in human genetic studies [48].Interestingly, AUTS2 is selectively expressed in Purkinje cells and Golgi cells during postnatal development in rodents and conditional deletion of AUTS2 results in an underdeveloped cerebellum with immature Purkinje cells and impaired motor learning and vocal communication [49].
Purkinje cells regulate cerebellar GC maturation by maintaining their proliferative status through SHH signaling [50].We found that chorioamnionitis led to decreased SHH signaling in Purkinje cells and accelerated maturation of GCs, with decreased SHH expression in chorioamnionitis confirmed at the protein level.A recent study of preterm baboons without exposure to prenatal inflammation showed that preterm birth was associated with structural and functional changes in Purkinje cells, including decreased synaptic input and abnormal action potentials firing and adaptation [51].Autopsy of preterm infants showed decreased thickness of the internal and external granule layers with decreased expression of SHH in the Purkinje cell layer compared to term stillborn infants [19].Our findings show that prenatal inflammation leads to Purkinje cell loss and disrupted development of GCs in the developing cerebellum of preterm fetuses that may have important implications on long-term neurodevelopmental outcomes.In the changes we observed, in addition to disrupted neuronal maturation, we found that chorioamnionitis also accelerated pre-oligodendrocyte maturation into myelinating oligodendrocytes, evidenced by an increase in myelinassociated genes including MBP and MAG.This finding was consistent with increased expression of MBP on protein analysis and appears to be unique to the cerebellum.A postmortem study from patients with autism showed that in the cerebral white matter myelin-related proteins are decreased, but in the cerebellum they are increased [52].In addition, a mouse model of placental endocrine deficiency showed sex-specific effects in cerebellar myelination, with increased myelin in male, which was associated with neurobehavioral abnormalities consistent with autism [53].While SHH signaling is not a classic mechanism of oligodendrocyte maturation, in vitro studies with cerebellar organotypic cultures have shown that SHH produced by Purkinje cells stimulates the proliferation of oligodendrocyte precursor cells and the decreased SHH production during postnatal development is associated with maturation of oligodendrocytes in the cerebellum [53].This data suggests that the loss of Purkinje cells with decreased SHH signaling in LPS-exposed fetuses may be a common mechanism to accelerated GC and oligodendrocyte maturation.
While our study relied primarily on bioinformatics, which was limited by the number of animals used in the analyses, we used histopathological assessment to confirm our findings in a larger number of animals.Despite this limitation, the use of a nonhuman primate model provides highly translational insight into the effects of antenatal inflammation on cerebellar development.We also were not able to assess the cause of Purkinje cell loss, which would require additional studies at earlier timepoint to assess for mechanisms of cell injury and death as these cells are already differentiated at the time of the insult.In addition, we were not able to confirm histologically the increase in UBCs identified on single cell.Finally, postnatal behavioral studies would also provide additional insight into the functional significance of our findings.
Conclusions
Overall, our findings in a preterm nonhuman primate model of chorioamnionitis support the role of prenatal inflammation in disrupted cerebellar development through reduction of Purkinje cells associated with accelerated maturation of GCs and oligodendrocytes.The link between preterm birth and development of ASD is well-established, but the specific mechanisms are unclear.Our results suggest that prenatal inflammation, a common cause of preterm birth, contributes to disrupted cerebellar development leading to changes like the histopathological findings of ASD in the cerebellum.The mechanisms of chorioamnionitis leading to disrupted cerebellar development and ASD may be common to other inflammatory exposures during pregnancy including congenital infections and viral illnesses such as influenza.
AFS), the National Institute of Environmental Health Services (U01ES029234, CAC), and the Batchelor Research Foundation (Fellow Award, AFS).
Fig. 2
Fig.2Prenatal inflammation disrupts cerebellar GCP/GC development increasing the number of mature GCs.a UMAP plot of re-clustered GCPs/GCs identified 7 distinct populations of GCPs/GCs in the developing cerebellum.b Heatmap of the top 10 differentially expressed genes in each GCP/GC cluster identified shows similarities between some of the clusters which were regrouped for classification based on gene expression patterns.c Regrouped clusters based on gene expression patterns shows the presence of GCPs/GCs at different developmental stages from proliferating to mature GCs.d Feature plots of identified cell markers for proliferative GCPs (MKI67 and TOP2A), GCPs/GCs exiting cell cycle and committing to differentiation (DCC and HES6), migrating GCPs (DCX and ATXN1), GCs with increased mature cell markers and markers that the GCs have exited the EGL (CNTN2 and CLSTN1), and mature GCs (RBFOX3 and GABRB2).e Dotplot of top genes differentially expressed in LPS-exposed fetuses vs. control in all clusters of the cerebellar GCPs/GCs.f Overrepresentation analysis of differentially expressed genes in LPS-exposed fetuses vs controls.Genes induced by LPS were associated with biological processes for synapse function and hippo signaling pathway.Genes suppressed by LPS were associated with Purkinje-GCP signaling involved in GCP proliferation, cerebellum development, and cell morphogenesis involved in neuron differentiation.g Pseudotime analysis on monocle3 and slingshot using the condiments package to identify differences in proportion of cells along pseudotime shows increased GC maturation LPS-exposed fetuses.h Number of Ki67 + cells per high power field (hpf ) in the cortex of LPS-exposed compared to control animals, showing decreased cell proliferation, counts done on high power (20x) and low power shown in picture (5 ×).i Neurod1 staining in the cerebellum showing increased number of Neurod + cells in the cerebellar cortex in LPS-exposed fetuses (20 ×) (See figure on next page.)
Fig. 3
Fig. 3 Prenatal inflammation decreases Purkinje cell density in the cerebellum.a UMAP of the cell subset identified as Purkinje cells.Re-clustering of Purkinje cells identified 2 subpopulations.b UMAP of Purkinje cells by condition.c Heatmap of top differentially expressed genes in each Purkinje cell cluster.d Immunofluorescence (magnification 20X) for Calbindin-1 and Purkinje cell density analysis showing decreased density of Purkinje cells in LPS-exposed fetuses (n = 6 animals/group).e Gene set enrichment analysis for KEGG Pathways of genes differentially expressed in cluster 1 compared to cluster 0 of Purkinje cells
Fig. 4
Fig.4 Prenatal inflammation disrupts oligodendrocyte development with increased expression of myelination-associated genes.a UMAP plot of the oligodendrocyte cluster.Re-clustering showed 4 subpopulations of oligodendrocytes that were identified as stages of oligodendrocyte development based on gene expression.b Heatmap of top differentially expressed gene in each oligodendrocyte cell cluster.c Expression of markers of oligodendrocyte development in proliferating oligodendrocytes (TOP2A), oligodendrocyte precursor cells (OPC) (PDGFRA), and myelinating/mature oligodendrocytes (MBP and MAG).d Gene set enrichment analysis for biological processes of genes differentially expressed in the oligodendrocyte cluster in LPS compared to control fetuses with blue as upregulated in control and red as downregulated in control.e Western blot and immunofluorescence for myelin basic protein (MBP) in the fetal cerebellum, p value = 0.03 (n = 5 animals/group).f Pseudotime analysis on monocle3 and slingshot using the condiments package to identify differences in proportion of cells along pseudotime shows accelerated oligodendrocyte maturation LPS-exposed fetuses
Fig. 5
Fig. 5 Chorioamnionitis disrupts UBC homeostasis.a UMAP plot of the UBC cluster.Re-clustering showed 2 subpopulations of UBCs.b Heatmap of top differentially expressed genes in each UBC cluster.c Distribution of cells in each cluster by condition showing that cells in cluster 1 comprised exclusively UBCs from LPS-exposed fetuses.d Gene set enrichment analysis for biological processes of genes differentially expressed in the UBC cluster 1 compared to cluster 0 showing changes of genes associated with synaptic signaling and cell junction
Fig. 6
Fig. 6 Cell-cell communication is disrupted by antenatal inflammation.a Global number of inferred interactions (left) and global strength of interaction (right) in the cerebellum of control and LPS-exposed fetuses.b Differential number of interactions and interactions strength between each cell type identified on clustering.Red represents increased, blue represents decreased.The top colored barplot represents the sum of column values displayed in the heatmap (incoming interactions), the right colored barplot represents the sum of row values (outgoing interactions).c Scatterplot of outgoing and incoming interaction strength for cell cluster in two dimensions.d Scatterplot and classification of communication networks based on their functional similarity.e Bar plots of relative (right) and absolute information flow showing conserved and context-specific signaling pathways in control and LPS.f Heatmap of incoming signaling patterns in each cluster in control (right) and LPS (left).g Heatmap of incoming signaling patterns in each cluster in control (right) and LPS (left)
Fig. 8
Fig. 8 Chorioamnionitis impairs oligodendrocyte maturation signaling.a Bubble plots of communication probability of ligand-receptors from/ to oligodendrocytes upregulated (right) and downregulated (left) by LPS compared to control.b Heatmap of senders, receivers, mediators, and influencers of PDGF and MAG signaling showing decreased PDGF signaling outgoing and incoming from oligodendrocytes in controls and exclusive MAG signaling in LPS.c Violin plot of signaling genes associated related to PDGF-PDGFRA signaling
Table 1
Characteristics of animals included in the study
Table 2
Antibodies used in flow cytometry experiments
Table 3
Antibodies used in flow immunohistochemistry and Western blots IHC immunohistochemistry, IF immunofluorescence, MAb monoclonal antibody, MBP myelin basic protein, SHH sonic hedgehog, WB Western blotting
Table 4
Characteristics of samples included in snRNA-seq | 9,623.6 | 2024-01-10T00:00:00.000 | [
"Medicine",
"Biology"
] |
Adiabatic preparation of fractional Chern insulators from an effective thin-torus limit
We explore the quasi one-dimensional (thin torus, or TT) limit of fractional Chern insulators (FCIs) as a starting point for their adiabatic preparation in quantum simulators. Our approach is based on tuning the hopping amplitude in one direction as an experimentally amenable knob to dynamically change the effective aspect ratio of the system. Similar to the TT limit of fractional quantum Hall (FQH) systems in the continuum, we find that the hopping-induced TT limit adiabatically connects the FCI state to a trivial charge density wave (CDW) ground state. This adiabatic path may be harnessed for state preparation schemes relying on the initialization of a CDW state followed by the adiabatic decrease of a hopping anisotropy. Our findings are based on the calculation of the excitation gap in a number of FCI models, both on a lattice and consisting of coupled wires. By analytical calculation of the gap in the limit of strongly anisotropic hopping, we show that its scaling is compatible with the preparation of large size FCIs for sufficiently large hopping anisotropy. Our numerical simulations in the framework of exact diagonalization explore the full anisotropy range to corroborate these results.
We explore the quasi one-dimensional (thin torus, or TT) limit of fractional Chern insulators (FCIs) as a starting point for their adiabatic preparation in quantum simulators. Our approach is based on tuning the hopping amplitude in one direction as an experimentally amenable knob to dynamically change the effective aspect ratio of the system. Similar to the TT limit of fractional quantum Hall (FQH) systems in the continuum, we find that the hopping-induced TT limit adiabatically connects the FCI state to a trivial charge density wave (CDW) ground state. This adiabatic path may be harnessed for state preparation schemes relying on the initialization of a CDW state followed by the adiabatic decrease of a hopping anisotropy. Our findings are based on the calculation of the excitation gap in a number of FCI models, both on a lattice and consisting of coupled wires. By analytical calculation of the gap in the limit of strongly anisotropic hopping, we show that its scaling is compatible with the preparation of large size FCIs for sufficiently large hopping anisotropy, where the amenable system sizes are only limited by the maximal hopping amplitude. Our numerical simulations in the framework of exact diagonalization explore the full anisotropy range to corroborate these results.
I. INTRODUCTION
Topologically ordered systems exhibit fascinating phenomena, such as fractionalized excitations with exchange statistics beyond bosons and fermions. Their definining feature is the absence of any adiabatic path connecting them to conventional phases of matter. In the field of quantum simulation, this renders the preparation of paradigmatic topologically ordered states, e.g. fractional quantum Hall (FQH) states [1][2][3][4][5][6], a profound and salient challenge. There, a common strategy is the quasiadiabatic preparation of a FQH state from a well controlled initial state through coherent time-evolution [7][8][9][10][11][12][13]. However, this approach relies on a finite-size gap opening at the phase transition between the trivial and the topological state, and is therefore fundamentally limited to small systems.
Interestingly, considering a change of the spatial dimension enables an adiabatic path between a twodimensional FQH phase and a one-dimensional charge density wave (CDW). Specifically, when continuously decreasing the length of the system along one direction, a FQH ground state may continuously evolve into a CDW while maintaining a finite energy gap [14][15][16][17][18][19][20][21]. In this one-dimensional limit of the FQH problem known as the thin-torus (TT) limit, the CDW has no topological order, and is well approximated by a product state (along the long direction) of single-particle plane waves (in the short direction). FQH states also exist in lattice systems under the name fractional Chern insulators (FCI) [22][23][24], and so do CDWs in the TT limit [25][26][27]. Yet, a FIG. 1. Illustration of the adiabatic path from an effectively one-dimensinal charge density wave state (top) to a two-dimensional fractional Chern insulator state (bottom) in an array of quantum wires coupled by a tunable hopping Jx(t, y) = J(t)e iφy (see Sec. III B). In particular, a ν = 1 2 Laughlin phase may adiabatically form in the presence of contact interactions upon reducing J from a large value towards a moderate value J/ER ≈ 1. The magnetic recoil energy ER and J(t) represent the kinetic energy scale along the continuous and discrete direction, respectively. arXiv:2212.11294v2 [cond-mat.quant-gas] 18 May 2023 potential adiabatic connection between FCI and CDW is not guaranteed and may depend on the underlying lattice model.
In this work, we propose and investigate a preparation scheme of FCI states that is based on their adiabatic connection to a CDW in an effective TT limit (see Fig. 1 for an illustration). The key principle of our approach is to effectively modify the spatial dimension of the system without changing its actual physical geometry. Concretely, we tune the ratio of inter-site couplings (kinetic energy) along x and y direction, which acts as a proxy for the system's aspect ratio [27,28]. To gauge the practicability of this general approach, we apply it to a number of different models that are accessible to state-of-the-art experimental platforms. Using numerical exact diagonalization (ED), we show the existence of an adiabatic path between a one-dimensional CDW and the ν = 1/2 bosonic Laughlin state in the semi-discrete coupled wire [29,30] model, as well as the Harper-Hofstadter-Hubbard [31,32] model in well-chosen geometries. Importantly, the many-body gap always increases along this path; its minimal value is reached in the CDW phase and does not depend on system size, as shown by our analytical calculations of the gap in the TT limit. Our results provide a generic recipe for the preparation of FCI states in quantum simulators, where the platform-dependent limiting factor regarding the amenable system sizes is given by the range in which the stronger coupling can be tuned experimentally.
The remainder of this paper is structured as follows. In Sec. II, we briefly review the TT limit of the continuum FQH problem. In Sec. III, we provide an asymptotic treatment of the TT limit as reached through strongly anisotropic coupling in various models with at least one discrete spatial direction. Specifically, we demonstrate the formation of a CDW and derive analytical estimates for the excitation gap in the interacting semidiscrete coupled wire [29,30] and Harper-Hofstadter models. We also extend the Kapit-Mueller model [33] to incorporate anisotropic couplings, and show that it undergoes a band gap closing upon increasing the anisotropy, which precludes an adiabatic connection between FCI and CDW ground states. In Sec. IV, we corroborate our analytical results by demonstrating the adiabatic transition between the FCI and the CDW phases through ED simulations using the full coupling anisotropy range. Finally, we present a concluding discussion in Sec. V.
II. SYNOPSIS OF THE THIN TORUS LIMIT IN THE CONTINUUM
We start with a brief review of the TT limit of the fractional quantum Hall effect in the continuum. The FQH effect originates from the effect of a magnetic field on a two-dimensional gas of interacting charged particles. It is observed in 2D electron gases in solid state physics, but may also emerge in a rotating ultracold gas of neutral bosons, where the effect of a magnetic field is emulated by the Coriolis force [34][35][36][37]. Although this paper focuses on bosonic systems, this section applies equally well to fermions; we adopt the traditional notations of solid state physics for simplicity.
The single particle eigenstates of a free particle gas in a perpendicular magnetic field B form extensively degenerate Landau levels that are separated by a gap ∆ B = eB m . Their degeneracy per area is the number of flux quanta N φ piercing that area. In the Landau gauge A = (By, 0, 0), the eigenstates of the lowest Landau Level (LLL) take the form Ψ LLL kx (x, y) = e ikxx e −(y+kxl 2 with the magnetic length l B = eB and k x the momentum along the translation invariant x-direction. To avoid boundary considerations in a finite system, we consider the torus geometry with dimensions L x × L y and magnetoperiodic boundary conditions (MPBC) Ψ(x+m x L x , y+ m y L y ) = e iφmyLyx Ψ(x, y) ∀m x , m y ∈ Z. The consistency of the MPBC requires an integer number of flux quanta N φ = BLxLy 2π . Then, the LLL is spanned by N φ single particle states Ψ LLL n , n = 0, 1, ..., N φ − 1, which are still localized with a Gaussian decay length l B along the y-direction, and possess a well defined momentum k x proportional to the orbital index n [38,39].
We consider a generic two-body interaction V (|r|), which falls off with increasing distance |r| between a pair of particles, where r is their relative coordinate. As long as the gap ∆ B between Landau levels is large enough compared to the interaction strength, one may treat the FQH problem entirely in the lowest Landau level (LLL) by projecting the interaction. The following discussion is most intuitive for the case of an infinite cylinder, i.e. finite L x and L y → ∞, but it can be generalized to the torus [17,20]. For the infinite cylinder, the LLL eigenfunctions can be taken as the same Ψ LLL kx as for the free system, but with discretized momenta k x ∈ (2π/L x )Z, such that the orbitals of the lowest band are arranged in discrete steps of a = 2πl 2 B Lx along y-direction. The projected interaction (which we indicate from now on by a tilde on the operator) can be brought to the form where the field operators c j are labeled by the orbital index j of the lowest band, yielding a one-dimensional problem with lattice constant a [40]. The V m i−j describe the pair hopping amplitude for two particles hopping m obitals to the left and right, respectively, and depend on the distance i − j of the target orbitals and the hopping distance m. Using the Fourier components Lx x of the interaction potential, they can be written as Note that the conservation of the orbital index i + j in Eq. (1) amounts to total x-momentum conservation due to translation invariance [40].
In the limit L x → 0 of a thin cylinder, the m = 0 terms in Eq. (1) are exponentially damped such that we only retain the electrostatic repulsion terms V 0 i−j between the orbitals of the LLL, which decay with orbital distance i − j due to the finite range of V (|r|). The projected interaction thus reduces to a repulsive electrostatic potential that decays rapidly with distance for L x → 0, and a FQH ground state at fractional filling of the LLL will generally evolve into a CDW minimizing the electrostatic repulsion [5,40].
For the torus case where L y is finite, the TT limit is defined as the limit of small aspect ratio L x /L y , taken for constant B and area L x L y to preserve the integer number of flux quanta N φ . Since the LLL eigenfunctions on the torus are still strongly localized along y-direction, the projected interaction reduces to an electrostatic repulsion in the TT limit as well and the ground state at fractional filling of the LLL transitions into a CDW [17,20], whose excitation gap has been calculated analytically [41]. This transition has been conjectured to be adiabatic, as is supported by all numerical evidence so far [14][15][16][17][18][19][20]. In the following, we will investigate how the counterpart to this well studied TT limit may be effectively achieved experimentally in systems with at least one discrete spatial direction by tuning the anisotropy of the kinetic energies rather than changing the geometry of the system.
III. FRACTIONAL CHERN INSULATORS IN THE THIN TORUS LIMIT FROM HOPPING ANISOTROPY
Fractional Chern insulators (FCI) are lattice analogs of FQH states, whose stability in numerous lattice models has been demonstrated analytically and numerically [22,23]. They represent a promising alternative to the continuum model presented in the previous section in view of preparing FQH states in engineered quantum platforms. CDWs also emerge in the one-dimensional limit of FCI models [25][26][27], obtained by reducing the number of lattice legs, but the behavior of the manybody gap across this dimensional transition is unknown. In this section, we reach the TT limit of three discrete or semi-discrete models by tuning the ratio of coupling energies along the x and y directions. Thanks to an asymptotic treatment, we demonstrate the emergence of a CDW ground state and derive an analytical expression for the many-body gap in the TT limit for two of them.
A. Roadmap
We consider three different lattice models: the coupled wire [29], Harper-Hofstadter [31], and Kapit-Mueller [33] models, which all have a lowest band with a Chern num- (3,5)). Tuning the amplitude J of the interwire coupling between moderate values compareable to the magnetic recoil energy ER and very large values J ER induces a phase transition between a topological FCI phase (red) and a trivial CDW (grey). J and ER represent the kinetic energy scale along x and y direction, respectively. The Harper-Hofstadter model (Eqs. (8,9)) yields a similar phase diagram for certain geometries, where the hoppings Jx, Jy take the roles of J, ER, respectively.
ber C = 1. We fix the number of bosons such that the filling fraction in the lowest band is ν = 1/2, and turn on contact two-body interactions of amplitude U . This type of interaction is relevant in ultracold atom experiments, since bosons experience s-wave scattering. In the isotropic limit, with well chosen kinetic parameters, these conditions lead to the emergence of a FCI ground state akin to the Laughin 1/2 state in all three models, as shown in previous numerical studies [30,32,33,[42][43][44].
To reach the effective TT limit of a lattice model, we tune the anisotropy of the kinetic energies following the intuition that in a nearest-neighbor tight-binding model, the ratio Jx Jy of hopping constants scales with the ratio of lattice constants as ay ax ∝ Jx Jy (see Appendix A for a first-principles derivation). In turn, when tuning the ratio Jx Jy externally, the effective aspect ratio of the system should scales as Ly Lx ∝ Jx Jy . In our setting the kinetic energy scale in the y direction is fixed, whereas the hopping strength J x in x-direction is assumed to be tunable to adiabatically change the effective aspect ratio of the lattice setup by changing the effective distance between the legs. In state of the art experiments on ultracold atoms trapped in optical potentials, the hopping J x may be realized as a photon-assisted tunneling process, and is thus naturally tunable [45,46].
In the effective TT limit of J x J y , we expect a strong analogy between lattice and continuum models. Namely, we expect the single-particle orbitals of the lowest band to experience an increasingly tight Gaussian localization along the weak coupling direction, accompanied by the suppression of the overlap between neighboring orbitals and in turn the emergence of a CDW ground state.
The coupled wire and the Harper-Hofstadter model are found to behave this way and permit an approach similar to the continuum approach reviewed in Sec. II: after projecting the interaction Hamiltonian onto the lowest band, we derive the effective 1D Hamiltonian in the TT limit, explicitly show the emergence of a CDW ground state, and calculate its excitation gap analytically. Fig. 2 summarizes these results in a qualitative phase diagram.
B. Coupled wires with tunable hopping
An array of coupled quantum wires with a synthetic perpendicular magnetic field provides a semidiscrete setup [29] to realize FCI phases and is within reach of current experimental methods using cold atoms in optical lattices [30]. The synthetic magnetic field is realized by the Peierl's phase e iφy of the interwire hopping J x (y) = Je iφy , such that a system of N x discrete wires is pierced by a number N x N y of magnetic flux quanta. This defines the magnetic length l B = 2π φ as the relevant length scale, and the wire length is N y l B . The non-interacting Hamiltonian for atoms of mass m takes the form where a is the distance between wires. The relevant energy scale of the problem is given by the magnetic recoil energy E R = 2 φ 2 2m . We use periodic boundary conditions (PBC), which imposes N y ∈ N. Starting from the decoupled limit and increasing the coupling J results in changes to the single-particle properties, which favor the emergence of a FCI ground state: the lowest band flattens, the associated Berry curvature becomes more homogenous, and the band gap increases. Consequently, around J/E R ≈ 1, the many-body ground state in the presence of contact interactions is a FCI in the Laughlin phase, as numerically confirmed in Ref. 30. Further increasing the interwire coupling, the limit J/E R 1 corresponds to the effective TT limit outlined in Sec. III A.
We now project the contact interaction onto the lowest band of the coupled wire model H CW 0 , assuming that the interaction strength U is small compared to J. This approximation becomes increasingly accurate in the TT limit, and a gap closing above the lowest band only occurs for J = 0.
We start by deriving the eigenfunctions of Eq. (3) in the limit of large J. Generally, the separation ansatz Ef (y) for the y-component of the wavefunction. For large J, the cosine potential can be approximately treated like a harmonic potential by expanding it up to quadratic order, the lowest energy eigenfunctions are then Gaussians centered in the minima. This yields a mixed real-momentum space Wannier basis [47] for the lowest band of Eq. (3) spanned by the discretized versions of the LLL wavefunctions in the Landau gauge, i.e. up to normalization with y n = (2n − 1) π φ , n = 1, 2, ..., N y , with a constant spacing of ∆ y = l B Nx between the centers of neighboring orbitals. Before projection, the contact interaction in this semi-discrete setup reads The projection of H CW I follows immediately from the above eigenfunctions ϕ n,kx . Indeed, a single field operator is projected through its expansion in the singleparticle basis, dropping all terms but those in the lowest band, such thatΨ † x,y = n,kx ϕ n,kx (x, y)c † n,kx in the limit of large J. Additionally, the projection of a normal ordered string of field operators is just the string of the individual projections. Carrying out the sum over x by using the orthogonality relation of plane waves, and performing the Gaussian integral over y, we obtain (see Appendix B for details) where τ = 2 J/E R (π/N x ) 2 is a dimensionless measure of the hopping anisotropy. The field operators c n,kx have been relabelled with the index j ∈ {1, 2, ...N x N y } reflecting the arrangement of the orbitals along y direction, similar to the continuum FQH problem (cf. Eq. (1)), leaving us with a one-dimensional problem of lattice constant ∆ y . Eq. (6) is generally justified for J large enough to localize the eigenstates in the valleys of the cosine potential, which happens independently of the number of wires N x . If now J is increased further to yield large values of τ , we can truncateH CW I to first order in e −τ . The projected Hamiltonian thus reduces to a 1D lattice model with nearest-neighbour density-density interaction, leading to the emergence of a CDW at half filling ν = 1/2. The excitation gap above this ground state can also be inferred from the truncated Hamiltonian in Eq. (6). The low-energy excitations consist of configurations with two particles on neighboring sites, the excitation gap is thus The expression of the gap depends explicitly on the number of wires N x and on the coupling strength J through the dimensionless parameter τ , yet the gap in the TT limit does not depend on system size at a given τ . Indeed, according to our expansion of the projected Hamiltonian H CW I (cf. Eq. (6)), the transition to the TT regime is controlled by the value of τ , independently of system size.
In conclusion, we expect the formation of a CDW in the coupled wire model at a fixed value τ TT of the anisotropy τTT π e −τTT independent of system size. In Sec. IV, we will present numerical calculations confirming this intuition, and indicating that the transition happens around e −τTT ≈ 0.2, with a finite excitation gap of
C. The anisotropic Harper-Hofstadter model
We now turn to the fully discrete Harper-Hofstadter-Hubbard (HH) model [31,32], which was implemented in cold atom experiments [48,49], and is considered as a candidate for the realization of FCI states of cold atoms [9-12, 32, 42-44, 49]. The HH model consists of a square lattice with nearest-neighbour hopping and a uniform magnetic flux per plaquette 2πφ implemented through Peierl's substitution. The kinetic part of the Hamiltonian reads where c † m,n creates a boson on site (m, n), and the amplitude of hopping terms along the x, y-direction J x , J y are tuned to navigate between the isotropic (J x = J y ) and TT (J x J y ) limits as explained in Sec. III A. We focus on fluxes φ = 1 n with n ∈ N and periodic boundary conditions (PBC), such that the magnetic unit cell consists of 1 φ lattice sites along the y direction. We call N x , N y the number of unit cells along the x, y direction. The contact interaction of strength U writes In the isotropic limit, for sufficiently low flux φ 1/3, numerical simulations [32,[42][43][44] have established that the HH model with strong contact interactions hosts a FCI at bosonic filling factor ν = 1/2. Ref. [10] noted the existence of a continuous phase transition to a trivial state in the limit of decoupled wires (J y = 0), accompanied by a gap closing and reopening at intermediary J x /J y . As a fundamental difference to the approach presented here, this scheme relies on a finite-size gap. Without contradicting the findings of Ref. [10], we find that for some well-chosen geometries, it is possible to reach the TT limit continuously without closing the many-body gap, as we explain below.
We first notice an important single-particle property of the HH model: in contrast to the CW model, the lowest band does not necessarily become perfectly flat upon reaching the TT limit J y /J x → 0. Indeed, in the TT limit, the HH model reduces to a set of decoupled wires with dispersion E n (k x ) = −2J x cos(k x − n2πφ) on the n-th wire. The bandwidth of the lowest band then depends on the discretization of the momenta k x , and perfect flatness is only achieved for a system length N x = 1 φ in units of the lattice spacing (or any divisor of 1 φ ). In a generic geometry, the finite kinetic energy in the lowest band thus competes with the interaction, which can give rise to additional phase transitions and many-body gap closings, as we will show in the numerical section Sec. IV. To avoid these complications, we restrict our analytical treatment of the HH model's TT limit to N For a lattice geometry N x = 1 φ , N y , the gap to the second band of the HH model in the TT limit is see Appendix C for details. For a large enough band gap ∆ band , the interaction Hamiltonian H HH I can be projected to the lowest band of the single-particle Hamiltonian H HH 0 . For this, we expand the field operators in the Bloch basis as c j,α = 1 √ NxNy k e ikj β u α,β (k)γ k,β , where u α,β (k) is the unitary matrix that contains the eigenvectors of the Bloch Hamiltonian H HH 0 (k). The general expression of the projected Hamiltonian is then obtained by normal-ordering and dropping all terms but those with β = 1 as where we have dropped the subscript β.
We now conduct a perturbative analysis of the pro-jected Hamiltonian Eq. (11) in the TT limit for geometries where N x = 1 φ , N y = 1, which corresponds to a square lattice of 1 φ × 1 φ individual sites. For Jy Jx → 0, the HH model reduces to a set of decoupled wires and therefore u α (k) = δ α,nx with n x the nearest integer to kx 2πφ (see Appendix C for more details). As a result, Eq. (11) reduces to an on-site density-density interaction independently of the system geometry. For increasing Jy Jx , we expect the eigenstates u α (k) to spread out over more wires, so that longer-range terms will gradually appear in Eq. (11). To investigate this behavior for the geometry of N x = 1 φ , N y = 1, we use the hopping ratio Jy Jx as a perturbative parameter and express u α (k x ) using non-degenerate perturbation theory up to linear order This allows for an expansion of Eq. (11) asH HH We can further simplify the projected Hamiltonian using degenerate perturbation theory in the TT limit. For J y = 0 (and half filling ν = 1/2 of the lowest HH band), the degenerate ground state manifold ofH HH I consists of all configurations where at most one particle sits in each orbital. It is separated by a gap of order one from the lowest energy excited states, consisting of all combinations with two particles in one of the orbitals. Since this gap is large compared to In conclusion, the ground state of the HH model in the TT limit is a CDW, with an excitation gap where A 1 and A 2 are the dimensionless parameters defined in Eq. (12). Following a reasoning similar to Sec. III B, we expect the formation of the CDW state at a fixed value of J 2 y J 2 x (A 2 ) 2 independent of system size (cf. Eq. (13)). Our numerical data (see Sec. IV) indicates that this happens around a hopping ratio of Jy Jx ≈ π 2 φ 2 . The corresponding excitation gap will be finite with a value evaluated from Eq. (14) as ∆ HH TT ≈ U φ 4 . For further details on the derivations of the results in this section, please refer to Appendix C.
D. The anisotropic Kapit-Mueller model
Finally, we consider an anisotropic version of the Kapit-Mueller (KM) model [33]. In the isotropic limit, the KM model with bosons at filling ν = 1/2 interacting through a contact interaction, provides an exact parent Hamiltonian to the Laughlin wavefunction [33]. Here, we demonstrate how the hopping amplitudes can be manipulated to tune the effective aspect ratio of the system while keeping Laughlin's wavefunction as the exact many-body ground state. However, in our anisotropic KM model, the closing of the band gap prevents the adiabatic connection between the Laughlin state and a CDW state, in contrast to the coupled wire and the HH model discussed above.
The KM model takes the form with complex notation z j = x j + iy j , x j ∈ N, y j ∈ N, and where z = z k − z j is the distance between the connected sites and t = 1 in the following. For any flux 0 < φ < 1, the single particle eigenstates of the lowest band can be chosen as the LLL wavefunctions in the symmetric gauge Ψ LLL sym,n (z) = (z) n e −πφ 2 |z| 2 , n ∈ N, discretized to the lattice, and the lowest band will be exactly flat with energy = −1 [33]. Since the Laughlin wavefunction is composed of LLL single particle wavefunctions and vanishes if two particles are at the same position, it is the ground state of Eq. (15) if any contact interaction is added.
The KM Hamiltonian is readily extended to magnetoperiodic conditions in a finite geometry of N x × N y sites. This can be done by replacing J(z j , z k ) in Eq. (16) with where the sum runs over all R = (nN x + imN y ) with n, m ∈ Z. The purpose of the phase factor in Eq. (17) is to compensate the phase factor resulting from a magnetic translation. As a result, LLL single-particle wavefunctions satisfy MPBC Ψ LLL Nx,Ny (z + nN x + imN y ) = e iπφ(nNxy−mNyx) Ψ LLL Nx,Ny (z) and the KM's lowest band is still exactly flat and spanned by these wavefunctions, provided that there is an integer number of flux quanta N φ = φN x N y . Therefore, in the torus geometry, the torus generalization of the Laughlin wavefunction remains the many-body ground state of the KM model in the presence of contact interactions.
We now introduce an anisotropic extension of the KM model, through a parameter α > 0. We want our anisotropic model to preserve the key property of the KM model, i.e. the lowest band single-particle wave functions should be LLL single-particle wave functions. This can be achieved by transforming W (z) from Eq. (16) into while leaving the rest of the model invariant. The singleparticle eigenstates in the lowest band of the anisotropic KM Hamiltonian will then be LLL wavefunctions living on a torus of size αN x × Ny α such that the aspect ratio scales as α 2 . This follows from the fact that a rescaled LLL wavefunction of the form obeys the same boundary conditions as Ψ LLL Nx,Ny , namely Ψ α Nx,Ny (z + nN x + imN y ) = e iπφ(nNxy−mNyx) Ψ α Nx,Ny (z). This phase factor still cancels with the phase from the MPBC extension in Eq. (17) and the term W α from Eq. (18) is chosen such that the eigenstates of the lowest band can be constructed by evaluating the rescaled LLL wavefunctions Ψ α Nx,Ny at the lattice coordinates. The lowest band contains N φ = φN x N y states and remains exactly flat at an energy of α = − R G(R)e −π 2 (1−φ)|R α | 2 with the sum running over all R = nLx + imN y with n, m ∈ Z and R α = nαLx + imN y /α. This can be shown in analogy to the original KM model, see Appendix D for a detailed calculation. As a consequence, the Laughlin state on a torus of αN x × Ny α is the exact many-body GS of the anisotropic KM model with a contact interaction at any α.
Importantly, in our anisotropic KM model, the singleparticle gap above the lowest band appears to close very fast with α in the TT limit (see Appendix D), such that a competing GS involving orbitals from higher bands may form. Therefore, the many-body gap may also close before the CDW regime is reached. This analytical observation is confirmed by numerical ED simulations on the full (unprojected) lattice system that we present in Sec. IV. Upon tuning α continuously, we find that the many-body gap closes long before the formation of a CDW can be identified, thereby rendering this generalization of the KM model not suitable for adiabatic FCI state preparation.
IV. NUMERICAL SIMULATIONS
FIG. 3. ED data for the coupled wire model as described by Eqs. (3,5) with Nx = 6, Ny = 3, p = 9 particles, and interaction strength U = ERlB. All energies are measured in units of the magnetic recoil energy ER. Top panel: Gaps ∆1,2, ∆2,3 between the first and second and second and third eigenstate as a function of the interwire coupling J along with the perturbative prediction ∆ CW (cf. Eq. (7)) for the excitation gap. The zoom-ins show the low-energy spectrum at J = ER and J = 19ER, respectively. Bottom panel: Particle entanglement spectrum of each twofold degenerate ground state, as obtained by tracing out NB = 5 particles. The colors indicate the number of states expected from Eqs. (20,21) for the FCI and CDW phases. The point at which the transition to the CDW is complete is marked by a red line in both plots, which corresponds to the critical value τTT = ln(5) of the dimensionless anisotropy parameter.
To study the full parameter range between the analytically tractable TT limit and the isotropic FCI regime, we now present exact diagonalization (ED) data for the coupled wire model (see Sec. III B) and the HH model (see Sec. III C). For all numerical data, the filling factor is ν = 1/2, and periodic boundary conditions are imposed. To facilitate the calculations, we project the con-tact interaction to the lowest band, without performing any additional truncation. For the coupled wire model, the projection for general interwire coupling J was calculated using the lowest band eigenfunctions derived in Ref. [50] and for the HH model the projection was calculated using Eq. (11). Additionally, we present a dataset for the anisotropic KM model, which confirms that the excitation gap closes before the ground state reaches a CDW configuration. There, the interaction term is not projected due to the narrowing single-particle band gap.
FIG. 4. The excitation gap ∆2,3 above the twofold degenerate ground state of the coupled wire model (Eqs. (3,5)) showing a collapse of different system sizes as a function of the dimensionless anisotropy parameter τ = 2 J/ER(π/Nx) 2 . The legend entries indicate the system size as ∆2,3(Nx, Ny), and the particle number is p = NxNy 2 . All data points fall onto the same curve and are almost undistinguishable, except for the smallest system size (red points, p = 6 bosons). For comparison, the analytical prediction ∆ CW (Eq. (7)) is shown as a continuous line. The value τTT = ln (5), which roughly marks the CDW transition for all studied system sizes, is indicated as a red line.
We first focus on the coupled wire model defined in Eqs. (3) and (5). In Fig. 3, we show the ED results for a system of N x = 6 wires of length N y = 3 in units of the magnetic length l B . In the thermodynamic limit, both the FCI and the CDW are twofold degenerate on the torus, due to the the topological order of the FCI, and to the broken translation symmetry of the CDW. In finite-size numerical data, the degeneracy is not exact, but there may be a small lifting ∆ 1,2 between the first and second eigenstate. The many-body gap is the energy difference ∆ 2,3 between the second and third eigenstates. ∆ 1,2 and ∆ 2,3 are shown in the upper panel of Fig. 3. The twofold quasidegeneracy of the GS is unbroken throughout the whole parameter range (i.e. ∆ 1,2 ≈ 0). Moreover, the excitation gap ∆ 2,3 remains finite along the path from the CDW to the FCI phase, with no minimum indicating any phase transition. Finally, the numerically obtained ∆ 2,3 matches the analytical excitation gap estimate ∆ CW (Eq. (7)) in the limit of large interwire coupling J (TT limit). This confirms the mechanism for the formation of a CDW in the TT-limit proposed in Sec. III B.
In addition to the energy gap, we establish the transition between the FCI and the CDW phase through the particle entanglement spectrum (PES) [51]. The PES is |GS j GS j | associated with the d-fold degenerate ground state. For a FQH or FCI system in a Laughlin phase, the PES features a topological entanglement gap, where the number of eigenvalues below the gap essentially counts the number of quasihole states that would be created by removing N B particles from the system [51]. If the lowest band is filled to a fraction of ν = 1 m by p particles of which N B are traced out, the counting can be inferred from a generalized Pauli exclusion principle [52,53] as The PES of an m-fold degenerate CDW state at filling 1 m also features an entanglement gap, but the number of eigenvalues below the gap is lower [25]. It is given by In the TT limit, the entanglement gap becomes infinite, and the eigenvalues below the gap become exactly degenerate, since all CDW configurations are orthogonal Slater determinant or permanent states, such that the counting is simply the number of ways to remove N B particles out of p times the degeneracy m. Throughout the paper, we use m = 2 since we work at half filling of the lowest band (hence the degeneracy of the Laughlin state and CDW on the torus is d = 2).
The lower panel of Fig. 3 shows the PES of the twofold degenerate ground state of the interacting CW model for the same parameter regime as the upper panel, using a color code to represent the counting. The first N CDW states are red, the ones above that are purple until N Laugh is reached, and the rest is grey. The PES features the expected quasihole count N Laugh below the entanglement gap in the FCI phase at moderate coupling J, and the expected CDW count N CDW in the TT limit of large J. This confirms the respective FCI and CDW nature of the ground state in these two regimes, and provides additional evidence for the adiabatic phase transition between the two. Note that the accumulation of data points at the top of the panel starting at J/E R ≈ 13 is an artifact of limited machine precision.
In general, we consider the transition to the CDW complete once there is a significant gap in the PES above the first N CDW eigenvalues, these N CDW eigenvalues are exactly degenerate, and the analytical prediction for the CDW excitation gap matches the numerically obtained FIG. 7. ED data for the anisotropic KM model at half filling as described in Sec. III D with φ = 1 2 , p = 4 particles, a system size of Nx = 4, Ny = 4 sites, and hardcore interaction. The anisotropy parameter α 2 is proportional to the geometric aspect ratio, such that the TT limit (α 1) appears on the left while the isotropic limit (α = 1) appears on the right hand side of the figure. Top panel: Ground state degeneracy lifting ∆1,2 and excitation gap ∆2,3. The energy is measured in units of t (cf. Eq. (16)). Bottom panel: Particle entanglement spectrum of the twofold degenerate ground state obtained by tracing out NB = 2 particles. The colors indicate the number of states expected from Eqs. (20,21) for the FCI and CDW phases.
value, which is indicated as a red line in Fig. 3. Our numerical data indicates that this happens for a fixed value τ TT = ln(5) of the dimensionless anisotropy parameter τ in the CW model, regardless of system size in agreement with the previous analytical analysis.
The numerically obtained excitation gaps for all system sizes in the CW model are summarized in Fig. 4 as a function of τ along with the analytical prediction from Eq. (7). Asymptotically, all datasets collapse onto the analytically obtained curve and the transition point to the CDW is consistently located around τ TT = ln(5), indicated again by a red line. This corroborates our analytical treatment. The only curve with a slight deviation from the analytical prediction belongs to the smallest system size of N x = 4 wires. There, even larger values of τ correspond to moderate values of J such that the eigenfunction approximation from Eq. (4 ) is not as good.
In Fig. 5, we present similar ED data on the HH model (Eqs. (8) and (9)) at half filling for a system of size N x = φ −1 , N y = 1 at φ = 1 18 as a function of the hopping ratio Jx Jy . As a reminder, N x × N y is the number of 1 × φ −1 magnetic unit cells, so that the total number of lattice sites is φ −1 × φ −1 . The upper panel shows the numerically obtained energy gaps between the first three eigenstates. The analytical estimate for the excitation gap ∆ HH (Eq. (14)) matches with the numerics in the TT limit ( Jx Jy 1). As for the coupled wire model, the twofold GS quasidegeneracy remains unbroken and the excitation gap remains open along the way from the FCI regime to the CDW in the TT limit. The associated PES (with a similar color code as Fig. 3) in the lower panel of Fig. 5 further confirms the phase transition.
To complement the analysis of the HH model, we present data for a geometry of N x = 8, N y = 1 and φ = 1 6 in Fig. 6. This geometry does not satisfy the TT limit exact flat band requirement derived in the analytical section Sec. III C, since N x is not a divisor of φ −1 . As a result, we do not necessarily expect an adiabatic path between FCI and CDW ground states. We indeed find that the ground state does not transition into a CDW in the TT limit, but instead the ground state degeneracy is broken along the way, as the top panel shows. The PES data further illustrates the breakdown of the FCI phase, and the absence of a CDW phase in the TT limit Jx Jy 1. Finally, we show data for the anisotropic KM model with φ = 1 2 , p = 4 particles, a system size of N x = 4, N y = 4 sites, and hardcore interaction in Fig. 7. Our ED results are obtained without projecting the interaction to the lowest band, since the closing of the band gap makes the projection a poor approximation. Top and bottom panel show the energy gaps and the PES, respectively, as a function of the anisotropy parameter α 2 , which is proportional to the effective physical aspect ratio of the system. Around α 1, the twofold degenerate ground state is a FCI, with a finite excitation gap. Upon decreasing α to approach the TT limit, the excitation gap closes around α 0.15. This value of α is too large (too far from the TT limit) to permit the emergence of a CDW ground state, as shown by our numerical data. This is consistent with our analytical analysis presented in Sec. III D.
V. SUMMARY AND OUTLOOK
We have demonstrated how an effective TT limit of various (semi-)discrete FCI models can be achieved through a strong anisotropy in the kinetic energy that is practically realized by a tuning of hopping amplitudes. In particular, both for the coupled wire model and the HH model, we find that the Wannier functions of the lowest Chern band localize so as to decrease their overlap with increasing hopping anisotropy. That way, the projection of a local interaction term to the lowest band continuously reduces to a density-density interaction of an effective one-dimensional system. In this effective TT limit, the projected problem becomes exactly solvable and its groundstate at fractional filling is a CDW, analoguous to the TT limit of the continuum FQH effect achieved by changing the geometry of the system. The formation of the CDW in the effective TT limit happens adiabatically for all system sizes amenable to numerical study, and we expect it to remain adiabatic for arbitrary system sizes based on a finite size scaling analysis. This situation is different for the KM model, which we extend by introducing a parameter α that modifies the hoppings such that Laughlin's wavefunction remains the exact GS while the effective aspect ratio is tuned as α 2 . There, we do not find room for adiabatic state preparation, as the single particle gap above the lowest band closes quickly for anisotropic aspect ratios such that competing GSs form in the other bands.
In contrast to the conventional TT limit, where the system size L x is changed in a gedankenexperiment, our present analysis of an effective TT limit leaves the physical geometry of the system unchanged, and instead relies on the practical knob of tuning a hopping anisotropy. This comes at the price that the hopping anisotropy required to reach the trivial CDW regime scales with the physical size L x of the system. Very generally speaking, the effective TT limit may thus be seen as a physical mechanism to systematically amplify the finite size gap of a topological quantum phase transition that would necessarily occur in the two-dimensional thermodynamic limit (L x = L y → ∞) for any finite hopping parameters. In this sense, our results reveal a path for the adiabatic preparation of FCI states from trivial CDW states, where the main experimental challenge limiting the accessible system sizes lies in the realization of a wide range of hopping amplitudes.
We note that the possibility of inducing a CDW regime in an FCI system through the hopping amplitude t ⊥ between the chains of a two-dimensional flux ladder has been considered in an earlier work [27]. There, the case of a thin cylinder of few chains is studied, where already small values of t ⊥ can induce a CDW. Subsequently, t ⊥ is used as a perturbative parameter to show that the CDW amplitude decreases with increasing number of chains at fixed t ⊥ , which is in qualitative agreement with our findings. The main goal of Ref. [27] is to study the fractionally charged pretopological excitations that emerge at the domain walls between different CDW configurations on a thin cylinder of two coupled chains.
While we have focused on models with contact interactions and half filling of the lowest Chern band, we expect that our results for the coupled wire and the HH model could be directly generalized to FCI states at different filling fractions that would require longer ranged interactions. This is because the formation of the CDW in the TT limit seems to rely mainly on the localization of the single particle orbitals, which is independent of the filling fraction within the lowest band.
ACKNOWLEDGMENTS
Acknowledgments.-We acknowledge financial support from the German Research Foundation (DFG) through the Collaborative Research Centre SFB 1143, the Cluster of Excellence ct.qmat, and the DFG Project 419241108. Our numerical calculations were performed on resources at the TU Dresden Center for Information Services and High Performance Computing (ZIH).
Assuming a system length of L x , L y in x, y direction, we perform a discretization using N x , N y sites in the respective directions. The integrals become sums and we obtain We use the finite difference version of the second derivative x to treat the ∆φ = ( d 2 dx 2 + d 2 dy 2 )φ term in Eq. (A2), leading to Assuming periodic boundary conditions or considering the fact that there is no x i −∆ x for x i = 0 and open boundaries (same goes for all other boundary terms), we may shift the sums by one ∆ x , ∆ y , respectively, and arrive at where we dropped the constant potential term as it purpose is to shift the energy minimum to zero. The ratio of the hopping is now Assuming a fixed number of discretization steps N x , N y , corresponding to a fixed number of atoms in our Hofstadter model, the physical aspect ratio should scale as J x /J y as we claimed. with y n = (2n − 1) π φ , n = 1, 2, ..., N y and q = J/E R φ 2 . Writing the projections of the field operators to the lowest band asΨ † x,y = n,kx ϕ n,kx (x, y)c † n,kx using these eigenfunctions, we can project the interaction term H CW I from Eq. (5) as This Hamiltonian contains pair hoppings between orbitals centered around position [(2n i −1)+k x,i a π ] l B 2 in y-direction. In total there are N x N y such orbitals with even spacing ∆ y = l B Nx , and we can assign them the integer index The position of orbital number l i is then l i ∆ y − l B 2 . We use the orthogonality relation x e ixkx = N x δ kx,0 and extend the limits of the integration to ±∞ (which is a negligible error since we work with PBC and assume J big enough to localize the orbitals much tighter than l B ) to obtaiñ δ kx,4,kx,1+kx,2−kx,3 e − q 2 ((y−l1∆y) 2 +(y−l2∆y) 2 +(y−l3∆y The orbitals being localized much tighter than l B implies that we only need to keep the terms where |l i − l j | << N x , such that δ kx,4,kx,1+kx,2−kx,3 can be taken as δ l4,l1+l2−l3 . This is incorporated by setting l 1 → i, l 2 → j, l 3 → j + m, and l 4 → i − m (with PBC on the indices). The integral can be calculated explicitly by completing the square and using ∞ −∞ e −α(x+β) 2 = π α to arrive at the expression from the main text.
Due to the finite energy gap above the lowest band for J y = 0, we may employ non-degenerate perturbation theory and use λ = Jy Jx as a perturbative parameter to expand the lowest eigenvector in a power series in λ. For a non-degenerate system H 0 + λV , the first-order correction to an eigenstate |n 0 of H 0 is given by where |l 0 and E 0 l are the eigenvectors and eigenenergies of H 0 . By setting V = D y and considering that the components of the lowest eigenvector of D x are simply u α (k x , λ = 0) = δ α,nx with n x = kx 2πφ and the eigenenergies are E 0 n = −2J x cos(2|n − n x |πφ), it follows immediately that where A 1 (λ) = (1 + 2λ 2 (A 2 ) 2 ) − 1 2 is a normalization factor and A 2 = (2[1 − cos(2πφ)]) −1 = (4 sin(πφ) 2 ) −1 . With this result, we can expand Eq. (11) up to second order in λ to arrive at Eq. (13).
the hoppings along y-direction are switched off while the hoppings along x-direction do not decay at all anymore. In that sense, the KM model breaks down into a number of isolated wires similar to the HH model (cf. Appendix C), but this time with very long ranged and slowly decaying hoppings which should lead to a flat dispersion. From the analysis of the HH model in a N x = 1 φ geometry, we saw that the wire dispersion is essential for the finite gap in the TT limit. Thus, we expect the gap of the KM model in the TT limit α → 0 to close, although the precise functional dependence on α is not clear.
Numerical calculations show that the closing happens very quickly with decreasing α such that no sufficient change of the aspect ratio is possible before the gap closes. As an example, we provide data for the single particle gap above the lowest band of the KM model at flux Φ = 1 2 and a system size of 4 × 4 and 8 × 8 sites in Fig. 8. The reopening of the single particle gap to a small value for the 4 × 4 system appears to be a finite size effect that vanishes at larger system sizes. Other values of the flux Φ yield similar results. | 12,237.4 | 2022-12-21T00:00:00.000 | [
"Physics"
] |
Correlation Filter of 2D Laser Scans For Indoor Environment
Modern laser SLAM (simultaneous localization and mapping) and structure from motion algorithms face the problem of processing redundant data. Even if a sensor does not move, it still continues to capture scans that should be processed. This paper presents the novel filter that allows dropping 2D scans that bring no new information to the system. Experiments on MIT and TUM datasets show that it is possible to drop more than half of the scans. Moreover thepaper describes the formulas that enable filter adaptation to a particular robot with known speed and characteristics of lidar. In addition, the indoor corridor detector is introduced that also can be applied to any specific shape of a corridor and sensor.
Introduction
Nowadays there are various application of lidars, laser scanners, rangefinders. Algorithms that use these sensors, such as SLAM, structure from motion and others, are highly demanded in modern robotics. Such sensors have a common disadvantage -they collect simultaneously too few and too much data. On the one hand there is too few data because it is impossible to smooth or approximate this data without significant loss of accuracy. At the same time there is too much data because it requires a lot of memory to store and process scans that appear every 30 milliseconds. This paper presents an algorithm of filtering 2D laser scans in application to indoor SLAM algorithm. The development is relevant because modern laser scanners collect data more than 30 times per second. There is no need to capture laser scans so frequently unless such laser scanner is mounted on a car moving with 60 km/h. In this case the capturing environment might extremely change in 0.03 sec. On the contrary, if a robot moves in indoor environment and it has an average speed of about 0.5-1 m/s such amount of dense point clouds from laser rangefinder is excess.
The suggested algorithm is applicable to indoor environments. Moreover, it is useful in situation when a robot that processes lidar data has limited computational resources. In this case, even if the robot stays at one place, it should not process every successive scan and therefore it can save its resources. Also, if a robot moves slowly and smoothly, it is also possible to drop the scans that bring no new information.
The idea is based on storing several successive scans in a window and comparing each upcoming scan to the scans from this window. If that scan strongly correlates to every scan from the window -it should be dropped. Experimental testing on MIT Stata Dataset [1] shows that it is possible to drop more than half of the scans for needs of SLAM algorithm without loss of accuracy.
Another important feature of this algorithm is that the process of filtering takes less calculation time than scan matching in SLAM. Otherwise, it would be redundant to use this filtering. Every SLAM algorithm has its own unique scan matcher and it was impossible to compete with every one of them so the reference value of scan matching complexity was set to O(100 * points in scan) iterations. In average, a scan matcher that processes scans in real time should work with approximately this speed. The suggested filtering method works significantly faster than that and it is proved both mathematically and on experiments.
The limitations of the suggested algorithm are the following. The algorithm is assumed to be launched in indoor environment. The robot should be equipped with a lidar than captures laser scans more than 30 times per second. The scan capturing rate should be known. The robot should move no faster than on several centimeters per scan capturing. The average robot speed should be known. The robot can run any 2D SLAM algorithm, since the filtering process is inserted before SLAM activity.
The paper structure is as follows: the known approaches for scan filtration and calculation of scan correlation are described in section 2. The description of the suggested method and it's modification for featureless corridors is presented in section 3. Experimental testing on a real data can be found in section 4.
Related work
The idea of filtering scans presented in this paper comes from the problem of reducing computation time for scan matching. The common problem is to reduce the amount of data that should be processed instead of iterating through every point of a laser scan. There are several well-known techniques for reducing dimension of a laser scan to decrease the time of scan analysis. On the one hand, it is possible to drop some parts of scans or even the whole scans if they for some reason bring no new information. On the other hand, a raw scan can be transformed into another more lightweight form that whould be processed faster.
Authors of [2], [3], [4] suggest extracting feature points from laser scans. This technique significantly decreases the dimension of input data. So further features might be considered, for instance, in Kalman filter. The disadvantage of such method is that the process of feature detection might take valuable time. Also, descriptors of such features should be robust to rotation and shift which is hard to achieve.
Another technique for reducing dimension is construction of histogram of a scan. For instance it was presented in [5], [6]. In [6] this technique was used as a core feature of a scan matcher is SLAM algorithm. A histogram is more flexible than a set of features since it is easy to tune the size of a histogram, length of columns etc. However, it requires accuracy in these parameters or otherwise a histogram would not work.
After the dimension of input data is reduced it is required to detect whether a new piece of data is the same as the previous one or not. The easiest way appears to be a scan matching, as in [2] or [7]. But the focus of this paper is on avoidong redundant matching if it is possible. Even for little dimensions of data scan matching process might take unreasonably much time.
Another known idea is to calculate correlation of histograms as it is described in [5]. In this approach the histogram is considered as a random variable with statistical characteristics and therefore it is possible to apply classic approaches of Pearson, Kendall and others [8].
Authors of [9] present the approach that is based on the similar mathematical apparatus. However their focus is to classify lidar data using spatial correlations. The approach that is described in that paper can be successfully applied to detect featureless areas of a single laser scan. However the focus of the suggested algorithm is to decide if the whole scan should be processed or not.
Also, there exists a completely different approach to estimate laser scan closeness to one another. The idea is based on loop closure from graph-based SLAM algorithm. There are various well-known works in this area, such as [10], [11], [12]. However, these approaches usually require more computation time than they can save. And their improvement in general requires reducing dimension of input data as well.
To sum up, the problem of making filtering faster than scan matching leads to the idea that it is necessary to reduce the amount of input data. After that, it is necessary to find out if the upcoming scan is valuable or not. Trying to extract more details from the scan would inevitably increase the calculation time. By details the specific parts of a scan or a shift between a current scan and a previous one are ment. If a scan is valuable, the filter should transfer an original scan to a core of a SLAM algorithm.
Correlation filter for 2D laser scans
The core idea of the suggested algorithm is to compare the current incoming laser scan to the previous one. If the latter is 'similar' than the current scan should not be processed. To avoid the noise in observations it is better to compare an incoming scan to several previous scans. Hence, a sliding window of scans appears that plays a role of reference for new incoming scans.
Laser scan representation
In general, a laser scan consists of several thousands points. The brute force calculation of scans correlation takes O(n 2 ) operations, that is greater than a million. To decrease this dimension it is possible to extract feature points, as was described in section 2. However, this might still take enormous amount of time.
Hence, instead of using raw laser scan data for calculating correlation, it is suggested to create a histogram for each scan. There are several approaches for creating a histogram from a laser scan. One of them is based on a division by ranges, and another one -by angles.
For each scan, the highest and the lowest values of a range are known. Therefore, it is possible to divide this range dispersion in several intervals, and then calculate the number of points that fits each interval.
Two successive scans should not differ significantly, so the range histograms should be close to each another. In practice, if a robot does not rotate, the difference between two scans is insignificant and the values of each column of histogram varies in few units. If a robot rotates the difference is more considerable. However, it is possible to update the approach of range histogram. Instead of calculating the quantity of points in each column, it is possible to calculate the average range of column. Hence, two successive histograms become more diverse, allowing to avoid excess dropping of scans.
There is another approach to create a histogram that is opposite to a division by range. Every laser scan is captured in polar coordinates where an angle is the second degree of freedom in addition to a range. Thus, it is possible to divide every scan into several intervals of angles and calculate the average ranges in these intervals. It is impossible to calculate the number of ranges because it is the same for each interval. It is also possible to calculate dispersion instead of average value.
Both approaches for creating a histogram decrease the amount of calculations. Instead of processing thousand points of laser scan for further calculation of correlation, it is possible to process several dozens of columns in histograms. It is important to mention that a laser scan is not replaced with a histogram in SLAM algorithm. The histogram is created only to calculate the correlation of scans.
In addition, experiments show that histograms correlation in every window are calculated faster than a scan matcher finds the best position for a scan. Therefore it is possible to filter excess scans and gain extra time resources for other routines. The specific numbers are presented in section 4.
Criteria of correlation
The next step after the histograms of each scan have been created, is to calculate their correlation. Methods of mathematical statistics can be applied here, considering the histogram of a scan as a random variable with unknown distribution. Since every histogram from a window in general is similar to each other, it is possible to assume that the distribution is the same.
There are several known approaches to calculate correlation of random variables: Pearson correlation coefficient [13], Kendall coefficient [14] and Spearman coefficient [15]. Pearson coefficient is calculated to a pair of random variables X, Y using the formula If random variables are observed n times and x i is the observation of variable X than the Pearson correlation coefficient is calculated with the following formula The value of this coefficient is between -1 and 1. A value of +1 means total positive linear correlation, 0 means no linear correlation, and -1 means total negative linear correlation. This correlation is called linear because of the following interpretation. The value of the first variable is put in abscissa of a plot, the value of the second variable -in ordinate. If the points with resulting coordinates belong to one line with positive derivative, then their correlation is equal to 1. If the derivative is negative, then the Pearson coefficient is equal to -1. If there is no way to draw any line -then the coefficient is 0.
Kendall and Spearman coefficients are used to measure the ordinal association between two measured quantities. It is a measure of rank correlation: the similarity of the data orderings when ranked by each of the quantities. The Spearman coefficient is defined as the Pearson coefficient between the rank variables. In other words, if X and Y are random variables with ranks rg X , rg Y the correlation coefficient is calculated as The calculation of Kendall coefficient is associated with the number of concordant pairs of random values. Let (x i , y i ) be a set of observations of random variables X and Y. Pairs (x i , y i ) and (x j , y j ) where i ¡ j are said to be concordant if either both x i > x j and y i > y j holds or both x i < x j and y i < y j . Otherwise they are called discordant. The Kendall tau coefficient is defined as To sum up, there are three well known approaches to calculate correlation. The main disadvantage of Kendall coefficient is the algorithmic complexity. It requires calculating the rank of random variable and then calculating the number of concordant pairs. In the worst case this might take Nlog(N) operations. Spearman coefficient is more lightweight, but it also requires ranking. Since the correlation is calculated for histograms of successive scans, it is a challenging problem to rank values in histograms in the correct way. This means that the histograms are, in general, similar, and it is necessary to capture every little fluctuation of values. Therefore, the rank function should be sensitive for these fluctuations and simultaneously be equitable.
Therefore, the Pearson correlation coefficient is the most suitable for the considered algorithm. Its complexity is O(n), it does not require ranking function, and it is sensitive enough to fluctuations of values in histograms.
Parameters and constants
At the moment, there are four places, where parameters adjustment are to be considered: • amount of columns in histogram -and also amount of points in each column; • size of a window; • threshold of intra window correlation -how much a new upcoming scan should be correlated to each scan in window. It is indicated below by P pair ; • dropping correlation threshold -combined value of scan similarity to scans in a window (P common ); These four parameters influence the amount and nature of dropped scans. First of them is the amount of columns in histogram, that was described in 3.1. All considered histograms have a common feature: the more columns are contained in a histogram, the more details of each scan are processed. Consider lidar used in MIT dataset, that captures scans consisting of approximately 1000 points. Then the division of these points in 50 columns means that each group of points contains in average 20 points. The division in 10 columns brings the groups with 100 points in each.
To determine the influence of columns number on the quantity of information, that is possible to be received from histogram two border situations can be considered. First situation implies, that each point of scan is contained in a separate column. Therefore, the histogram consists of 1000 columns. Correlation between two such histograms is very sensitive to every point and even noise, but it can show the true similarity of laser scans. On the other hand all can be gpoints in one column. In this case, all scans become similar, and there appears no way to distinguish one from another.
This example shows, that the larger amount of columns leads to the larger sensitivity of scan correlation to a minor difference of scans. Despite the intuitive opinion, that the higher sensitivity leads to the higher accuracy, in real data the high sensitivity might be, on the contrary, unpleasant. For instance, if a moving object appears in the area of laser scan view, it inevitably brings difference in two successive scans. Moreover every sensor brings noise in observations and sometimes this noise can be falsely interpreted for little distances as a difference between scans. Experiments results presented in section 4, show that 1000 points in laser scan, that are spread almost in the field of view of 3π/4, should be grouped in 15 or 30 columns.
Another important constant is the size of the window, that contains previous laser scans. The score of a current scan is equal to the correlation value multiplication of each scan from this window to a current scan. By correlation value the Pearson correlation coefficient is assumed. It is obvious, that if a robot with a laser scanner moves quickly, then large amount of scans in the window brings little final correlation coefficient. That means, that the higher robot's speed the lower number of scans in the window is appropriate.
Experimentally the formula appeared that links a windows size to an average speed of robot. Under the average speed here, the average distance in centimeters, that a robot passes between two scan captures, is assumed. This formula is heuristic and allows to link the property of scan capturing -speed -and the property of filter -amount of information, that should be in cache.
window size = 27 avg speed 2 (2) In order to clarify the influence of a window size it is necessary to determine two parameters, that are strongly related to each other and to a window size. The first parameter is the threshold for Pearson correlation coefficient of each pair of scans, and the second parameter is the common correlation coefficient, that is equal to the multiplication of coefficients. Since the Pearson correlation coefficient is calculated for two successive scans, that are captured with a little time difference, it is obvious, that they are, in average, very strongly correlated. That is why a threshold for a pair of scans should be no lower than 0.95, or even better -0.98. After calculating correlation coefficient of new scan to each scan in a window, it is necessary to combine all coefficients. The obvious and the well-known way to do that is to multiply them: This formula allows estimating the common threshold according to a chosen window size and a pair correlation coefficient. For instance, a threshold for a window containing 5 scans and pair correlation 0.98 is equal to 0.98 5 = 0.904.
Corridor detector
The general concept of the filtering method is based on the idea, that similar scans might be dropped. To achieve that, a histogram is built for each scan, and the correspondence of these histograms is calculated. If a robot stays at one place, it is obvious that it captures the same scans, and all of them except one might be dropped. But, if a robot moves, it is unlikely, that it receives the same scans. However, it is possible in featureless environments.
In case of indoor environment, the outstanding featureless environment is a corridor. When a robot moves through a corridor along the walls, there are only a few points that differ in successive scans. The example of such corridor scan can be found in fig 1. Obviously, corridors are the most challenging pieces of environment for scan matcher, and, therefore, scans of corridors should not be dropped at all. Otherwise, there is a probability to loose useful information. For such scans, it is easy to expect, that their correlation is very close to 1 and they will inevitably be dropped. Therefore, it is required to detect corridor scans before filtering and not drop them. The suggested method is based on an assumption, that the corridor walls are straight and featureless. Hence, the ranges in a laser scan change monotonically. The easiest way to check this is to look through every point of a laser scan and to calculate the sign of range difference of the current point and the next one. It is clear, that it is necessary to consider such differences in every quarter of the field of view independently. The explanation of this fact is presented in fig 2. If a robot is placed along the corridor, the difference between a 1 and b 1 should be positive. At the same time, the difference between a 2 and b 2 should be negative, despite all these four points are located in the same wall. Also, the difference between a 3 and b 3 should be positive, and the difference between a 4 and b 4negative. If a robot is placed perpendicularly to the corridor, these signs are inverted, but their relation stays the same.
The general algorithm of corridor detection is presented below If the resulting score is greater than a threshold, then the corridor is found. The threshold depends on The last but not least thing to discuss is the point next in the algorithm above. The obvious way is to take the truly next point to the current one. Nevertheless, this choice fails if the ranges of successive points are close to each other and differ no greater than the error of a laser scanner. A problem appears of detecting the minimal angle between laser beams hitting one wall. The laser beams contracting this angle should just have the ranges, that are not sensitive to a laser scan error. Fig 3 presents the geometrical illustration of this task. Expressing x and y with k, alpha and gamma, the expression can be drawn: Hence, The same expression might be considered for each quarter of a laser scan, and they will be the same except of the sign of fraction. Therefore, it can be assumed, that gamma varies from 0 to π/2, where the cosine function is monotonous. So, it is enough to consider gamma, that is equal to 0 and π/2, to find out the border value for alpha. Hence, the expression is obtained, showind that the closer scan point is to the perpendicular to the corridor, the stronger is the range sensitivity to scanner error.
To sum up, the point next should be as far from current point as the following requirement is satisfied: cos(α) = r(r + 2∆) −1 , where r -is the range to a wall of corridor, and ∆ -is a possible noise of such observation. These parameters can be obtained from the characteristics of a particular lidar.
Evaluation
In order to test the scan filter, it was inserted in three SLAM algorithms: vinySLAM [16], Gmapping [17] and Google Cartographer [10]. The filter decides if the scan should be processed or dropped before it is passed to a scan matcher. Hence, if a scan should be processed, then the general processing time consists of both filtering time and scan matching time. That is why it is necessary to estimate the complexity of filtering process. The first part of this paragraph is focused on this problem. The second part presents the results of such filtering application on real datasets, such as MIT stata dataset and TUM dataset. It is necessary to estimate the percentage of scans, that might be dropped without loss of accuracy. These datasets were chosen, because they contain laser scan data and are provided with arbitrary ground truth.
It is important to mention that the effect of corridor detector cannot be measured quantitatively. It is only possible to say that without this algorithm almost every laser scan of a corridor is dropped, and this dramatically affects on the trajectory estimation. In other words, the robot gets lost in the corridors. When the filter is included the corridor scans are not dropped and the robot performs ordinary SLAM.
Filter algorithm complexity
The First, it is necessary to create a histogram by looking through every point of laser scan. This takes O(n) operations, where n -is the number of points. To be more specific, a histogram, that shows the average range for each angle interval, takes n multiplications, n additions and 1 division. Therefore, the process of histogram creation takes roughly 2n operations, without considering memory costs. The second step -the calculation of Pearson correlation according to the formula (1) -takes 5m additions and 3m multiplications, where m is the number of columns in histogram, which is usually more than ten times lower, than n. Since Pearson correlation should be calculated for each scan in a window, it takes 8mk operations where k is the size of a window.
To sum up, a very rough estimation, that considers only the most important parts of the algorithm, says that there are about 2n + 8mk operations. In the real experiment n is about 1000, m is about 30-50 and k is 5-10. Using this estimation, it is possible to say, that filtering process takes the same time as several iterations through all points of laser scan. At the same time, the scan matching process in vinySLAM represents almost random walk, that takes about 100 iterations through all points of laser scan. So, a rough estimation of scan matching is 100n operations. These estimations are not accurate, but they demonstrate, that the time for filtering should be significantly lower, than the time for scan matching.
It is also necessary to measure the real difference between scan matching time and filtering time. The experiment was made using the sequence mit-2011-01-25-06-29-26 on Ubuntu 18.04, ROS melodic, Intel Core I5-8500@ 3GHz, 16 Gb RAM. The average scan processing time for this sequence is measured for considered algorithms. Firstly, the time was measured without filtering. The result can be found in figure 5. In this figure the average processing time of scan with filtering is also presented. The result shows that the mean scan processing time is decreased by more than 40%. The average scan processing time in vinySLAM algorithm is 12.9 · 10 −3 seconds, and the average filtering time for 30 columns in histogram and 5 elements in window is 5.9 · 10 −5 seconds.
Quantitative estimation of the filter
The same hardware environment was used to calculate the accuracy of SLAM algorithms with filtering on MIT and TUM datasets. These datasets were chosen, because they are provided with ground truth, which allows estimation of the accuracy quantitatively. The test case is as follows: 1. Run pure SLAM algorithm on the sequences of these datasets, and measure its RMSE from the ground truth. 2. Run SLAM algorithm with filtering and estimate the accuracy with some dropped scans from the data sequences. 3. Estimate the percentage of dropped scans.
The results on MIT dataset are presented in tables 1 and 2. It is clear, that the accuracy of every SLAM with filter is the same as the accuracy of this algorithm without filter. At the same time, more than half of the scans is dropped. It is important to mention, that parameters for the first 7 sequences differ from parameters for the last 3. They are grouped by the average speed of robot.
The average speed of robot in first group is 0.022 m/quantum. Quantum is a period of time between capturing scans. For this particular laser scanner, it is equal to 0.025 sec. According to a formula (2), the window size should be equal to 5. At the same time, the correlation coefficient for the pair of scans is 0.96, since the robot moves fast. Therefore, according to formula (3), P common is equal to 0.8. The value of P p air is heuristic and makes P c ommon high enough. The last parameter to settle is the nu,ber of columns, which is also heuristic. It is strongly related to P p air. It should be 30 to make the histogram sensitive to possible changes of environment. For the last sequences, the average speed of robot is close to 0.017 m/quantum. This speed is lower, and therefore, window size is equal to 9, P pair = 0.99, P common =0.9, the number of columns is equal to 15.
The slight increase of ground truth quality on several sequences can be explained with statistical error, since all considered algorithms implement different variations of random walk in their scan matchers. The figure 6a presents the results visualization of vinySLAM with filter and without filter. It is clear, that they are the same, taking the error into account. The figure 6b shows these results for Gmapping and fig 6c provides the visualization of tab 2 (Google Cartographer) The TUM dataset contains few sequences with laser scan, and therefore, the results are not as indicative as on MIT. The results of vinySLAM are presented in table 3. The average speed is 0.024 m/quantum for the first sequence and 0.021m/quantum for the second one. They are launched on the following parameters: window size = 5, P pair = 0.98, P common = 0.9, number o f columns = 30.
Conclusion
This paper presents the novel filtering algorithm of laser scans from lidars, that is based on scan correspondence to each other. If an upcoming scan is similar to several previous one, then it might be dropped without loss of accuracy. The experiments were made on MIT and TUM dataset for vinySLAM, Gmapping and Cartographer. They show that it is possible to drop nearly half of the scans, and sometimes it is enough to have only one third of the total number of scans in the data sequence. Secondly, the algorithms of corridor detections is presented. Its accuracy varies depending on the corridor concept and might be different in specific circumstances. In MIT dataset, the corridor between office rooms is detected with this algorithm in every sequence.
Experiments show, that the suggested filtering algorithm works incomparably faster, than a scan matcher algorithm (5.9·10 −5 seconds for filtering, 12.9·10 −3 for scan matching of vinySLAM). Therefore, the filtering process, in general, saves computational resources, because the time saved from scan matching might be used for other possible needs of a robot. Application of this filter reduces the average scan processing time by more than 40%. This performance is achieved by reducing the dimension of a laser scan by creation a histogram. There are several types of histograms to be created, such as the number of points on a specific distance from scanner or an average range in a specific angle of view. Then, the histograms of successive scans are compared to each other with calculating a Pearson correlation coefficient. If the correlation is high the scan should not be processed.
Finally, the paper presents formulas and suggestions for parameters of filter that depends on a robot speed. The future work includes updating filtering mechanism with Transferable Belief Model, that is the feature of vinySLAM -the algorithm, that was chosen for testing. Also, it is possible to apply filtering for structure-from-motion algorithms to find out if it also reduces their computational cost. | 7,218.4 | 2021-05-27T00:00:00.000 | [
"Engineering",
"Computer Science",
"Environmental Science"
] |
Validity and Safety of Robot-Assisted Laparoscopic Radical Cystectomy for the Elderly: Results of Perioperative Outcomes in Patients Aged ≥80 Years
Objective: To improve perioperative outcomes, robot-assisted radical cystectomy has gained increasing interest. This study aimed to assess the detailed perioperative complications of robot-assisted radical cystectomy in elderly aged ≥80 years and compare them with those of non-elderly. Material and methods: We retrospectively analyzed the clinical features of 74 patients who underwent robot-assisted radical cystectomy for bladder cancer between September 2018 and September 2021. Perioperative complication was classified by the Clavien–Dindo classification and organ system-based categories. We assessed the relationship between age or Charlson comorbidity index score (≥3 or <3) and the incidence of perioperative complication or rehospitalization rate within 90 days postoperatively. Results: Of the 74 patients, perioperative complication of all grades and grade ≥IIIa occurred in 54 (73%) and 15 (20%) patients, respectively. The postoperative rehospitalization rate was 20%, and the perioperative mortality rate was 0%. Elderly (n = 20) showed no difference in the incidence of perioperative complication of all grades or grade ≥IIIa compared with non-elderly, and no organ system-based category had a higher incidence in elderly than that in non-elderly. Gastrointestinal tract-related perioperative complication incidence was higher in non-elderly and those with Charlson comorbidity index ≥3 (P = .044, .039, respectively); cardiovascular-related perioperative complication incidence was higher in those with Charlson comorbidity index ≥ 3 (P = .0068). Conclusion: The incidence perioperative complication of robot-assisted radical cystectomy in elderly was not different from those in non-elderly, suggesting that robot-assisted radical cystectomy may be an option for the treatment of bladder cancer in elderly as well as non-elderly.
Introduction
The incidence of bladder cancer increases proportionally with age, with approximately 1.7 cases per 1000 individuals aged ≥80 years in Japan. As the world population ages, the number of patients with bladder cancer, especially the elderly, is increasing. 1 Therefore, the demand for the treatment of bladder cancer is also increasing. Radical cystectomy (RC) is a curative treatment for localized muscleinvasive bladder cancer (MIBC) and high-risk non-muscle-invasive bladder cancer (NMIBC) and is one of the major surgical procedures in urology 2 ; however, it is associated with high perioperative complications (PCs) and perioperative mortality due to its high surgical invasiveness. 3 Particularly, the risk of perioperative mortality is high in the elderly population aged ≥80 years, and indications for surgery are difficult to determine in this age group. 3,4 demand for RARC in the elderly population, more detailed PCs are desirable.
In this study, we analyzed the detailed PC of RARC in elderly and compared them with those of non-elderly.
Patients
We retrospectively analyzed the clinical data of 74 consecutive patients who underwent RARC for localized MIBC and highrisk NMIBC at our institution between September 2018 and September 2021. All 74 cases were followed up for >90 days. Of the 74 patients, we defined the patients aged ≥80 years as elderly (n = 20) and the rest as non-elderly (n = 54). All patients underwent transurethral resection of bladder tumor prior to RC and were diagnosed with bladder cancer. Cisplatin-based chemotherapy was administered as a preoperative treatment for patients with MIBC. Before surgery, all patients underwent electrocardiography and echocardiography and subsequently visited a cardiologist for evaluation of their surgical tolerance. This retrospective study was approved by the National Cancer Center Research Ethics Review Committee (Research Project No. 2018-159). and complied with the principles of the Declaration of Helsinki. All patients provided wirtten informed consent preoperatively.
Surgical Procedure
Patients underwent RARC with bilateral pelvic lymph node resection as far as possible to the level of the common iliac artery, followed by urinary diversion. The procedure was performed by 3 surgeons, 2 of whom were novice surgeons with <20 RARCs. The 2 surgeons performed the procedure under the guidance of a senior surgeon. Urinary diversion was performed by ileal conduit (IC), ileal neobladder, or ureterocutaneostomy. The IC was created by the conventional extracorporeal urinary diversion (ECUD) until May 2020 and intracorporeal urinary diversion (ICUD) thereafter. The resection lengths of the terminal ileum were 20 and 55 cm for the IC and neobladder, respectively. Even in patients who underwent ICUD, intestinal anastomosis was performed by functional end-to-end anastomosis combined with laparotomy through a small incision. Ureteral stents were placed at the time of urinary diversion.
Perioperative Management
All patients were on the same clinical pathway. Patients consumed normal meals orally until the day before surgery and took stimulant laxatives orally the night before surgery. Intraoperatively, all patients received prophylactic antimicrobial agents (second-generation cephalosporins). Blood transfusions were administered for intraoperative and postoperative acute anemia. Ambulation was encouraged as much as possible on the first postoperative day. Patients began drinking water on the first postoperative day, and the physician in charge determined diet based on the patient's general condition and abdominal x-ray findings. Ureteral stents were removed or replaced one at a time approximately 2 weeks postoperatively. The day of discharge was decided by the physician in charge based on the patient's general condition.
Data Collection
The following variables were analyzed: age, gender, body mass index (BMI), Eastern Cooperative Oncology Group performance status (ECOG PS), American Society of Anesthesiology physical status, Charlson comorbidity index (CCI), 9 history of smoking, abdominal surgery, and abdominal radiotherapy, clinical staging, presence of neoadjuvant chemotherapy, preoperative anemia, and preoperative renal dysfunction, urine diversion method, operative time, the estimated blood loss (EBL), presence of perioperative blood transfusion, and pathological staging. Charlson comorbidity index was classified as 0-2 and ≥3. 10 The eighth edition of the International Union Against Cancer was used for staging. Blood samples were collected approximately 1 month before the date of surgery and the day before surgery. Anemia was defined as hemoglobin levels of <13.5 and <12.0 g/dL in males and females, respectively. 11 Renal dysfunction was defined as an estimated glomerular filtration rate of <60 mL/min/1.73 m 2 . 12
Perioperative Outcomes
Perioperative outcomes included the incidence of PC, rehospitalization within 90 days postoperatively, and mortality within 90 days postoperatively. Perioperative complications were defined as undesirable medical events occurring before hospital discharge. All PCs were graded according to the Clavien-Dindo classification (CDC) and classified as grades ≥IIIa and <IIIa. Moreover, all PCs were classified by organ system-based categories.
Statistical Analysis
We assessed the relationship between age or CCI score and perioperative outcomes. Differences between groups were analyzed using chi-square test or Fisher's exact test for categorical
Main Points
• This study aimed to assess the detailed perioperative complications of robot-assisted radical cystectomy in elderly aged ≥80 years and compare them with those of non-elderly.
• The incidence of perioperative complications of robot-assisted radical cystectomy in elderly aged ≥80 years was not different from those in non-elderly.
• Robot-assisted radical cystectomy may be an option for the treatment of bladder cancer in the elderly as well as non-elderly. variables and the Wilco xon-M ann-W hitne y U-test for continuous data. Hypothesis testing for the difference in the population proportions was used to evaluate the association between age or CCI score and the incidence of PC or the rehospitalization rate within 90 days postoperatively.
As a subgroup analysis, a similar analysis was performed between the ICUD and ECUD groups in patients who underwent IC.
Statistical significance was set at 2-tailed P values <.05. The statistical analysis methods were determined after consultation with the hospital's statistics department. All statistical analyses were performed using the JMP software (SAS Institute Inc., version 13.2).
Ethical Standards and Policies
The study protocol was approved by the institution's Ethics Committee and complied with the provisions of the Declaration of Helsinki (National Cancer Center Research Ethics Review Committee, Research Project No. 2018-159). All patients provided informed consent. Table 1 shows the clinical, operative, and pathological characteristics of the elderly and non-elderly patients, respectively. The overall median (range) age was 74 (39-89) years, and 60 (81%) patients were male. A total of 65 (88%) patients had a CCI of 0-2 and 9 (12%) had ≥3. A total of 63 (85%) patients had a history of smoking, 26 (35%) had a history of abdominal surgery, and 1 (1%) had a history of abdominal radiotherapy. A total of 37 (50%) patients received neoadjuvant chemotherapy. Elderly had higher ECOG PS (P = .0035), higher clinical N stage (P = .012), and shorter operative time (P = .0025) than non-elderly. There was no difference in pathological results between elderly and non-elderly.
Results
Overall, PC of all grades and ≥IIIa in CDC occurred in 54 (73%) and 15 (20%) patients, respectively ( Table 2). The highest incidence of PC in all grades in CDC was genitourinary PC (32%), followed by gastrointestinal tract-and bleeding-related PCs (27% and 22%, respectively). Only 1 patient developed cardi ovasc ular-relat ed PC, and no patient developed respiratoryrelated PC. .79 Table 1
. Clinical, Operative, and Pathological Characteristics of Patients in the Elderly and Non-Elderly Groups (Continued)
Tables 3-5 present the perioperative outcomes by age or CCI score. Table 3 shows the incidence of PC of all grades, grade ≥IIIa, and grade <IIIa in CDC and the rehospitalization rate within 90 days postoperatively. For the entire cohort, PC of all grades in CDC occurred in 54 (73%) patients, and the rehospitalization rate was 20%. Perioperative complication of grades ≥IIIa and <IIIa in CDC occurred in 15 (20%) and 51 (69%) patients, respectively, and 12 (16%) patients developed both grades≥IIIa and <IIIa PC. No relationship between age or CCI score and perioperative outcomes was observed. Table 4 shows the incidence of PC of all grades in CDC by organ system-based categories. Non-elderly and those with CCI ≥3 had a higher incidence of gastrointestinal tract-related PCs (P = .044, P = .039, respectively), and the patients with CCI ≥3 had a higher incidence of cardi ovasc ular-relat ed PCs (P = .0068). Table 5 displays the incidence of PC of grade ≥IIIa in CDC according to the PC type. Intraoperative complications were the most common PC of grade ≥IIIa in CDC (7%), followed by postoperative ileus and ureteral stricture (5% and 4%, respectively). The group with CCI ≥3 had a higher incidence of cardi ovasc ular-relat ed PCs (P = .0068). Table 6 shows the results of hypothesis testing for the difference in the population proportions between age or CCI score and PCs or postoperative rehospitalization rates within 90 days postoperatively. The patients with CCI ≥3 had higher population proportions of the incidence of all grade PCs than those with CCI <3 (P = .031). The elderly had lower population proportions of the incidence of gastrointestinal tract-related PCs of all grades in CDC than the non-elderly (P = .035). The population proportions of the incidence of cardi ovasc ular-relat ed PCs did not differ between the age and CCI score groups.
Regarding the PC of grade ≥IIIa in CDC according to the PC type, there was no difference in their respective population proportions.
No deaths within 90 days postoperatively were noted in this cohort.
In a subgroup analysis of only patients who underwent IC, those who underwent ICUD had a smaller proportion of the elderly and a lower incidence of bleeding-related PCs of all grades in CDC (P = .027 and .041, respectively). The incidence of other PCs or the population proportions of any type of PCs did not differ between the ICUD and ECUD groups (Supplementary Table 1-4).
Discussion
This study shows for the first time that RARC is safe for patients aged ≥80 years, especially in terms of no increase in PC. Noteworthy points are as follows: (1) No difference in the incidence of overall PC of all grades and grade ≥IIIa in CDC between elderly and non-elderly or between patients with CCI ≥3 and those with CCI <2 was noted, except that the incidence of PC of all grades in CDC was higher in patients with CCI ≥3. (2) No difference in the postoperative rehospitalization rate within 90 days postoperatively between elderly and non-elderly or between patients with CCI ≥3 and those with CCI <2 was observed.
(3) No difference in the incidence of organ systembased PC for all grades in CDC between elderly and non-elderly or between patients with CCI ≥3 and those with CCI <2 was noted, except that the incidence of gastrointestinal tract-related PCs was higher in non-elderly. (4) No difference in the incidence of PC of grade ≥IIIa for events between elderly and nonelderly or between patients with CCI ≥3 and those with CCI <2 was observed. In the case of ORC, the incidences of PC and perioperative mortality are increased in the elderly than in non-elderly. 3,13,14 According to a systematic review that focused on the PC of RC, including 90% ORC, the incidence of PC in all grades and grade ≥IIIa in CDC was 58.5% and 16.9% in the elderly, respectively. 15 Fairey et al 16 showed that the incidence of PC after RC was 47% and 24% for all PCs and major PCs, respectively, and reported that the incidence of PC increased with the severity of comorbidities. In a large study of 1264 patients who were divided according to aged <80 and ≥80 years, the incidence of PC was higher in the elderly, with a predominance of cardi ovasc ular-relat ed PCs and a trend toward higher incidences of respiratory-and gastrointestinal tract-related PCs. 14 Yamanaka et al 17 reported in a study of 629 patients who underwent ORC that no difference was observed in the overall incidence of PC in patients aged ≥80 years compared with non-elderly; however, the incidence of urinary tract infection was higher. Therefore, the indications for RC for MIBC and high-risk NMIBC in the elderly have been carefully determined. Robot-assisted radical cystectomy has been reported to have non-inferior cancer control ability 6,7 and be less invasive than ORC, with less EBL and PC, and shorter hospital stay. 18 In a systematic review using the Cochrane Database, Rai et al 19 reports comparing the perioperative outcomes of elderly and non-elderly who underwent RARC, and no conclusions have been drawn as to whether the indications for RARC in elderly are the same as those in non-elderly. A report of 99 cases comparing PC of grade ≥IIIa in CDC between elderly (≥70 years old, n = 38) and non-elderly (<69 years old, n = 61) who underwent RARC showed no difference between the 2 groups (34% vs. 36%, respectively) and concluded that RARC may be an option for bladder cancer in both elderly and non-elderly. 21 In this study, the incidence of grade ≥IIIa PC in CDC in 20% of elderly was almost the same as that reported in previous studies, and the results were comparable with those in non-elderly.
Although the incidence of PC of all grades in CDC was higher than that reported by other authors, this means that there are several minor PCs in this study cohort, a common trend seen in reports of the incidence of post-RC PC in Japan. 22,23 This may be due to the longer hospitalization period in the Japanese healthcare system than that in Western countries, which makes it easier to detect minor PCs.
Previous studies of PC by organ system-based categories reported a relatively high incidence of gastrointestinal tractrelated and genitourinary PCs, with cardiovascular-and respiratory-related PCs also being common. 8,16,24 In this study, the incidence of all grades in CDC of gastrointestinal tract-related and genitourinary PCs was also higher than that of other PCs at 27% and 32%, respectively. The chi square test showed that the incidence of gastrointestinal tract-related PCs tended to be higher in non-elderly and those with CCI ≥3 (P = .044 and .039, respectively); however, the difference in the population proportions was found only between ages. The incidence of cardiovascular-and respiratory-related PCs was significantly lower in this study than in previous ones. 8,16,24 Regarding cardi ovasc ular-relat ed PCs, one factor may be that all patients underwent screening tests of cardiac function and a visit to a cardiologist before surgery to assess their ability to tolerate the procedure. Although no patient had to discontinue the surgery due to the results of the evaluation of surgical tolerance, 2 (2%) patients were started on anticoagulants owing to angina pectoris. Regarding respiratory-related PCs, the significance of prognostic nutritional index (PNI) with a cut-off value of 45 on respiratory-related PC in RC was recently reported. 25 In this study, we did not find any respiratory-related PC. Preoperative spirometry showed ventilatory impairment in 21 patients. Of these patients, only 6 had a low PNI (<45), and even the 6 patients did not have a significant decrease in PNI with a mean value of 42.4 (individual data was not shown), suggesting that this may be the reason why respiratory tolerance was achieved.
In this study, PC of grade ≥IIIa in the CDC occurred in 20% of the patients overall, and the postoperative rehospitalization rate within 90 days was 20%. This was consistent with the RARC perioperative outcomes reported by other authors. 18,26 The elderly group had a higher ECOG PS and shorter operative time than the non-elderly group. These results were consistent with those reported by other authors comparing elderly and non-elderly who underwent RC, including ORC. 3,24 The absence of differences in the incidence of PC of grade ≥IIIa in CDC and postoperative rehospitalization rates between elderly and non-elderly in the cohort with no significant difference in clinical characteristics without ECOG PS, CCI, and clinical N stage between the 2 groups suggests that RARC may be a treatment option for bladder cancer in elderly and non-elderly. These results were generally similar in the subgroup analysis between the ICUD and ECUD groups in patients who had undergone IC only. The overall length of postoperative hospital stay was longer than that reported by other authors 3,26 ; however, this was attributed to the unique Japanese practice system.
Our study has some limitations. First, this was a retrospective cohort study with a small number of 74 cases. In addition, the small number of patients older than 80 years old reduced our statistical power. We plan to accumulate more cases and examine the results, including those from other institutions, in the future. Second, our patients underwent a specialized preoperative evaluation of their surgical tolerance by a cardiologist, which may have been influenced by the results of cardi ovasc ular-relat ed PCs in particular. Finally, there were multiple surgeons with varying years of experience. However, all surgeons used the same technique with little variability at our institution, and we believe that evaluating the results of multiple surgeons is realistic. | 4,305 | 2022-09-01T00:00:00.000 | [
"Medicine",
"Engineering"
] |
Flexible Mid-infrared Photonic Circuits for Real-time and Label-Free Hydroxyl Compound Detection
Chip-scale chemical detections were demonstrated by mid-Infrared (mid-IR) integrated optics made by aluminum nitride (AlN) waveguides on flexible borosilicate templates. The AlN film was deposited using sputtering at room temperature, and it exhibited a broad infrared transmittance up to λ = 9 µm. The AlN waveguide profile was created by microelectronic fabrication processes. The sensor is bendable because it has a thickness less than 30 µm that significantly decreases the strain. A bright fundamental mode was obtained at λ = 2.50–2.65 µm without mode distortion or scattering observed. By spectrum scanning at the -OH absorption band, the waveguide sensor was able to identify different hydroxyl compounds, such as water, methanol, and ethanol, and the concentrations of their mixtures. Real-time methanol monitoring was achieved by reading the intensity change of the waveguide mode at λ = 2.65 μm, which overlap with the stretch absorption of the hydroxyl bond. Due to the advantages of mechanical flexibility and broad mid-IR transparency, the AlN chemical sensor will enable microphotonic devices for wearables and remote biomedical and environmental detection.
high index material for photonic circuits due to its broad transmission window from ultraviolet to mid-IR at λ = 10 μm 20,21 . Meanwhile, high-quality AlN thin films have been deposited on numerous microelectronic templates, such Si, SiO 2 , or sapphire wafers by atomic layer deposition (ALD), chemical vapor deposition (CVD), or sputtering [22][23][24][25] . Furthermore, AlN reveals high optical nonlinearities ready to be applied in nonlinear photonic devices, including frequency up-and down-conversions 26 .
By means of the mid-IR and mechanical properties of AlN and borosilicate, the AlN waveguides with the flexible borosilicate were joined to build bendable photonic devices. Finite difference method (FDM) was applied to simulate the waveguide modes and their sensing effects. The AlN thin films were prepared by room temperature direct current (DC) sputtering. The optical properties of the deposited AlN were measured by infrared variable angle spectroscopic ellipsometry (IR-VASE). The AlN ridge waveguides were then developed on the borosilicate sheet through the CMOS processes. The waveguide mode profiles were recorded and examined at λ = 2.50-2.65 μm. To assess its label-free and real-time sensing performance, analytes including methanol, ethanol, water, and mixtures of these were monitored by measuring their respective -OH stretch characteristic absorptions. Therefore, we showed that our flexible AlN-on-borosilicate device enabled chip-scale, label-free, and in situ chemical detection. Figure 1 illustrates the device fabrication process. At first, the waveguide structure was defined on a 25 µm thick borosilicate template by photolithography. An AlN film was deposited on the borosilicate sheet by DC magnetron sputtering. Ar was pre-injected inside the chamber to clean the surface of the Al target. The working pressure was 10 mTorr and the sputtering power was 1 kW. The deposition rate was 1 µm/hour when the borosilicate template was 15 cm away from the Al target. During the lift-off step, the photoresist and the AlN on it were removed so that 2 µm tall AlN waveguides were left on the borosilicate template. At last, a 250 nm thick SiO 2 top-cladding layer with an open aperture in the waveguide center was created by RF sputtering. Figure 2a illustrates the test station built to measure the mid-IR device. The mid-IR light from a 10 ns pulsed laser with a 150 mW average power was butt-coupled to the AlN waveguide through a lens and a fluoride single mode fiber. Figure 2b shows the repositioning of the fiber and the AlN waveguide observed by a microscope. The light signals from the waveguide end facet were collected by a BaF 2 lens and then captured by a mid-IR camera, cooled with liquid nitrogen. The flexible AlN waveguide device was placed on a flat and then a curved sample holder to measure its performance with and without mechanical bending. During the chemical sensing test, a 0.5 mL chemical was dropped on the 0.5 cm 2 device so that the waveguides were completely wetted by the analyte. To perform blind testing, analytes consisting of water, ethanol, or methanol were transferred into numerous beakers without labelling. These unknown analytes were dropped on the device, and then the waveguide modes were measured at λ = 2.50-2.65 µm to identify their chemical compositions. The experiments were performed at 25 °C. Figure 3 illustrates the design and structure of the flexible AlN waveguide on the borosilicate template. In Fig. 3a, the SiO 2 top cladding was placed on the ends of the waveguide to anchor the edges of the waveguide onto the template. Meanwhile, the center of the waveguide was open so that it was exposed to the surrounding chemicals for sensing applications. The chemicals next to the waveguide absorb its evanescent field and attenuate the intensity of the waveguide mode. Therefore, the constituents and the concentrations of chemical mixtures can be identified in situ by examining the spectral change of the waveguide light. Figure 3b shows the cross-sectional structure of the AlN waveguide between the top SiO 2 layer and the lower borosilicate layer. Figure 3c illustrates the fabricated bendable waveguides. No detachment or discontinued sections were observed when the device was bended. This verified that the AlN waveguides were firmly joined with the borosilicate sheet and were able to stand high stress. Figure 3d displays the enlarged image of the device. In the left section of the image, a thin SiO 2 cladding covers the AlN waveguide and prevents it from detaching during bending or stretching. In the right section of the image, the waveguide is open to the environment for detection applications. The structure detail of the waveguides was further inspected by scanning electron microscopy (SEM). Figure 3e,f display the top and the cross-sectional SEM images of a 2 μm high and 10 μm wide AlN waveguide. The waveguide had a clear ridge profile without defects observed along the edges or on its surface. The smooth waveguide surface prevents propagation loss due to scattering, which is a critical factor to perform accurate waveguide sensing. In addition, the interface between the AlN waveguide and under-cladding borosilicate template is well-resolved. No depletion damage was found on the device surfaces or the interface since the AlN waveguides were prepared by the lift-off process instead of applying an aggressive etching process.
Results and Discussion
The material property of the deposited films was characterized by a Vis-NIR spectrometer. As shown in Fig. 4a, the film prepared at N 2 /Ar = 2:1 was not transparent in the visible or infrared regions due to the unreacted Al metal residues in the AlN film. On the other hand, the two AlN films that were prepared at higher N 2 / Ar ratios were fully transparent from λ = 0.5 to 2.5 µm because all of the sputtered Al atoms were reacted with the nitrogen molecules and no metallic Al was left in the film. The optical quality, including the transparency, of our AlN film is comparable to others prepared by high temperature sputtering. The optical constants of the deposited AlN, including its index of refraction n and extinction coefficient (imaginary refractive index) k, were further characterized by IR-VASE between λ = 2 µm and 13 µm. As shown in Fig. 4b, the n decreased slowly from 1.9 at λ = 2 μm to 1.6 at λ = 9 μm until a dispersion was observed after λ = 10 μm. The relatively low k observed before λ = 10 μm reveals its potential for application in broadband mid-IR photonic circuits. The absorption after λ = 10 µm was caused by the Al-N stretching absorption that corresponds to the longitudinal optical (LO) mode and the transverse optical (TO) E1 mode of the Al-N bond 27,28 .
The waveguide sensing performance was numerically studied by the two-dimensional finite element method (FEM). The optical modes of the AlN waveguide were calculated when it was exposed to a mid-IR absorptive chemical. The structure parameters utilized in the modelling were obtained from the SEM images shown in Fig. 3f, where the AlN waveguide had a 2 μm x 10 μm structure, and the refractive index of AlN and borosilicate were 1.97 and 1.46, respectively. The waveguide mode was excited at λ = 2.65 μm since it overlapped with the www.nature.com/scientificreports www.nature.com/scientificreports/ absorption band of -OH. The transverse magnetic (TM) polarization was utilized because the AlN ridge waveguide had a large y:z aspect ratio of 5:1 that created a strong evanescent field along the z-direction. A strong evanescent field is essential for achieving high sensitivity because the detection ability of the sensor improved when the interaction between the field and the chemicals approaching the waveguide increased. Figure 5a displays calculated mode images when the waveguide was surrounded by analytes with different concentrations of a mid-IR absorptive chemical. Here, the k of the analyte is proportional to the chemicals' concentration. A fundamental mode with an elliptical intensity distribution was found inside the AlN waveguide, and its evanescent field extended into both the surrounding chemicals (z > 2 µm) and the borosilicate layer (z < 0 µm). The waveguide mode faded quickly as the chemical concentration increased because the evanescent wave was considerably absorbed by the chemical moving close to the waveguide. To analyze the waveguide modes when mid-IR absorption increases, Fig. 5b displays the intensity profiles of the TM polarization modes along the z-axis as the concentration changes. The intensities of the guided wave (0 < z < 2 µm) and the evanescent wave (z > 2 µm and z < 0 µm) both decreased drastically when the concentration of the absorptive chemical increased. Yet, the www.nature.com/scientificreports www.nature.com/scientificreports/ waveguide mode remained a fundamental mode regardless of the concentration. The invariance of the mode profile is critical to achieve accurate waveguide sensing since the formation of modes in higher orders changes the mode structure and the evanescent wave intensity that consequently cause signal variation during the sensing measurements. Figure 5c plots the waveguide mode intensity when the analyte concentration was consistently increased from 0% to 20%. The mode intensity decreased monotonically as the chemical concentration increased. The results show that the mid-IR waveguide is able to perform accurate concentration analysis by reading the intensity variations of the waveguide mode.
The bending effect of the flexible device was evaluated by calculating the waveguide mode profile at various bending radii R. Figure 6a displays the modes of the 2 µm thin AlN waveguide when the flexible device was warped at R = 10 5 , 10 4 , 10 3 , 10 2 , and 50 µm, respectively. For a transverse electric (TE) polarization, the fundamental mode revealed the same elliptical profile when the R was changed considerably from 10 5 to 50 µm. This The n and k plots of the AlN thin film from IR-VASE measurement. The n has low dispersion up to λ = 9 μm, and negligible absorption is found before λ = 10 μm. The increase of k after λ = 10 μm was due to the absorption of the Al-N bond. Figure 5. (a) The calculated mode images of an AlN-on-borosilicate waveguide when it was exposed to an analyte containing a mid-IR absorptive chemical. The concentrations of 0%, 3%, 6%, 10%, 12.5%, and 20% were utilized in the modeling. The waveguide mode gradually vanished when the concentration increased. (b) The mode intensity profiles along the z-axis at y = 0 μm. Both the guided light (0 μm < z < 2 μm) and the evanescent field decrease when the chemical concentration increased. (c) The plot of waveguide mode intensity vs. analyte concentration. The relative mode intensity dropped from 1 to 0.1 as the concentration increased from 0% to 20%.
www.nature.com/scientificreports www.nature.com/scientificreports/ indicates that the structure deformation had a negligible impact on the waveguide properties since the majority of the optical field was still confined inside the high index AlN layer. For a TM polarization, the mode slightly shifted toward the air when the bending deformation was applied. Figure 6b illustrates the TM polarized optical fields calculated at different R. The center of the mode moved toward the lower refractive index of the air by only 70 nm. Figure 6c plots the TM optical field confinement factors inside the AlN waveguide, the top air, and the lower borosilicate cladding layer. The light field distribution into those three layers were stable and consistent when the R was larger than 10 3 µm, and the wave confined inside the AlN layer remained consistent at 41%. The results indicate that the light mode is able to tolerate intense structure deformation.
To evaluate the sensing performance of the AlN flexible waveguides, chemicals containing a hydroxyl group were chosen as analytes due to their strong -OH characteristic absorption between λ = 2.6 and 3.3 µm. The TM mode light was utilized since it had a strong evanescent field, enabling sensitive chemical detection. The wavelength of the light was tuned between λ = 2.50 and 2.65 µm, where the -OH absorption rose and the AlN waveguide was transparent. The image of the waveguide mode was captured with and without the presence of the analytes on the waveguide. Figure 7 shows a clear waveguide mode between λ = 2.50 and 2.65 µm when no chemical was applied. The same waveguide mode profile was observed when the flexible waveguide devices were mechanically bended. In addition, the mode profiles remained the same at different wavelengths without any scattering. No distortion was observed in the captured modes, indicating that the waveguides had a clear sidewall and a sharp interface between the AlN and borosilicate layers. The high refractive index difference of the AlN and the borosilicate also attributed to the efficient mid-IR confinement. The invariant shape of the mode over such a broad spectrum indicates that the waveguide has low dispersion in this region, which also agrees with the optical constant displayed in Fig. 4b. When dropping various analytes onto the waveguide, the light modes revealed dissimilar spectral intensity variations for different chemicals. For the methanol wetted waveguide, the mode became a lighter spot at λ = 2.55 µm and its intensity remained bright up to λ = 2.65 µm. The increase of mode intensity was due to the formation of a top cladding layer made by the dropped methanol. Meanwhile, the mode intensity of the waveguide wetted by methanol decreased instantaneously at the longer wavelength of 2.65 µm. As for water, the mode intensity diminished as the light shifted to longer wavelengths and no waveguide mode was found beyond λ = 2.65 µm. The strong intensity attenuation observed corresponds to the -OH absorption band of water. Hence, we show that the mid-IR waveguide sensor is able to differentiate water, ethanol, and methanol since they reveal different mid-IR absorption patterns. At λ = 2.65 µm, ethanol was transparent, methanol was partially transparent, and water was fully opaque. The results are consistent with previous FTIR measurements where the -OH absorption from water increased at the shorter mid-IR regime compared to that of ethanol and methanol. www.nature.com/scientificreports www.nature.com/scientificreports/ Quantitative analyses were performed by measuring water in ethanol mixtures at concentrations between 0% and 20%. The probe was set to λ = 2.65 µm light because water is absorptive at that wavelength while ethanol is transparent. As illustrated in Fig. 8, the mode intensity decreased rapidly when the concentration of water increased and eventually the mode vanished at concentration of 20%. At low concentrations, there was a 2% mode intensity variation when the water concentration changed by 1%. The water detection limit approached 1.0%. A 10% water in ethanol mixture had a 30% attenuation in the mode intensity. The waveguide light disappeared at 20% concentration showing a similar result from the simulation. Hence, the flexible mid-IR waveguide is capable of performing quantitative measurements using characteristic absorptions.
The in situ sensing was then conducted by reading the waveguide mode intensity when the waveguides were exposed to the chemicals. Figure 9a illustrates the waveguide mode before and after the waveguide sensor was wetted by methanol, and Fig. 9b plots the intensity variation. The wavelength was adjusted to 2.65 µm, overlapping with the characteristic hydroxyl absorption. Before dropping methanol, the waveguide mode was clear. When methanol was dropped on the waveguide at t = 35 s, the mode instantaneously disappeared since the methanol www.nature.com/scientificreports www.nature.com/scientificreports/ completely absorbed the light. The waveguide mode steadily recovered after t = 130 s and eventually reached its original level because the methanol evaporated from the waveguide surface.
Conclusions
Mid-IR flexible sensors were created by integrating AlN waveguides and an ultra-thin borosilicate template. From IR-VASE characterization, the room temperature deposited AlN film had a broad infrared transparency and low optical dispersion up to λ = 9 µm. The waveguides consisted of a 2 µm high AlN ridge structure adhered to the thin borosilicate sheet. Concentration measurements and label-free chemical identification were accomplished by waveguide mode scanning over the characteristic mid-IR absorption. The waveguide sensor can identify methanol, ethanol, and water due to the distinguishable -OH absorptions between λ = 2.50-2.65 µm. Furthermore, in-situ monitoring of chemicals was demonstrated. Therefore, the AlN waveguides enable a sensing device that can perform label-free and real-time chemical detection. Methanol is the analyte and the wavelength used was λ = 2.65 µm because it overlaps the -OH absorption of methanol. The waveguide mode intensity dropped instantaneously when the methanol wetted the waveguide surface and then recovered when the methanol evaporated. | 4,061.4 | 2019-03-11T00:00:00.000 | [
"Physics"
] |
Local and systemic inflammatory lipid profiling in a rat model of osteoarthritis with metabolic dysregulation
Objective Bioactive oxidised lipids (oxylipins) are important signalling mediators, capable of modulating the inflammatory state of the joint and anticipated to be of importance in joint homeostasis and status of osteoarthritis. The aim of this study was to quantify oxylipin levels in plasma and synovial fluid from rats with experimentally induced osteoarthritis to investigate the potential role of oxylipins as a marker in the disease process of early osteoarthritis. Design Forty rats were randomly allocated to a standard or high-fat diet group. After 12 weeks, local cartilage damage was induced in one knee joint in 14 rats of each diet group. The remaining 6 rats per group served as controls. At week 24, samples were collected. Oxylipin levels were quantified by liquid chromatography–mass spectrometry. Results Overall, 31 lipid-derived inflammatory mediators were detected in fasted plasma and synovial fluid. Principal component analysis identified four distinct clusters associated with histopathological changes. Diet induced differences were evident for 13 individual plasma oxylipins, as well as 5,6-EET in synovial fluid. Surgical-model induced differences were evident for three oxylipins in synovial fluid (15-HETE, 8,9-DHET and 17R-ResolvinD1) with a different response in lipid concentrations for synovial fluid and plasma. Conclusions We demonstrate the quantification of oxidised lipids in rat plasma and synovial fluid in a model of early experimental osteoarthritis. Oxylipins in the synovial fluid that were altered as consequence of the surgically induced osteoarthritis were not represented in the plasma. Our findings suggest differential roles of the oxylipins in the local versus peripheral compartment.
Conclusions
We demonstrate the quantification of oxidised lipids in rat plasma and synovial fluid in a model of early experimental osteoarthritis. Oxylipins in the synovial fluid that were altered as PLOS
Introduction
The presence of (low-grade) inflammation in osteoarthritis(OA) is well-known and is considered of relevance in the pathophysiological process of OA [1]. Many patients with OA have signs of mild inflammation such as local warmth, pain and joint effusion [2,3]. Synovial inflammation can be present in early, as well as late, phases of OA, and is associated with synovial-related molecules released into biological fluids [4]. Previously, we demonstrated that systemic metabolic and subsequent inflammatory mediators, combined with a mild surgical trigger of local cartilage damage in the rat contribute to the progression of OA [5]. One of the characteristics of the induced metabolic dysregulation in this model is dyslipidaemia, which is also linked to clinical OA pathophysiology [6,7]. In this model the progression of joint degeneration was driven mainly by the systemic and local inflammatory responses, as demonstrated by enhanced synovitis, osteophytosis and increased recruitment of macrophage lineage (CD68 expressing) cells [5]. Thus, this model mimics key aspects of the health status of human OA synovial joints, including increased inflammatory status and changes in synovial fluid lipid profiles [8,9]. Indeed, there is evidence for altered lipid metabolism contributing to OA pathology via promotion of inflammation, apoptosis, and angiogenesis [10]. Oxidised lipids (oxylipins) are important signalling mediators capable of modulating the inflammatory state of a joint and might have an important role in the OA pathogenesis [1]. Polyunsaturated fatty acids are classified as n-3 (omega-3) or n-6 (omega-6). [11] Oxylipins can origin from linolenic acid (octadecanoids), a n-6 fatty acid, and arachidonic acid which is a product of its elongation/desaturation producing eicosanoids. [12][13][14] Another origin of oxylipins is n-3 unsaturated fatty acids synthesized from α-linolenic acid, including eicosapentaenoic-(EPA) and docosahexaenoic-acid (DHA). [12,15] These omega-3 fatty acids have proven to be beneficial in modulating the inflammatory processes. [15,16] Specifically, bioactive eicosanoid oxylipins, have a crucial role in modulating physiological processes in both homeostatic and inflammatory conditions [17][18][19]. Eicosanoids are 20-carbon fatty acid derivatives, produced from arachidonic acid [20]. The production of pro-inflammatory and/or anti-inflammatory eicosanoids, that include prostaglandins, thromboxanes, leukotrienes, and lipoxins, as well as other bioactive lipids, increases during inflammation [21][22][23][24]. During inflammation, eicosanoids regulate cytokine production, antibody formation, cell proliferation, migration, and antigen presentation but also control the tissue repair process [22]. Bioactive eicosanoid oxylipins are considered a quantitative readout relating to the inflammatory and oxidative stress status, and so may provide an early diagnostic and prognostic biomarker of disease [25]. However, due to the potent biological signalling activity of enzymatically oxidised lipids, the active mediators are short-lived in systemic circulation where they are actively metabolised prior to excretion [26]. Plasma levels of bioactive oxylipins as a representative of the OA situation might therefore not be the most suitable approach to study the lipid profile in the process of OA. The local lipid profile from joint tissues is more likely a better representative of the current OA status of the joint. As the metabolite concentrations in synovial fluid can directly reflect the joint homeostatic conditions that are related to biological processes of articular cartilage and other joint tissues, possibly already in the early stages of the disease [27,28]. In humans with symptomatic knee OA changes in systemic levels of lipids have are associated with OA[29], and in end-stage OA patients, altered lipid levels and increased levels of pro-inflammatory cytokines have been detected in synovial fluid samples [30]. Potential alterations that may occur at onset or during early phases of OA may be more relevant to understand disease progression, before the joint is fully degenerated. At the moment, a validated biochemical biomarker to detect molecular events related to early disease activity in OA, either of systemic or local origin, is still lacking [31].
The aim of the current study was to identify the potential role of systemic and local inflammation in the OA disease process and to test if systemic oxylipin levels reflect the local status in synovial fluid. Therefore, we quantified the bioactive oxylipin levels in plasma and synovial fluid in an experimental rat model of early OA, with local cartilage damage in addition to a high-fat diet induced metabolic dysregulation.
Animal model
Forty Wistar rats (12 weeks old, male, Charles-River, Sulzfeld, Germany), housed two per cage in a 12:12 light-dark cycle, were randomly divided in two groups: twenty rats were fed a highfat diet (HFD; 60% of the kcal contained fat: D12492i, USA) while the other rats received a standard diet (9% of the kcal contained fat: 801730, SDS, Essex, UK) with access to food pellets and tap water ad libitum. After 12 weeks, cartilage damage was induced, under general anesthesia, on the femoral condyles by placement of five grooves without damaging the underlying subchondral bone, in one knee joint according to the rat groove model[32] in 14 rats of each diet group. Analgesia (Buprenorphine) was provided until 24 hours after surgery and all animals were immediately allowed to move freely.The remaining 6 rats in each group served as control group without sham surgery for each diet. Sham surgery was not performed as previous work demonstrated no difference in synovial inflammation or cartilage degeneration 12 weeks after sham surgery compared to non-operated control joints [33]. The study design is based on our published methods combining a HF diet and the rat groove model of OA [5]. At endpoint rats were euthanized in their home cage using carbon dioxide and dead was confirmed by respiratory arrest together with fixed and dilated pupils. Joint degeneration was assessed as previously described using the OARSI histopathology score specifically for rats according the guidelines [34]. The total OARSI score is based upon the sum of the following sub sections: cartilage matrix loss width (0-2), cartilage degeneration (0-5), cartilage degeneration width (0-4), osteophytes (0-4), calcified cartilage and subchondral bone damage (0-5) and synovial membrane inflammation (0-4).The study was approved by the Utrecht University Medical Ethical Committee for animal studies (DEC 2013.III.12.086) and ARRIVE guidelines were fully complied.
Sample collection
At week 24, all rats were fasted for 6 hours and subsequently blood was collected via the lateral tail vein. Blood samples were centrifuged at 3000 RCF for 15 minutes and plasma was stored at −80˚C until analysed. Subsequently rats were euthanized by carbon dioxide and the synovial fluid of the experimental knee joints collected immediately afterwards. To collect the synovial fluid, first the skin of the hind paw was removed and the M.quadriceps was dissected following the quadriceps reversing approach [35]. With the knee in flexion an Ahlstrom 226 filtration paper of 3mm section (PerkinElmer, USA) was introduced in the knee joint for maximum absorption of the synovial fluid as previously described [36][37][38] preventing contact with surrounding tissues. Subsequently, the absorbed synovial fluid on the filter papers were placed in a 2ml Eppendorf tube and directly snap frozen and stored at -80˚C upon analysis.
LC-MS/MS method
Concentrations of 34 oxylipins (see S1 Table) were quantified by liquid chromatography-mass spectrometry (LS-MS/MS) using a validated quantitative method based on that described by Wong et al. ). The HPLC system used was a Shimadzu series 10AD VP LC system (Shimadzu, Columbia, MD, USA) and the MS system used was an Applied Biosystem MDS SCIEX 4000 Q-Trap hybrid triple-quadrupole-linear ion trap mass spectrometer (Applied Biosystem, Foster City, CA, USA) equipped with an electrospray ionisation (ESI) interface. Quantification of the eicosanoids was calculated using fully extracted calibration standards for each of the analytes. Before quantification of lipid-derived inflammatory mediators of rat synovial fluid, blank filter papers were stained with equivalent concentrations of calibration standards (100pM, 500pM, 1nM, 2.5 nM, 5nM, 10nM) and were compared with actual standard calibration curves. Quantification was performed using Analyst 1.4.1. Identification of each compound in plasma samples was confirmed by LC retention times of each standard and precursor and product ion m/z ratios.
Statistical analysis
First a principal component analysis (PCA) was performed on all analytes from synovial tissue and plasma together. Subsequently, data of all analytes were normalised and the z-scores were presented as mean with standard deviation. To test for the differences between the four different study groups for the selected clusters, a 1-way ANOVA with a Bonferroni correction was used. To evaluate the effect of HF diet and groove surgery on the normalised average data for each cluster of lipids an independent samples t-test was performed.
Potential associations of lipids within the selected clusters, formed by PCA, with the histological joint degeneration, as determined by the OARSI score[34], were determined with a linear regression analysis. The outcome is presented as regression coefficient (B) for linear regression with 95% confidence interval. In parallel, to relate the selected clusters (as formed by PCA) with the individual components of the histological OARSI-score; synovial membrane inflammation, osteophyte formation, and cartilage degeneration, a logistic regression analysis was used. Data are presented as odds ratios (OR) with 95% confidence intervals. All individual lipid data from serum and synovial fluid are separately presented as absolute mean value with standard deviation distinguished by the type of diet or the performed surgical procedure. To evaluate the effect of HF diet feeding and groove surgery on each individual lipid an independent samples t-test was performed. Finally, the correlation between the individual lipids and the histological OARSI-score was performed by a Pearson correlation (SPSS statistics 21, SPSS inc., Chicago, IL, USA). For all tests p values <0.05 were considered statistically significant.
Association of systemic and local lipids with histological joint degeneration
Overall 31 of 34 oxylipins could be detected in rat fasted plasma and synovial fluid (obtained on filter papers). PCA identified four different clusters, ranging from 7-17 individual oxylipins ( Table 1). The individual oxylipins within each cluster were strongly associated. The clusters differentiated clearly between lipids from local and systemic origin; cluster 1 and 2 only containing lipids from the synovial fluid and cluster 3 and 4 only lipids from plasma. When considering the effect of HF diet (with and without groove surgery), a statistical significant increase in averaged normalised lipid value of cluster 3 was observed, compared to the standard diet fed rats (with and without groove surgery; z-score of 0,22 ± 0,62 vs. -0,29 ± 0,52; p = 0.013, Fig 1A). All other clusters did not show a difference between the HF diet and the standard diet fed rats. Also, no differences were observed in rats with mechanical induced cartilage damage (grooves) compared to non-surgically damaged rats for all clusters (Fig 1B).
Looking at the association of the normalized lipid values within the different clusters for the total joint degeneration (independent of the four groups), as determined by the total OARSI histopathology score, using linear regression analysis, cluster 1 and cluster 3 showed a non-significant positive association with the total joint degeneration score. Indicating that increased histological joint degeneration is association with increased lipid concentrations, from local and systemic origin ( Table 2). The other two clusters (cluster 2 and 4) showed a negative trend Table 1. Selected clusters of oxylipins as defined by the principal component analysis. with the histological OARSI score (Table 2). When considering the association of the four clusters with the individual parameters of the histological OARSI score (logistic regression analysis), the association was in line with the total OARSI score ( Table 2). A statistically significant positive association between systemic oxylipins in cluster 3 and the local synovial membrane inflammation (OR 54,78 [2.6-1170.9]; p = 0.010, Table 2) was found. The synovial fluid lipids in cluster 2 showed a statistically significant negative association with cartilage degeneration (OR 0.004 [0.0-0.5]; p = 0.024, Table 2).
Plasma and synovial fluid lipid profiles
Of the 31 detected lipids, 13 lipids present in the fasted plasma were significantly different between the HF diet and the standard diet group, independent for the performed groove surgery ( Table 3 and Fig 2A). Whereas, only one lipid (5,6-EET; p = 0.023) present in The outcome of logistic regression analysis from selected clusters in relation to selected individual histological parameters (Synovial membrane inflammation, Osteophyte formation and Cartilage degeneration) is shown. Data is presented as odds ratio (OR) with 95% confidence interval. In the right column outcome of linear regression analysis with histological joint degeneration score (total OARSI score) is shown. Data is presented as regression coefficient (B) with 95% confidence interval for the total OARSI score.
Discussion
The present study profiled oxylipin levels in both synovial fluid and plasma from a rat OA model, combining mechanically induced cartilage damage with a HF diet, using a highly sensitive LS-MS/MS method. Multiple clusters of oxylipins, as determined by PCA, were associated with histopathological changes by logistic regression analysis. Whereas, 4 local (5,6-EET, 15-HETE, 8,9-DHET and 17R-ResolvinD1) and 13 systemic oxylipins were clearly altered in this OA model as a result of groove surgery, HF diet feeding or a combination of both induced triggers. With distinct differences in synovial fluid and plasma concentrations of individual oxylipins, suggesting differential roles of the oxylipins in the local versus peripheral compartment.
Focusing on the individual lipids in the synovial fluid we observed a statistically significant decrease of 15-HETE levels in the synovial fluid of grooved knee joints compared to non-surgically damaged joints. 15-HETE is known to be secreted by adipocytes [39], but how this lipid is related in the process of OA is currently unknown. In mice, the absence of 15-HETE resulted in accelerated joint swelling and has an anti-inflammatory role [40]. On the other hand, increased levels of 15-HETE are present in knee joints of MIA induced rats [21], and a positive association of 15-HETE with incidence of human symptomatic knee OA is observed [29]. Besides 15-HETE, local changes were also observed for, 8,9-DHET, a pro-inflammatory lipid, 17R-Resolvin D1, a mediator of inflammatory responses, and 5,6-EET, which has a known role in pain mechanisms [41][42][43]. These altered lipid values were only observed in synovial fluid and not in plasma, except for 5,6-EET which is the only lipid increased in both the local and peripheral compartment. Moreover, the 13 plasma lipids that were sensitive to the HF diet in plasma were not reflected in the synovial fluid. This data suggest differential roles for oxylipins in the local versus peripheral compartment.
When looking at group level rather than looking at the individual lipid values, a strong positive association is detected between systemic oxylipins and the histological synovitis-score. This implies a direct effect of systemic circulating pro-inflammatory lipids on the local inflammatory state of the joint. On the other hand, the cluster with identical lipids from local origin did not show an association with knee joint synovitis. Another cluster of lipids originating from synovial fluid, showed a negative association with histological cartilage degeneration, indicative for a protective effect of these lipids on the articular cartilage. Previously we showed that osteophyte formation was an inflammatory driven process, in the selected model [5]. However, in the present study no association with osteophytes was observed in selected clusters of systemic and local lipids.
To better understand the complex inter-relationship between the different disease mechanisms involved in OA, animal models can help to elucidate the complex mechanistic aspects of OA [32,44]. The advantage of using synovial fluid samples is that it is in direct contact with the tissues of the knee joint and likely contain more specific biomarkers that reflect the primary joint related degeneration pathways [45]. However, for humans the availability of synovial fluid from healthy but also early-OA patients is limited[28] and challenging due to the difficulty of defining early-OA [46]. Comparative lipidomic analysis of synovial fluid in a canine model of OA and human early-OA revealed that the lipid profiles of dogs often reflect those of humans [8]. Whether also small animal models of OA reflect the human disease with respect to lipid profiles, is currently unknown. In a model of HF diet induced obesity with destabilization of the medial meniscus in mice, an association was found between serum and synovial fluid lipid levels with histological OA and synovitis [47]. Although these data are in line with previous studies using a HF diet in rats with changed systemic lipid values [48,49], disadvantage of destabilizing the meniscus is permanent joint instability and joint inflammation making the translation to the human OA situation questionable. This model could potentially be used to test medical interventions. Specifically the Cytochrome P450 system could be an interesting target to focus on, this system constitute a major metabolic pathway for arachidonic acid. [50] Besides, there is evidence that blocking of CYP enzymes with N-methylsulfonyl-6-(2-propargyloxyphenyl) hexanamide (MS-PPOH) represent a therapeutic target. [51] As prolonged MS-PPOH delivery result in attenuated effects in pulmonary hypertension [52], antidiuretic effects [53], and decreased coronary reactive hyperemia after ischemia due to inhibition of EETs synthesis. [54] This makes MS-PPOH a potential beneficial therapy for this model, where we can study the oxylipin profiles to better understand the metabolic changes associated with the inhibition of CYP epoxygenases.
During the initial phase of inflammatory responses in symptomatic knee OA, cyclooxygenase-2 is significantly up-regulated and acts on arachidonic acid to produce oxylipin mediators, specifically prostaglandins, prostacyclins, and thromboxanes [55,56]. Oxylipins of both the cyclooxygenase and lipoxygenase pathways have been produced in sufficient quantities by joint tissues to be reflected in plasma in patients with symptomatic knee OA, indicating an increased arachidonic acid metabolism in OA [29]. Also local levels of endocannabinoid lipids in human synovial fluid and the infrapatellar fat pad in relation to OA have previously been reported [21]. In the current study increased plasma dihydroxyeicosatrienoic acids (DHETs) levels and decreased corresponding epoxyeicosatrienoic acids (EETs) levels were observed. EETs are very unstable metabolites, it's rapidly hydrolyzed by soluble epoxide hydrolase to the less biologically active but more stable metabolites DHETs and EETs might reflect the state of inflammation. [57] This indicates that the observed decreased plasma diol/epoxy-ratios might be involved in the inflammatory reaction as seen in this OA model.
Besides regulating inflammation, oxylipins are also important mediators of inflammatory pain [56]. Especially the role of resolvin receptors in pain behaviour have been studied [58]. Inhibitory effects of a precursor of resolvin D1, 17(R)-HDoHE, were observed on established OA pain in rats [58], which is corroborated in our study showing decreased levels of systemic 17(R)-HDoHE in rats with a HF diet. In our study, pain-related outcome measures were not performed and therefore further research needs to support this.
Local molecular biomarkers from the knee joint in small animals are limited by the small volume and difficult accessibility of the synovial joints and therefore often not taken into account [37,59]. Often blood plasma samples are used as a representative of general inflammatory status with maybe some by-product that originates from the joint fluid. As such blood plasma oxylipins may be useful as biomarkers that can elucidate joint condition. The present study for the first time profiled local lipids originating from solely synovial fluid in rat knee joints. To access the synovial fluid, we selected the Whatman paper recovery method as previously designed for animals with small volumes of synovial fluid [36][37][38]. This specific and sensitive quantitative assessment method has the capacity to study pathway profiling of selected inflammatory related oxylipins, thereby providing a useful tool for the observation of biological differences and a readout for inflammation and oxidative stress in (experimental) early-OA. However, the statistical results have to be interpreted with caution as this experimental study had an exploratory nature and group size was not specifically designed for this research question. Irrespectively, specific changes in lipids related to inflammation as a consequence of a HF diet and induction of local cartilage damage by groove surgery could already be demonstrated and although the association between the selected histological outcome parameters and oxylipins do not necessarily reflect a causal relationship, they warrant further investigation of the role of the eicosanoid system in early OA mechanisms.
Here we present for the first time that it is possible to quantify (mainly eicosanoid) oxylipins in rat synovial fluid in an early experimental model of OA with local cartilage damage in addition to a HF diet induced metabolic dysregulation. It was demonstrated that both local and systemic bioactive oxylipins are responsive in early stages of the osteoarthritic process especially in the inflammatory responses involved and that local and systemic responses are not directly related. The HF diet induced metabolic dysregulation mainly influenced the systemic oxylipins of the fasted plasma. Whereas the mechanically induced cartilage damage with groove surgery had the most effect on the local oxylipins originating from the synovial fluid. Further understanding of the mechanisms by which the selected lipids play a role in the process of (early-)OA is necessary for its potential role as biomarker of disease.
Supporting information S1 | 5,199.6 | 2018-04-23T00:00:00.000 | [
"Biology",
"Medicine"
] |
Brute Past Presentism, Dynamic Presentism, and the Objection from Being-Supervenience
Presentism faces the following well-known dilemma: either the truth-value of past-tense claims depends on the non-existing past and cannot be said to supevene on being, or it supervenes on present reality and breaks our intuition which says that the true past-tense claims should not depend on any present aspect of reality. The paper shows that the solution to the dilemma offered by Kierland and Monton’s brute past presentism, the version of presentism according to which the past is supposed to be both a fundamental and present aspect of reality, is implausible and proposes how to cure presentism: the dillema can be avoided by taking a third road consisting of introducing dynamics into presentism in the form of the real passage of time. Dynamic presentism, which is constructed in such a way, can overcome the dilemma by providing an ontological basis for the past-tense propositions in the form of the real past. Dynamic presentism also offers a rationale for treating the future as being open.
Introduction
Presentism, the view that the way things are is the way things presently are, 1 faces the following well-known difficulty for all presentists: what is the ontological basis for true past-tense claims such as, for example, 'Socrates was a philosopher', if the past does not exist. This entails the following dilemma which Kierland and Monton (2007) tried to solve: Extended author information available on the last page of the article 1 I follow here Hinchliff (1996: 123) and Kierland and Monton (2007: 485). As noted by Kierland and Monton (2007: 485), this thesis entails the more popular formulation of presentism: the only things that exist are presently existing things. This more popular formulation of presentism will be used in the paper as well.
Dilemma: either truth-value of past-tense claims depends on the non-existing past and cannot be said to supevene on being, or it supervenes on present reality and breaks our intuition which says that the true past-tense claims should not depend on any present aspect of reality. 2 Solving this dilemma, the authors developed the view which is a version of presentism they termed brute past presentism (BPP). 3 According to this view, in addition to the standard thesis of presentism which claims that the only things that exist are presently existing things, it claims that the past is supposed to be a fundamental aspect of reality and-at the same time-a present aspect of reality. I would like to show that this is a strategy which amounts to the old adage of two steps forward, one step back: the authors interestingly push forward presentism when they claim that the past is supposed to be a fundamental aspect of reality, however, in the next step, they retreat by claiming that the past is a present aspect of reality. This diagnosis is put forward in the second section of the paper, while the third one introduces a potential remedy to this weakness in the form af modified versions of dynamic presentism (DP), 4 that may restore the patient to full health: not only does DP try to provide presentists with an ontological basis for the past-tense propositions in the form of the real past, but it also offers a rationale for treating the future as being open as an extra bonus. The paper ends with some conclusions.
The Problem: Diagnosis
According to BPP, 1. The way things are is the way things presently are. Which entails that 2. The only things that exist are presently existing things. As concerns the the truth-value of past-tense claims, refering to our intuition the authors claim that 3. 'The truth-value of past-tense claims is determined by the past. ' (2007: 485) But how this can be done? The authors answer: 4. 'The shape of the past is what makes past-tense claims true. ' (2007: 492) 5. 'This shape does not consist in a structure of things having properties and standing in relations to one another. ' (2007: 491) Then the essential question arises as to what is the past if-according to presentism-the past does not exist. Kierland and Monton (2007) give us a number of declarations clarifying how they understand the past: 6. The past is a fundamental aspect of reality different from things and how things are. 8. 'The past is an aspect of reality, even though no past things are. How can this be? There is no reductive explanatory answer to this question.' (2007: 491) However, according to all standard versions of presentism, only the present exists, so the authors felt to be forced to admit that: 9. 'The past is a present aspect of reality. ' (2007: 496) There is no contradiction between (6), saying that the past is a fundamental aspect of reality different from things and how things are, and (9) because Kierland and Monton assume that 10. Reality is not exhausted by things and how things are. (2007: 485, 491) Nonetheless, a more essential difficulty arises as a consequence of the proposed solution. Namely, the authors wanted to solve the Dilemma: either the truth-value of past-tense claims depends on the non-existing past and cannot be said to supevene on being, or it supervenes on the present reality and breaks our intuition which says that true past-tense claims should not depend on any present aspect of reality. Did they succeed? I claim that not at all because Kierland and Monton smuggle in through the back door of their ideology what they thought they had ruled out of the ontology, saying that the past is a present aspect of reality. In the solution preferred by them, to avoid the second horn of the Dilemma they assumed, firstly, that the truth-value of past-tense claims is determined by the past (3), and, secondly, that the past is a fundamental aspect of reality different from things and how things are (6), nevertheless then the first horn (if the truth-value of past-tense claims depends on the non-existing past, then it cannot be said to supevene on being) fought them off back to the second horn and forced them to reject the stipulation (let us call it after the authors P) that the past-tense claims do not depend on any present aspect of reality for their truth-value. The rationale for such a move was that reality is not exhausted by presently existing things and how things are (2007: 496). This is why, according to Kierland and Monton, 'P is not intuitively true ' (2007: 496).
The point is, however, that even if we agree to deny the existence of facts as the authors do (2007: 497), we will still believe that Socrates is part of the real past and not of a present aspect of reality, so P seems to be still intuitively true contrary to what Kierland and Monton claimed. It is hard to accept that the past is a present aspect of reality (9): it is not a present aspect of reality that Socrates was a philosopher and that he was convicted by the Athenians. Srictly speaking, we have, of course, a history of philosophy but it is not the history that made Socrates a philosopher but rather the other way around: that he was a philosopher made our history and us as we are. And we are resposible in no way for his conviction by the Athenians and it is no aspect of the present world; we are really responsible but only for our own faults. McFetridge, Keller, and by Sanson and Caplan noted that we do not want a mere correlation between what is true and what the world is like; rather, we want the truth of a proposition to be explained by how things are in the world. 5 Kierland and Monton do not offer us such an explanation. As long as a plausible explantion of the ontological status of the past is not offered, such a solution cannot be regarded as reasonable.
Of course, our knowledge is fallible and perhaps P is just the thesis that should be changed. I would like to show, however, that P can be saved in a plausible way which has some other virtues for the presentists and therefore there is no need to reject it. Before I introduce the proposed improvement of presentism, I would like to briefly analyse Kierland and Monton's second strategy of solving the Dilemma, one which is connected with a different reading of P. 6 According to this second strategy, in the intuition that pasttense claims do not depend on any present aspect of reality for their truth-value, the past should be understood as a one big event (let us call it 'the past e ' after the authors) which consists of all past events, such as that Socrates died, World War II occurred, and Mt. St Helens erupted. Then- Kierland and Monton (2007: 496) claim-'''The past e occurred' is a true claim about a past event,' and P is intuitively true. One can certainly agree with Kierland and Monton, but only when the past is not a present aspect of reality (and it is a real past which means that the truth-value of past-tense claims depends on the nonexisting past), because when the past is a present aspect of reality, the second horn of the Dilemma is encountered (that true past-tense claims should not depend on any present aspect of reality), contrary to what is claimed by the authors.
So, is the patient terminally ill without any chances of surviving? I claim that this is not the case at all. Kierland and Monton (2007) were on the right track when they claimed that the truth-value of past-tense claims is determined by the past (3); and that the past is a fundamental aspect of reality which is different from things and how things are (6); and that the past is what has happened: what things existed and how they were (7). However, it is hard to accept that the past is a present aspect of reality as it was argued above.
Where, then, did Kierland and Monton make a slip? The point is that the fundamental thesis of presentism (2) 'The only things that exist are presently existing things' seems, at first glance, to block any other understanding of the past than that which is offered by (9) ('the past is a present aspect of reality'). The thesis (8), let us recall, 'The past is an aspect of reality, even though no past things are. How can this be? There is no reductive explanatory answer to this question,' seems to confirm this diagnosis of the authors' approach. However, our intuition, which is so highly estimated by them, together with our everyday experience, gives us a simple answer to these exciting mysteries: the past is not a present aspect of reality, but by definition past. And it is the passage of time that is responsible for the fact that Socrates and his contemporaries do not exist, although they did exist, and this concerns all other past things as well.
Strictly speaking, Kierland and Monton maintain that the past 'is what has happened: what things existed and how they were' (7), however, they do not explain what this is and how it is possible that something has existed and does not exist? Nor have they explained what is the origin of the phenomena that something existed but no longer exists. They simply declare that there is no reductive explanatory answer to this question (8), and we are left in the dark as to how this is possible; the past is declared to be brute and that it is a present aspect of reality. Sider (2001: 39-41) claims that introducing 'primitive tensed properties of the world' as a solution to the grounding objection is a case of cheating. Answering this objection, Kierland and Monton compare brute past presentism to brute disposition and brute counterfactuals (2007: 494) and suggest that the latter 'can be reductively explained' so they can be accused of cheating. And they add: 'Maybe something similar can be said about a brute past, but that requires independent motivation.' So, let us show that such a motivation is known for philosophers and, what is more, enjoys a very long pedigree. Namely, let us look at the following passage from the 11th book of St. Augustine's Confessions: Boldly for all this dare I affirm myself to know thus much; that if nothing were passing, there would be no past time: and if nothing were coming, there should be no time to come: and if nothing were, there should now be no present time. Those two times therefore, past and to come, in what sort are they, seeing the past is now no longer, and that to come is not yet ? As for the present, should it always be present and never pass into times past, verily it should not be time but eternity. If then time present, to be time, only comes into existence because it passeth into time past; how can we say that also to be, whose cause of being is, that it shall not be: that we cannot, forsooth, affirm that time is, but only because it is tending not to be? (St. Augustine 1912: 239) This observation is simple but one which is hard to overestimate: if nothing were passing, there would be no past time, in other words, if there was no flow of time, there would be no past time. It means that every presentist who wants to speak seriously about past time, should accept the existence of the flow of time. 7 Then, undoubtedly, someone who claims, as Kierland and Monton do, that the past is a fundamental aspect of reality (6), should accept existence of the flow of time. And, naturally, St. Augustine offers us in this way an explanation of the origin of past time which is lacking: time present, to be time, only comes into existence because it passes into time past. What is also important, our intuition and our experience strongly confirm that it is the flow of time which is responsible for the fact that Socrates existed and was convicted, and that he does not exist any more. Unfortunately, conceptions such us the flow of time or becoming do not appear in Kierland and Monton's paper.
Thus the question arises as to why Kierland and Monton, as is the case with many other presentists, did not refer to the flow of time? There seem to be three reasons which are responsible for this: firstly, conceptual difficulties connected with the idea of the flow of time; secondly, the fact that the main thesis of presentism is introduced with only one ontological thesis (1 or 2), which says nothing about the flow of time; and thirdly, the fact that presentism makes use of the notion of existence which allows only the dichotomy of exist or do not exist, and which does not permit the introduction of the metaphysical category for objects that existed and do not exist. In other words, the notion of existence exploited by the presentists, and by Kierland and Monton as well, has a static character, that is, this is the notion of existence (or non-existence) at some fixed moment of time, and it does not make it possible to talk about ontological changes in time. 8 All these reasons deprive presentism of dynamics and make it impossible to find an ontological basis on which the truth-value of past claims can supervene.
So, in summary, the proposed diagnosis of the weakness of BPP and other static versions of presentism is the following: the lack of dynamics and-in consequence-the lack a plausible metaphysical base on which truth-value of past-tense sentences can supervene. The next section tries to find a remedy for these flaws.
A Remedy with an Extra Bonus
It was already suggested in the previous section, that every presentist who wants to speak seriously about past time should accept the existence of the flow of time and introduce it into her/his ontology. Kierland and Monton were close to the solution which is going to be proposed in this paper when they claimed that, for the presentist, the past is a fundamental aspect of reality (6) and that the past is what has happened: what things existed and how they were (7). Unfortunately, they also maintained that the past is brute and although the brute past is supposed to form a sui generis metaphysical category, an explanation of what the brute past is, according to the authors, unattainable: The brute past has an intrinsic nature. Given what we say next, we like to think of this intrinsic nature in terms of the past having a certain 'shape'. This shape does not consist in a structure of things having properties and standing in relations to one another. The past is an aspect of reality, even though no past things are. How can this be? There is no reductive explanatory answer to this question. The crucial feature of brute past presentism is that is postulates a sui generis metaphysical category, one independent of things and how they are. (Kierland and Monton 2007: 491) Of course, any opponent of the idea of the introduction of the flow of time into the ontology of presentism can object: not so fast, wait a moment: have you perhaps explained what is the flow of time, or perhaps you are trying to explain ignotum per ignotius? S/he may also object that a simple admixture of the thesis about the flow of time to her/his main thesis (1/2) does not change the situation of the presentist too much because, according to the main ontological thesis of presentism, only the present exists and thus there will still be missing a plausible metaphysical category of the past on which the truth-value of past-tense claims can supervene.
I would answer such doubts by saying that a deeper change in the ontological position of the presentist is indeed necessary. This is a change which introduces real dynamics into this view and allows us to say what did exist, however, does not exist. I would also add that a plausible explanation of what constitutes the flow of time was offered by Broad (1938) in terms of the absolute becoming of events, that is, their coming to pass, 9 and that this approach can be developed in the dynamic and full-blooded versions of presentism which deserve to be called dynamic presentism (DP). Let us briefly introduce two such presentist solutions developed by means of the notion becoming after Gołosz (2013Gołosz ( , 2017c, and by means of the notion of dynamic existence after Gołosz (2013Gołosz ( , 2015Gołosz ( , 2018. So, let us start with the first approach and introduce this version of DP in the following form (expressed in tensed language): Becoming: The events which our world consists of become (come to pass).
where becoming, as Broad's absolute becoming, is a primitive notion which cannot be further analysed in terms of a non-temporal copula and some kind of temporal predicate. 10 This thesis expresses, of course, the reality of the flow of time, however, it is easy to show that Becoming also leads precisely to the ontological thesis of presentism. 11 To show this, we should only notice that Becoming says that events become, that is, they come into being and then they pass, and recall that, according to the long presentist tradition, the present can be identified with what exists. 12 It means exactly that only present events exist. This formulation of presentism, however, avoids the triviality objection because neither the notion of the present nor the notion of time are involved in Becoming. 13 Now, what remains is to introduce three definitions: The present : The totality of events which become (come to pass). The past : The totality of events which became (came to pass). The future : The totality of events which will become (will come to pass).
The first of these definitions was adopted following the above mentioned presentist tradition of identifying the present with what exists, the second and the third ones were assumed by analogy. Such a version of presentism has some virtues which speak for themselves: 14 9 ''To 'become present' is, in fact, just to 'become', in an absolute sense; i.e., to 'come to pass' in the Biblical phraseology, or, most simply, to 'happen'. Christensen (1993: 168): 'To be present is simply to be, to exist, and to be present at a given time is just to exist at that time-no less and no more'; and Craig (1997: 37): 'Presentness is the act of temporal being.' 13 The triviality problem for presentism consists in this that when we examine its ontological thesis, saying that the only things that exist are presently existing things, it turns out that this thesis is trivially true or trivially false, depending on the way we understand the verb 'exists': in the tensed or in the tenseless way. See, for example, Merricks (1995: 523), Savitt (2006), Gołosz (2013), and discussions of the problem in Zimmerman (2004). 14 See Gołosz (2017c: 293).
(i) According to Becoming, the present is continuously changing, which means that it allows the expression of a dynamic character of reality, which presentism in the form of a single thesis of the form (1, 2) is not able to do. (ii) It avoids the question of the rate of time's passage because-as emphasized by Broad-the notion of becoming is primitive and unrelated to anything else, and especially it is not related to time. (iii) This formulation of presentism also avoids the triviality objection because the notion of the present is not involved in Becoming and thus this thesis is not trivial. (iv) This version of presentism provides us with the metaphysical category of the past which we have sought.
From the point of view of this paper, the last virtue is especially important: this version of presentism provides us with the metaphysical category of the real past which we have sought: the past consists of the totality of events which became (came to pass). Thanks to this, it allows us to differentiate between actual events, such as, for example, the case of Socrates, which did become, and fictions such as the capture of Cerberus by Heracles, which did not become. At this point, Kierland and Monton could oppose: Becoming cannot be treaed as a remedy for positions such as BPP because we rejected the ontology of facts and our ontology is based on things and the way they are; we emphasized that fact-talk is always parasitic on something which is metaphysically more fundamental. 15 And that is why we cannot accept such a solution to our Dilemma-they could add.
I would answer such an objection by claiming that the notion of becoming and the dynamic version of presentism presented can be further developed in such a way that things and the way they are would be included in ontology as fundamental objects. This is precisely the second version of presentism which was mentioned above and which is introduced in Gołosz (2013Gołosz ( , 2015Gołosz ( , 2018. It is based on the notion of the dynamic existence of things to emphasize a fundamental difference between things and events-while existence of both things and instantaneous events has a dynamic character, the former do not cease to be but persist by enduring, that is by keeping their strict (literal or numerical) identity over time: 16 Dynamic Reality: All of the objects that our world consists of exist dynamically. 15 Kierland and Monton (2007: 497): 'we deny the existence of facts altogether. As we explained in Sect. 3, we think fact-talk is always parasitic on something metaphysically more fundamental. In the case of talk of facts about present things, it's the present things and how they are that is more fundamental. In the case of talk of facts about the past, it's the past itself which is more fundamental.' 16 See Gołosz (2018: 404). There are two opposite views on persistence: endurantism and perdurantism. According to the latter, things perdure if they persist through time by having temporal parts, and persisting things are treated as mereological aggregates of temporal parts, none of which are strictly identical with one another. The enduring of things is usually defined as a persistence over time by being wholly present at each time but, as noticed by Merricks (1994: 182), '(…) the heart of the endurantist's ontology is expressed by claims like '[object] O at t is identical with [object] O at t*'.' That is why, for the author of this paper, this second condition alone suffices for the definition of endurantism and is a better criterion of endurance, so it will be used in what follows.
where Dynamic Reality (DR) is expressed in the tensed language and the notion of dynamic existence is a primitive notion (just as Broad's absolute becoming) which can be roughly characterised by the set of postulates: (i) the notion of dynamic existence is tensed; (ii) things that dynamically exist endure; (iii) events (which are acts of acquiring, losing or changing properties by dynamically existing things and their collections) dynamically exist in the sense of coming to pass.
The term ''objects'' is here used in such a way that it applies to things and events, however things are treated here as primary objects, while events are secondary. 17 DR is accompanied by the three definitions (similarly to Becoming): The present : The totality of objects that dynamically exist. The past : The totality of objects that dynamically existed.
The future : The totality of objects that will dynamically exist.
Again, as in the case of Becoming, DR expresses at the same time the reality of the flow of time and the ontological thesis of presentism in the form of one single thesis. DR has the same virtues (i-iv) as Becoming (with swapping Becoming for DR, of course) and once again, from the point of view of this paper, the last virtue is especially important: this version of presentism also provides us with the metaphysical category of the real past which we need so much: the past consists of the totality of objects that dynamically existed. DR, however, has two essential advantages not only over Becoming, but over every other version of presentism: first, the endurance of things is here a simple logical consequence of the dynamic existence of things, that is, it is a consequence of their way of existence proposed in this thesis. Contrary to what is commonly assumed by the presentists, the enduring of things is not a logical consequence of presentist theses of type (1/2) as shown by Brogaard. 18 The second advantage is even more important: the notion of dynamic existence which is applied in it is supposed to supersede the ordinary notion of existence which is standardly used by the presentists (and eternalists as well) and which has a static character, that is, it is a fixed existence in a fixed moment of time which is not appropriate for expressing the transitory character of the present. From this that I exist-in the tensed meaning of the standard term ''exist''-in no way follows that I will not exist, neither that I am changing. Similarly-when we use tensed language-the standard notion of existence does not explain how it is possible and what it really means that the past existed and that the future will exist although both do not exist (in the tensed meaning of the term ''exist'').
17 See Gołosz (2018: 403). 18 From the idea that the present exists while the past and the future do not exist, one cannot infer that the persisting object keeps its strict identity. It is possible, after all, that an object persists in such a way that it is four-dimensional and its temporal parts (or stages)-not strictly identical with themselves-are coming consecutively into being. See Brogaard (2000: Sect. 3) and Gołosz (2018: 406-407). This is very important for two reasons: first, because it means that it makes no sense to ask whether things that dynamically existed do (statically) exist or do not (statically) exist: the notion of dynamic existence supersedes the notion of (static) existence and introduces more metaphysical categories than the latter. While the latter introduces only two fixed metaphysical categories of what exists and what does not exist, the former introduces six metaphysical categories which are continuously changing: the past (things and events that dynamically existed); the present (things and events that dynamically exist); the future (things and events that will dynamically exist); and their complements, that is, the past' (things and events that did not dynamically exist); the present' (things and events that does not dynamically exist); and the future' (things and events that will not dynamically exist). So, for example, Socrates belongs to the past, while Zeus and Apollo belong to its complement, that is, the past'. They (Zeus and Apollo, of course) belong to the present' and to the future' as well. What should be emphasised, once again, is that all six categories are continuously changing, namely the past and the future' are growing, the future and the past' are shrinking, while the present-to say it meataphorically-is 'moving' toward the direction which we call the future (I would like to empahsize here that the term 'move' was not used in DR).
There is a second reason to be considered and which is mentioned above, namely that Kierland and Monton (2007: 492) complained about the lack of a metaphysically perspicuous language for describing the 'shape of the past.' 19 The last version of dynamic presentism equipped with the notion of dynamic existence provides us with a language which allows us to talk not only about Kierland and Monton's 'shape of the past,' but also about a structure of past things, their having properties and standing in relation to one another. Thus it allows us to say, for example, that Heraclitus (who dynamically existed) didn't like Pythagoras (who dynamically existed), or that Heraclitus (who dynamically existed) was a native of the city of Ephesus (which dynamically existed). The language of DP also enables us to talk about the past, the present and the future, as they are changing, and to differentiate between objects like Socrates-on the one hand-that did dynamically exist, and Zeus and Apollo-on the other-that did not dynamically exist. What this means, and is of fundamental importance, is that, in this way, the notion of dynamic existence and DP provide presentists with a rationale for introducing and making use of tensed language: 20 this is exactly the dynamic existence of the world which is resposible for this that it is continuously changing, and that although Socrates (dynamically) existed, does not (dynamically) exist any more, and we should speak about him using the past tense. And, of course, the same concerns all other past objects.
At the end of this section, I would like to mention an additional bonus provided by DP concerning the problem of being-supervenience (or truthmaking). Namely, the presentists who try to respond to the objection from being-supervenience, usually assume that they need not look for an ontological basis for (contingent) future-tense claims because the claims about the future are not determined and lack truth-value. 21 But what is the origin of this asymmetry between the fixed past and (probably) open future? We cannot change the past no matter how strongly we would like to do this. But we have traces of it in our memory and in the world around us. Conversely, the future seems to be open-our experience seems to suggest this openness and quantum mechanics confirms this conviction-and perhaps it depends on our actions. How it is possible? Physics is silent on this issue; the physical laws describing the electrodynamic, strong and gravitational interactions are invariant under time reversal and as such cannot distinguish any direction of time. In turn, weak interactions are not time reversal invariant, but they are not involved in the processes leading to the coming into being of the traces of the past which we observe in everyday life. 22 Presentism in its standard form (1 or 2) is also silent on this issue: no 'move' of the present and no asymmetry of time follows from (1 or 2). DP in both versions provides us with a simple metaphysical solution to this exciting mystery: the past has already become or dynamically existed, as such is fixed and directly unavailable, we can only get to know it by its traces. Contrary to the past, which dynamically existed (or became) and as such is fixed and cannot be changed, the future looks as if it were open: it does not dynamically exist yet, it will only come into (dynamic) existence and, for this reason, we can probably influence it, at least sometimes. 23 It should also be emphasized that while brute past presentism can be accused of being an ad hoc solution to the objection from being-supervenience, 24 DP cannot be: both versions of presentism, and the notions of becoming and dynamic existence which these versions of presentism are based on, were introduced as a solution to the difficulty with the explanation of what the flow of time consists in and the explanation of the ontological status of past, present and future objects is an additional bonus.
Conclusions
I have tried to show that Kierland and Monton interestingly extended our knowledge about presentism and its ontological basis for past-tense claims when they proposed that the past should be regarded as a fundamental aspect of reality. Unfortunately, after this move they retreat and assumed that this fundamental aspect of reality is a present aspect of reality: the rub is that the past is a past and not a present aspect of reality. It was also recalled-as noticed long ago by St. Augustine-that if there was no flow of time, there would be no past time. So, the cure proposed in this paper consists of including the flow of time into the ontology of presentism and making presentism a dynamic view of reality. The world in which we live-as we see it-is 21 See the famous Aristotle's problem of sea-battle tomorrow (De Interpretatione: ch. 9), and, for example, Kierland and Monton (2007: 486). 22 See, for example, Sklar (1974) and Gołosz (2017a, b). 23 This 'can' follows from the possibilty which cannot be a priori excluded that our world will turn out to be deterministic after all because, for example, the quantum gravity which we are looking for will be deterministic in accordance with Einstein's expectations. But even if it is determined and not open, it does not dynamically exist yet and will just come into (dynamic) existence. 24 See fn. 3. the world in statu nascendi, in which everything is changing and DP tries to describe such a world. The dynamic ontology of this view provides the presentists with the correct ontological basis for both present-and past-tense claims.
Two versions of DP were presented which are based on the notions of becoming and dynamic existence and which provide us with metaphysical categories of the past-real and dynamic past as we know from our experience-the past as the totality of events which became (came to pass in the first version of dynamic presentism), and the past as the totality of objects that dynamically existed (in the second version of dynamic presentism). Not only do they introduce the past as growing, as it should be expected, but also both introduce the asymmetry between the fixed past and the (probably) open future (events,which are acts of acquiring, losing or changing properties by dynamically existing things and their collections, dynamically exist in the sense of coming to pass).
The latter of these two versions (based on the notion of dynamic existence) seems to be more promising because it entails as its direct consequence the enduring of things which is commonly assumed by the presentists, and-what is even more important-it eliminates a potential tension between becoming and existence which is still present in the former version of DP because the notion of existence is not changed there. The version based on the notion of dynamic existence can eliminate this tension because the notion of dynamic existence which is applied in it is supposed to supersede the ordinary notion of existence which is standardly used by presentists (and eternalists as well), and which has a static character (it is a fixed existence in a fixed moment of time). Thanks to this, instead of only two fixed metaphysical categories of what exists and what dos not exist, we receive six metaphysical categories which are continuously changing: the past (things and events that dynamically existed); the present (things and events that dynamically exist); the future (things and events that will dynamically exist); and their complements, that is, the past' (things and events that did not dynamically exist); the present' (things and events that do not dynamically exist); and the future' (things and events that will not dynamically exist). The future defined in such a way is (probably) open, while the past defined in such a way is fixed and provides adherents of this version of DP with a missing ontological basis on which truth-value of past-tense claims can supervene.
For all these virtues, the versions of DP presented in this paper-and especially the one based on the notion of dynamic existence-deserve to be regarded as the potential successors to traditional presentism. | 8,750 | 2020-05-11T00:00:00.000 | [
"Philosophy"
] |
Riemann Integral on Fractal Structures
: In this work we start developing a Riemann-type integration theory on spaces which are equipped with a fractal structure. These topological structures have a recursive nature, which allows us to guarantee a good approximation to the true value of a certain integral with respect to some measure defined on the Borel σ -algebra of the space. We give the notion of Darboux sums and lower and upper Riemann integrals of a bounded function when given a measure and a fractal structure. Furthermore, we give the notion of a Riemann-integrable function in this context and prove that each µ -measurable function is Riemann-integrable with respect to µ . Moreover, if µ is the Lebesgue measure, then the Lebesgue integral on a bounded set of R n meets the Riemann integral with respect to the Lebesgue measure in the context of measures and fractal structures. Finally, we give some examples showing that we can calculate improper integrals and integrals on fractal
Introduction
Fractal structures were introduced in [1] to study non-Archimedean quasimetrization, although it is true that they have a wide range of applications.Some of them can be found in [2] and include metrization, topological and fractal dimension, filling curves, completeness, transitive quasi-uniformities, and inverse limits of partially ordered sets.
One of the most recent applications of fractal structures is to construct a probability measure by taking advantage of their recursive nature.For some reference on this topic, we refer the reader to [3,4].The idea of this construction is defining a pre-measure on the elements of a fractal structure or some topological structures induced by it, so that, from several sufficient conditions and characterizations, we have extended that pre-measure to a probability measure on the Borel σ-algebra of the space.Indeed, the authors proved that each probability measure defined in a space with a fractal structure can be constructed by following the procedure mentioned.
On the other hand, the classical theory of Riemann-type integration starts from a bounded function on a compact rectangle of R n and a collection of almost disjoint compact sets whose union is the said rectangle, which we refer to by the name partition.From the function and the partition, the lower and upper Darboux sums are defined, and by taking the supremum and the infimum of these sums over all the possible partitions, we get the lower and upper Riemann integrals, respectively.Section 2.3 recalls, in more detail, some different basic results and notions of this theory.Talking about a partition in the environment of the calculation of Riemann-type integrals suggests considering a fractal structure so that, based on each of its levels, we can obtain the lower and upper sums.Consequently, it will make sense to talk about a Riemann integral, although the volume can be replaced by a measure that is defined in the σ-algebra of the space in which we are working.Furthermore, it makes sense to think that considering a higher level of the fractal structure can guarantee a better approximation to the true value of the integral.Thus, interest arises in studying the application of fractal structures to the development of a Riemann-type integration theory, with respect to a certain measure defined in the space, and that is the main objective of this work.For this purpose, we first give the notion of Darboux sums with respect to a measure and a fractal structure in Section 3.After that, we introduce the notion of a Riemann-integrable function with respect to a measure and a fractal structure on a certain space in Section 4 and prove a Riemann theorem in this context (see Section 5).Moreover, in Section 6 we prove that if a bounded function is Riemannintegrable, its integral does not depend on the chosen fractal structure, so we just have to refer to the measure defined on the σ-algebra of the space.It is also shown that the integral introduced coincides with the Riemann integral in R n and with the Lebesgue integral with respect to the measure.In the last section, we show some examples to illustrate this theory.Finally, it is worth highlighting that in the literature there are already other works related to the calculation of Riemann-type integrals on other types of spaces.For example, in [5][6][7][8].
Fractal Structures
Despite being introduced in [1] for a topological space, fractal structures can be defined in a set, and this will be the definition we use in this work, as has been used previously in other works.
First, recall that a cover Γ 2 is a strong refinement of another cover Γ 1 , written as The definition of a fractal structure is as follows.
Definition 1.
A fractal structure on a set X is a countable family of coverings Γ = {Γ n : n ∈ N} such that Γ n+1 ≺≺ Γ n .The cover Γ n is called the level n of the fractal structure.
A fractal structure is said to be finite if each level is a finite covering.In what follows, we introduce two simple examples of fractal structures.The first is defined in [0, 1] and its levels are given by n ] : k = 0, . . ., 2 n − 1} for each n ∈ N. Note that the previous fractal structure is finite (since it has a finite number of elements at each level).However, if we consider the Euclidean space R, it is defined as the countable family of coverings Γ = {Γ n : n ∈ N}, where In both cases, Γ is known as the natural fractal structure.
A fractal structure induces (see [1]) a transitive base of a quasi-uniformity given by {U Γ n : n ∈ N}, where If Γ is a fractal structure on a set X and A ⊆ X, the fractal structure induced on A (see [1]) is defined as
Measure Theory
Now we recall some definitions related to measure theory from [9].Let X be a set, then there are several classes of sets of X.If R is a non-empty collection of subsets of X, we say that R is a ring if it is closed under complement and finite union.Furthermore, given Q is a non-empty collection of subsets of X, it is said to be an algebra if it is a ring such that X ∈ Q.Moreover, a non-empty collection of subsets of X, A, is a σ-algebra if it is closed under complement and countable union and X ∈ A. If A is a σ-algebra on X, then the pair (X, A) is called a measurable space.
For a given topological space, (X, τ), B = σ(τ) is the Borel σ-algebra of the space, that is, it is the σ-algebra generated by the open sets of X.
A set mapping is said to be σ Definition 2 ([9], Section 7).Given a measurable space (Ω, A), a measure µ is a non-negative and σ-additive set mapping defined in A such that µ(∅) = 0.The triple (Ω, A, µ) is called a measure space.
A measure is monotonic (which means that if A, B ∈ A are such that A ⊆ B, then µ(A) ≤ µ(B)).It is also continuous from below: if (A n ) is a monotonically non-decreasing sequence of sets (which means that
Riemann Integration Theory
In this subsection, we base on [10] in order to give a generalization of the n-dimensional Riemann integration theory.
A compact interval in the n-dimensional Euclidean space R n is a product Let f be a bounded function on an interval J and let D be a partition of J.The lower and upper Darboux sums of f in D are defined, respectively, by and |D| denotes the family of all sets of the partition D. Note that if D is a refinement of D ′ , then s( f ; D) ≥ s( f ; D ′ ) and S( f ; D) ≤ S( f ; D ′ ) and, if we consider a common refinement, it can be proved that s( f , D) ≤ S( f ; D ′ ) for each pair of partitions D and D ′ .Now, we recall the definition of the lower and upper Riemann integrals of f over J.
The next theorem is sometimes referred to as Riemann's theorem (see, for example, [Th.7.1.11] [11]).Theorem 2. A function f is Riemann-integrable if and only if there exists a number L ∈ R with the following property: for each ε > 0, there exists δ > 0 such that |S( f ; D; ξ) − L| < ε for each partition D with ||D|| < δ and for each selection ξ = (x D ) D∈D for D.Moreover, if f is Riemann-integrable, then L = f .
Darboux Sums with Respect to a Measure and a Fractal Structure
In this section, we see how to define the Darboux sums from a measure defined on a space with a fractal structure.This measure plays a similar role to that played by the Lebesgue measure in the classical theory of Riemann integrals when defining Darboux sums.For that purpose, we first need to give some conditions on the fractal structure we define on the space.Definition 4. Let (X, S, µ) be a measure space and Γ be a fractal structure on X. Γ is said to be µ-disjoint if the following conditions hold: 1.
Γ n ⊆ S is countable for each n ∈ N.
2.
µ(B ∩ J) = 0 for each B, J ∈ Γ n such that B ̸ = J and each n ∈ N.
Darboux sums are defined for each level of a fractal structure in a space as follows: Definition 5. Let (X, S, µ) be a measure space, Γ = {Γ n : n ∈ N} be a µ-disjoint fractal structure, and f : X → R be a bounded function.Then, for each J ∈ Γ n , we set so that the lower and upper Darboux sums with respect to µ for each level of the fractal structure are given by respectively, when the series are absolutely convergent.
Next, we see that the first condition in Definition 4 allows us to calculate both the Darboux sums and the measure of each element used in them, while the second condition means that overlapping is not a problem.
Proof.First, we prove that On the other hand, given A ∈ Γ n and x ∈ A, since Γ * n is a covering, there exists B ∈ Γ * n such that x ∈ B and therefore Now we see that Γ is a fractal structure on X, that is, it is a countable family of coverings of X such that Γ n+1 ≺≺ Γ n for each n ∈ N.
Let n ∈ N. Given x ∈ X, there exist B ∈ Γ n and J ∈ Γ * n such that x ∈ B and x ∈ J, since Γ n and Γ * n are both coverings of X.Hence, x ∈ B ∩ J ∈ Γ n , which means that Γ n is a covering of X.
On the other hand, let Finally, let and Γ is a fractal structure.Remark 1.Let (X, S, µ) be a measure space and Γ and Γ * be two µ-disjoint fractal structures on X.Then Γ ∨ Γ * is a µ-disjoint fractal structure on X.
= 0 by the monotonicity of the measure.Lemma 1.Let (X, S, µ) be a measure space and Γ be a µ-disjoint fractal structure on X. Then:
3.
Let {A i : i ∈ N} be a countable family of different elements of Γ n .Then Let A 1 , . . ., A k ∈ Γ n be such that they are all different.Reasoning by induction on k, we prove that µ that the equality holds for a certain k ∈ N. Let us see that it also holds for k + 1: First, we have . Moreover, the induction hypothesis lets us write The fact that µ is sub-σ-additive implies that and the fact that If we join all the previous equalities, we conclude that
3.
Let {A i : i ∈ N} be a countable family of different elements of Γ n .Since where we have taken into account the previous item in the first equality.
The next proposition gathers some relationships between both Darboux sums with respect to two µ-disjoint fractal structures defined on a space.Proposition 2. Let (X, S, µ) be a measure space and Γ = {Γ n : n ∈ N} and Γ * = {Γ * n : n ∈ N} be two µ-disjoint fractal structures on X.Let f : X → R be a bounded function and n, m ∈ N. Then: Then H ⊂ J 1 ∩ J 2 and since µ is monotonic and µ(J 1 ∩ J 2 ) = 0 (because the fractal structure is µ-disjoint), we have µ(H) = 0, a contradiction.Item (a) lets us write and, by item (b), it follows that Now, by item (c), and, by the first item, it holds that Now we use item (c), so that and item (b) lets us write
3.
Let Γ = Γ ∨ Γ * and p = max{n, m}.Then Γ p ≺≺ Γ n , Γ * m .Note that Γ is µ-disjoint by Remark 1.By the previous items, we have that We can also observe the next result from the previous proposition.
Remark 2. Under the hypothesis of the previous proposition, it follows that L( f ;
Riemann Integral with Respect to a Measure and a Fractal Structure
Once we know how to define the lower and upper Darboux sums when given a bounded function, a measure µ, and a µ-disjoint fractal structure on a space X, the next step is defining the lower and upper Riemann integrals with respect to the measure and the fractal structure.Moreover, we can give some conditions so that both integrals coincide.Definition 6.Let (X, S, µ) be a measure space, Γ = {Γ n : n ∈ N} be a µ-disjoint fractal structure on X, and f : X → R be a bounded function.We define the lower and upper Riemann integrals of f with respect to µ and Γ on X as follows: 1.
Upper Riemann integral of f with respect to µ and Γ:
2.
Lower Riemann integral of f with respect to µ and Γ: f .Definition 7. Let (X, S, µ) be a measure space, Γ = {Γ n : n ∈ N} be a µ-disjoint fractal structure on X and f : X → R be a bounded function.f is said to be Riemann-integrable with respect to µ and Γ on X if X (µ,Γ) f is finite and X If f is Riemann-integrable with respect to µ and Γ on X, we define the Riemann integral of f with respect to µ and Γ on X, We denote by R(X; µ; Γ) the set of Riemann-integrable functions with respect to µ and Γ on X.
Proposition 3. Let (X, S, µ) be a measure space, Γ = {Γ n : n ∈ N} be a µ-disjoint fractal structure on X and f : X → R be a bounded function.The following statements are equivalent: 1.
2.
For each ε > 0, there exists n ∈ N such that U( f For each ε > 0, there exists n 0 ∈ N such that U( f Proof.(1 ⇔ 3) By definition of Riemann integral, we have that what is equivalent, in terms of convergence, to claim that for each ε > 0, there exists n 0 ∈ N such that U( f ; Γ n ; µ) − L( f ; Γ n ; µ) ≤ ε for each n ≥ n 0 .
Note that the third condition in the previous proposition is equivalent to = 0 and hence, by Proposition 3 again, f ∈ R(X; µ, Γ * ).
Riemann Theorem for Fractal Structures
In what follows, we prove a theorem which is analogous to the Riemann theorem in R n , but for bounded functions defined on a space with a µ-disjoint fractal structure.This is one of the main results of this work.Definition 8. Let Γ be a fractal structure on a space X such that Γ n is countable for each n ∈ N. A selection for Γ n is a collection of points ξ := (x A ) A∈Γ n such that x A ∈ A for each A ∈ Γ n .Definition 9. Let (X, S, µ) be a measure space, Γ = {Γ n : n ∈ N} be a µ-disjoint fractal structure on X and f : X → R be a bounded function.Let n ∈ N and ξ = (x A ) A∈Γ n be a selection for Γ n .The Riemann sum for f relative to Γ n , ξ and µ is denoted by S( f ; Γ n ; ξ; µ) and is defined as follows: Theorem 3 (Riemann's Theorem).Let (X, S, µ) be a measure space, Γ = {Γ n : n ∈ N} be a µ-disjoint fractal structure on X, f : X → R be a bounded function and C ∈ R. The following statements are equivalent: Given ε > 0, there exists n 0 ∈ N such that |C − S( f ; Γ n ; ξ n ; µ)| < ε for each n ≥ n 0 and each selection for Γ n , ξ n .
3.
Given ε > 0, there exists n ∈ N such that |C − S( f ; Suppose that ξ m is a selection for Γ m and m ≥ n 0 .It follows that 2 for each ξ, selection for Γ n .We distinguish two cases: Then we have that Then we have that Hence, in both cases, we can write (2 ⇔ 4) It is immediate.
Riemann Integral with Respect to a Measure
The next result allows us to claim that the Riemann integral of a bounded function with respect to a measure and a fractal structure, in fact, does not depend on the fractal structure.Proposition 4. Let (X, S, µ) be a measure space, Γ = {Γ n : n ∈ N} and Γ * = {Γ * n : n ∈ N} be two µ-disjoint fractal structures on X and f : Therefore, if a bounded function is Riemann-integrable with respect to a measure, µ, and different µ-disjoint fractal structures, then all the integrals have the same value.Therefore, it makes sense to introduce the following concept: Definition 10.Let (X, S, µ) be a measure space and f : X → R be a bounded function.f is said to be µ-Riemann-integrable if there exists a µ-disjoint fractal structure Γ on X such that f is Riemann-integrable on X with respect to µ and Γ.Moreover, if so, the integral is defined as From now on, R(X; µ) will denote the set of all bounded functions that are µ-Riemann-integrable on X.
The proof of the following result is straightforward.Lemma 2. Let Γ be a fractal structure on a set Y, X be a set and f : X → Y be a map.Then Once we know that the Riemann integral does not depend on the chosen fractal structure, we give some sufficient conditions to ensure that a function is Riemann-integrable with respect to a measure.Proposition 5. Let (X, S, µ) be a finite measure space and f : X → R be a bounded measurable function.Then f ∈ R(X; µ) and µ X f = f dµ.
Proof.Let ∆ = {∆ n : n ∈ N} be the fractal structure in R given by 2 n [: k ∈ Z} for each n ∈ N, and let Γ = f −1 (∆).Note that Γ is a fractal structure by the previous lemma and it is µ-disjoint since f is measurable, X has finite measure and A ∩ B = ∅ for each A, B ∈ Γ n with A ̸ = B and each n ∈ N. Now, we prove that f is Riemann-integrable with respect to µ and Γ.
Given n ∈ N, it follows that l n dµ = ∑ i∈Z i and X has finite measure, then f is Lebesgue integrable and f dµ = lim n→∞ l n dµ = lim n→∞ u n dµ.It follows from Proposition 3 that f is integrable with respect to µ and Γ and µ The previous result states that, for bounded functions and finite measure spaces, the Riemann integral with respect to a measure is the same as the classic Lebesgue integral with respect to that measure.An open question is if this result is still true for non-finite measure spaces.
Another interesting interpretation of the previous result is that the Lebesgue integral with respect to a measure can be calculated by choosing some simple and easy fractal structure, since the calculation of the Riemann integral with respect to that fractal structure and the measure is easier since it only involves the calculation of the Darboux sums and some limits.This is particularly true when it is easy to calculate the measure of the elements of the fractal structure.
An obvious consequence of the previous proposition is that continuous maps are Riemann-integrable with respect to any measure on the Borel σ-algebra.Corollary 2. Let (X, τ) be a topological space, µ be a finite measure on the Borel σ-algebra and f : X → R be a bounded continuous map.Then f ∈ R(X; µ).
Riemann Integrability vs. Riemann Integrability with Respect to the Lebesgue Measure
Functions that are Riemann-integrable (in the classic sense) in a rectangle in R N are also Riemann-integrable with respect to the Lebesgue measure, and both integrals coincide.
n ∈ N} be the natural fractal structure on R N induced on X.Then f is Riemann-integrable (in the classic sense) if and only if it is Riemann-integrable on X with respect to the Lebesgue measure λ and Γ.Moreover, if f is Riemann-integrable on X, both integrals coincide.
Proof.On the one hand, suppose that f is Riemann-integrable (in the classic sense).Let ε > 0.Then, by Theorem 2, there exists δ > 0 such that |S( f ; D; ξ) − X f | < ε for each partition D with ||D|| < δ and for each selection ξ = (x D ) D∈D for D.
Let n ∈ N be such that 1 2 n < δ and ξ be a selection for Γ n .Then it is clear that 1 2 n , which is less than δ.It follows from Theorem 3 that f is Riemann-integrable on X with respect to λ and Γ and (λ,Γ) X f = X f .On the other hand, suppose that f is Riemann-integrable on X with respect to λ and Γ and let ε > 0. By Proposition 3 there exists n ∈ N such that U( f ; Γ n ; λ) − L( f ; Γ n ; λ) < ε.Since Γ n is a partition, it follows from Theorem 1 that f is Riemann-integrable (in the classic sense).
Finally, by definition, it is clear that
f .Hence, if f is Riemann-integrable on X, then it is Riemann-integrable with respect to λ and Γ and it follows that and f : X → R be a Riemann-integrable function (in the classic sense), then f is λ-Riemann-integrable and both integral coincide, where λ is the Lebesgue measure.
Examples
In the previous section, we have shown (Proposition 6 and Corollary 3) that the classic Riemann integral is a particular case of the theory, since it is the Riemann integral with respect to the natural fractal structure and the Lebesgue measure.
Also, we have shown (Proposition 5) that, for bounded functions on finite measure spaces, the classic Lebesgue integral with respect to the measure is a particular case of the theory, since it coincides with the Riemann integral with respect to a certain fractal structure and the measure.In this case, the fractal structure depends on the function, while in the classic Riemann integral, we can always use the natural fractal structure for any function.
In this section, we give three examples in which an integral is calculated according to the theory that has been developed before.
In Corollary 3 it was shown that each Riemann-integrable function (in the classic sense) is Riemann-integrable with respect to the Lebesgue measure.The first is an example of a function that is not Riemann-integrable (in the classic sense), but it is Riemann-integrable with respect to the Lebesgue measure. of the fractal structure.By the results in [3,4] you can prove that a pre-measure defined in the elements of the fractal structure can be extended to the Borel σ-algebra.But if you are only interested in the calculation of integrals, you do not need to bother about how the extension is done or how to calculate the measure of other sets, since you only need the measure of the elements of the fractal structure in order to calculate integrals.This is similar to the case of the classic Riemann integral where you only need to know the measure of an interval in order to calculate integrals.Next, we present a simple example.
The next example shows that there exist Riemann-integrable functions with respect to a certain measure on fractal sets.Indeed, in the following, we work on the Cantor set in order to calculate integrals.
Let f 0 , f 2 : [0, 1] → R be the functions given by f 0 (x) = x 3 and f 2 (x) = x 3 + 2 3 .Recall that the Cantor set, C, is defined as the unique compact subset on [0, 1] such that C = f 0 (C) ∪ f 2 (C).Now let g : [0, 1] → [0, 1] be a function defined by the following rule: given x ∈ [0, 1], we write it in base 3. Next, we truncate it by the first 1 (if it is not the case, we consider the whole expression of x in base 3).In the resulting expression, we exchange twos by ones.Then we have a number in base 2 whose decimal value is g(x) for some x ∈ [0, 1].This function is known as devil's staircase (see, for example, [12]), and its graph can be seen in Figure 1.We are interested in the integration of the restriction of this function to the Cantor set.Now, let Γ be the natural fractal structure as a self similar set (see [13]), defined by the following levels: Note that g(J) = [(0.x 1 . . .x n ) 2 , (0.x 1 . . .x n 1) 2 ], where a i = 2x i for each i = 1, . . ., n, and hence M(g| C ; J) = (0.x 1 . . .x n ) 2 + 1 2 n and m(g| C ; J) = (0.x 1 . . .x n ) 2 .Let J ∈ Γ n for some n ∈ N. We define the set function µ by µ(J) = 1 2 n .
By [4], it is known that µ can be extended to a measure on the Borel σ-algebra.
Definition 3 .f
Let f be a bounded function on an interval J = [a 1 , b 1 ] × . . .× [a n , b n ] and D be a partition of J. Then the lower and upper sums of f over J are defined, respectively, case that both values coincide, we refer to that number by the name of the Riemann integral of f over J and denote it by J Two of the most well-known theorems in the classical theory of Riemann integral are the following ones: Theorem 1.A function f is Riemann-integrable if and only if for each ε > 0, there exists a partition D such that S( f ; D) − s( f ; D) < ε A selection for a partition D is a collection of points ξ = (x D ) D∈D such that x D ∈ D for each D ∈ D. The Riemann sum for a function f relative to a partition D and a selection ξ | 6,591.2 | 2024-01-17T00:00:00.000 | [
"Mathematics"
] |
Stop Search in the Compressed Region via Semileptonic Decays
In supersymmetric extensions of the Standard Model, the superpartners of the top quark (stops) play the crucial role in addressing the naturalness problem. For direct pair-production of stops with each stop decaying into a top quark plus the lightest neutralino, the standard stop searches have difficulty finding the stop for a compressed spectrum where the mass difference between the stop and the lightest neutralino is close to the top quark mass, because the events look too similar to the large $t\bar{t}$ background. With an additional hard ISR jet, the two neutralinos from the stop decays are boosted in the opposite direction and they can give rise to some missing transverse energy. This may be used to distinguish the stop decays from the backgrounds. In this paper we study the semileptonic decay of such signal events for the compressed mass spectrum. Although the neutrino from the $W$ decay also produces some missing transverse energy, its momentum can be reconstructed from the kinematic assumptions and mass-shell conditions. It can then be subtracted from the total missing transverse momentum to obtain the neutralino contribution. Because it suffers from less backgrounds, we show that the semileptonic decay channel has a better discovery reach than the fully hadronic decay channel along the compressed line $m_{\tilde{t}} - m_{\tilde{\chi}}\approx m_t$. With 300 $\text{fb}^{-1}$, the 13 TeV LHC can discover the stop up to 500 GeV, covering the most natural parameter space region.
I. INTRODUCTION
The discovery of the Higgs boson [1,2] completes the Standard Model (SM), but also makes the hierarchy problem more eminent. The SM interactions of the Higgs field induce quadratically divergent contributions to its mass-squared, and the largest contribution comes from the top quark loop. In order to keep the electroweak symmetry breaking scale natural, new physics is expected to be present near the weak scale to cut off the divergent contributions. In supersymmetry (SUSY), the top quark loop is cancelled by the loops of its superpartners, the stops. It is hence natural that the stops belong to the most sought-after particles in new physics searches at the LHC.
After Run 1 and the initial 13 TeV run of the LHC, ATLAS and CMS experiments have put constraints on the stop mass up to ∼ 750 GeV, assuming the stop decays to a top quark and the lightest neutralinoχ 0 1 , which is assumed to be the lightest supersymmetric particle (LSP) and stable, with mχ0 1 200 GeV [3][4][5][6]. Similar but slightly weaker bounds were also obtained if some of the stops decay through a chargino or a heavier neutralino to the LSP. Run 2 is expected to extend the reach beyond 1 TeV, at which point SUSY as a solution to the hierarchy problem may be strongly questioned. However, the current searches leave some gaps in the lower stop mass region. In particular, if mt ≈ m t + mχ, the top quark and the neutralino from the stop decay are almost static in the stop rest frame. Consequently, in the lab frame the top and the neutralino will be collinear with pχ/pt ≈ mχ/mt. In such cases, the stop pair production events will look almost identical to the top quark pair production, as the two neutralinos tend to travel back to back, resulting in a cancellation of their momenta and leaving no trace ofχs. This is the reason why no experimental limit has been set upon this compressed region so far.
One possible way proposed in Refs. [7][8][9] to explore the compressed region is to consider events of stop pair production with a hard initial state radiation jet (J ISR ). From momentum conservation, therefore both neutralinos tend to be emitted in opposite direction to the ISR jet, resulting in a significant amount of missing transverse momentum ( / p T ). For the fully hadronic decay events, the / p T mainly comes from the neutralinos. Using Eq. (1), we see that the ratio between / p T and p T (J ISR ) (defined as R M in Ref. [8]) is roughly equal to the ratio between the neutralino and the stop masses, which is strictly between zero and one. It can be a useful kinematic variable to distinguish the stop events where R M should be close to mχ/mt from the SM top background events where R M is expected to be close to zero [7][8][9].
As for the semileptonic and dileptonic decays of the stops, R M becomes less informative if the neutrinos' contribution to / p T cannot be separated from that of neutralinos. However, for semileptonic events, if one exploits the kinematics unique to the compressed region, it is possible to reconstruct the top quark that decays leptonically, hence retrieving a relation similar to Eq. (2). Another benefit of requiring a lepton in the final states is that it vetoes QCD backgrounds, which suffer from large uncertainties under high jet multiplicities.
In this paper we demonstrate the reconstructions of semileptonic decays of the stop pair production in the compressed region, and show that it is very useful for stop searches. In Sec. II, we analyze the kinematics of the semileptonic events and discuss the reconstruction of missing transverse momenta from the neutrino and the neutralinos. In Sec. III we describe in detail a search done for the benchmark mass point mt = 400 GeV, m χ = 226.5 GeV as an example of our method. We also compare the significances obtained from our method and which we will assume to be the neutrino momentum component perpendicular to the ISR jet in the transverse plane for the following analyses. Once the J ISR is identified, p ⊥ T ν is uniquely determined from the experiment.
We first consider the case mt = m t + mχ. For the leptonically decaying top quark, there are three mass-shell equations in addition to Eq. (3): Given the measured momenta of the lepton and the b-jet, the three equations together with the p ⊥ T ν allow us to solve for p ν . Taking the differences of the 3 mass-shell equations followed by plugging in the p ⊥ T ν from Eq. (3), we can reduce them to one quadratic equation for E ν , the kinetic energy of the reconstructed neutrino.
The quadratic equation, if solvable, provides in general two different real solutions for E ν . We will discuss how we select the solution later. After E ν is determined, we substitute it back into the original mass-shell equations, then the full momentum of the reconstructed neutrino can be retrieved. Finally, with the knowledge of p T ν , the component of the neutrino momentum antiparallel to p T (J ISR ), we can subtract the neutrino contribution from / p T and get a relation similar to Eq. (2): where we define the variableR M as the modified R M adapted to the semileptonic decays.
With a set of proper kinematic cuts, a clear peak in theR M distribution for the stop pair production can be identified, as we will show later.
As we discussed above, the quadratic equation in general can give two possible solutions for E ν . To choose between them, we investigate the kinematics of semileptonic decays for tt + J ISR and its main background tt + J ISR . As an illustration, we generate these events at the parton level for a benchmark of mt = 400 GeV and m χ = 226.5 GeV. The charged lepton and the neutrino from the W decay on average have the same energy if the W boson is longitudinally polarized, because they tend to be emitted in directions perpendicular to the W momentum. On the other hand, for transverse W decays the neutrino tends to be more energetic than the charged lepton. Because W bosons coming from the top decays are dominantly longitudinally polarized, the energy distributions of the neutrino and the charged lepton are similar. We can see in Fig Details about these effects are beyond the scope of this paper. MET cut for both the stop and top pair production. All distributions are normalized to one. The minor asymmetry that appears in the log(E ν /E ) distribution without the MET cut is because of the spin correlation between the neutrino and the W boson.
III. A CASE STUDY
As an illustration of our method, we describe in details the search done for a point mt = 400 GeV, m χ = 226.5 GeV in the parameter space of the compressed region. The dominant SM backgrounds for the semileptonic decay oftt + J ISR are the semileptonic and dileptonic decays of tt + J ISR . The reason that the dileptonic decays are important is mainly due to the imperfect lepton isolation. Since the top and its decay products are highly boosted antiparallel to the hard ISR jet, the lepton tends to have a small ∆R separation from the b jet, therefore has a non-negligible probability of failing the lepton isolation criteria. Both of these backgrounds have a similar topology to the signal, consequently they have a good chance of solving Eqs. (3) to (6) and yielding a sensibleR M value lying between 0 and 1.
Other SM backgrounds include the single or pair production of vector bosons (V ) with jets and ttV . Even though V + jets and V V + jets have relatively large cross sections, they seldom produce sensible solutions for the equations imposed by the signal kinematics. The small fractions that give real solutions rarely pass our selection cuts either. As a result, they give much less yields compared to tt + J ISR . ttV , on the contrary, has the kinematic features akin to the signal, but suffers from a tiny cross section. As a result, contributions of other SM backgrounds are negligible compared to the main backgrounds. Because one isolated lepton is required in the final state, the hadronic decays of the top pair production and the pure QCD backgrounds are also negligible.
Besides the SM backgrounds, the dileptonic decay oftt + J ISR can be an irreducible background to the signal. However, this process has a much smaller cross section compared to the SM processes and it is effectively negligible.
A. Signal and background generations
We use MadGraph 5 [10] and Pythia 8 [11] to generate events for both the background and the signal events. MLM matching scheme is turned on to prevent double-counting between the matrix element calculation and the parton shower [12]. The detector simulation is performed by Delphes 3 [13] with the anti-k t jet algorithm [14]. We normalize the background cross sections to the LHC 13 TeV top production [15][16][17][18][19]. A K-factor of 1.29 is applied to both semileptonic and dileptonic decays of the tt backgrounds. For the signals, the production cross section is normalized to LHC 13 TeV NLO+NLL results [20].
B. Event selection
The selection for the events of interest starts with at least 4 jets with one or more btags and exactly one isolated lepton. The b-tagging efficiency is set to be 80% with a misidentification rate of 0.015 [21]. Events with τ -tagging are vetoed. The non-b-tagged jet with the hardest p T is our ISR jet candidate. In particular, it must satisfy p T ≥ 475 GeV. The second and third hardest jets must satisfy p T ≥ 60 GeV. In order to ensure that the ISR jet is approximately in the opposite direction of the neutralino momentum sum, we require that |φ J ISR − φ MET | ≥ 2. As shown in Fig. 1, a MET cut effectively eliminates most of the SM backgrounds whose missing momentum mainly comes from the neutrinos, hence an MET cut > 200 GeV is imposed. Another useful kinematic variable is ∆φ ,MET , the azimuthal angle difference between the lepton and the missing transverse momentum. Since the main source of the missing momentum for the backgrounds is from the neutrino, it tends to be more collinear with the lepton for the background events compared with the signal events. Fig. 2 shows the ∆φ ,MET distributions for the signal and the backgrounds for the benchmark. After a cut on ∆φ ,MET > 0.9, most of the semileptonic background can be suppressed. However, this cut is less effective on the dileptonic background, because its MET is the sum of two neutrinos' momenta, which results in a wider ∆φ ,MET distribution.
A more sophisticated estimate to take into account the difference between the shapes of the signal and background can be obtained by the likelihood method. The likelihood ratio between signal plus background hypothesis and background only hypothesis is given by The theoretical predictions for each bin {s} and {b} are taken from the MC simulation, i.e. Assuming that the background in each bin is a normal distribution around its central value b with an uncertainty σ b , the new expression for the likelihood ratio is obtained by The integration can be done numerically and the upper and lower bounds of the integration are chosen to be b ± 5σ b . The significance obtained is a function of the fractional uncertainty σ b /b exp , as shown in Fig. 4, where we see that the significance can still maintain as high as 5σ even with a 20% uncertainty in the background normalization.
To compare our result with the study based on fully hadronic final states, we repeat the analysis done by Ref. [8] for the benchmark. Fig. 3b shows the result obtained after applying the selections adopted in Ref. [8]: p T (J ISR ) > 700 GeV, 3 sub-leading jets with p T > 60 GeV, benefit from requiring a lepton in the final state, therefore enjoy a smaller background compared to the hadronic stop decays. After applying a cut at 1 ≥ R M ≥ 0.42, we get 57 signal yields and 232 background yields for the fully hadronic channel, which roughly corresponds to a 4σ significance.
IV. RESULTS AT LHC 13 TEV
We have demonstrated that our method can produce a large signal significance for a 400 GeV stop in the compressed region with 300 fb −1 integrated luminosity in the case study. To result, the sum of their momenta may no longer be strictly antiparallel to J ISR , thus our assumption that the neutrino is solely responsible for / p ⊥ T (Eq. (3)) is less valid. TheR M value obtained by solving these equations will be smeared by the error in Eq. (3) and the smearing is estimated to be On the other hand, when the stop is lighter than the sum of m t and mχ, it will decay via the virtual top quark. Since the LSPχ is a stable particle, it must be produced on shell.
The virtual top will be almost static in the rest frame of the stop, therefore Eq. Finally a scan of (mt, mχ) in the compressed region is performed based on our method.
The result is shown in Fig. 7. The scan is done along the mt − (mχ + m t ) = 30, 0, −30 GeV lines. The significances are calculated using the simple expression Eq. (8) after applying the selections discussed in Sec. III, which are: • The second and third hardest jets with p T > 60 GeV.
In Table I, we present the significances for all the points we studied in the compressed region. As expected, the mt − (mχ + m t ) = −30 GeV line achieves as great significances can be seen clearly that our method can cover a wide mass range when mt − mχ m t .
The curves are the exclusion limits from ATLAS [3] as the mt − (mχ + m t ) = 0 line. They even perform better for lighter stops. This is because the R theory M is higher for heavier mχ given the same mt, which means more events Overall, the final significances of the three lines agree well with our earlier observation from
V. CONCLUSIONS
In this paper we investigate the stop search from the direct stop pair production in the compressed region, using the semileptonic decay mode. With a hard ISR jet, the neutralinos Compared with the fully hadronic decay channel, our method for the semileptonic channel requires more sophisticated kinematic reconstruction, but suffers from less SM backgrounds.
As a result, we show that the semileptonic channel can have a better reach than the fully hadronic channel along the compressed line mt − mχ = m t . For 300 fb −1 integrated luminosity at LHC 13 TeV, the semileptonic channel can have a discovery reach of the stop mass up to about 500 GeV, in comparison to ∼ 400 GeV for the fully hadronic channel. Even though our kinematic equations are strictly valid only for mt − mχ = m t , as long as the deviations from this relation is small, the kinematic reconstruction still works pretty well.
The reach is somewhat degraded for mt − mχ > m t but not for mt − mχ m t .
The stops hold the key to the SUSY solution to the hierarchy problem. Their searches are indisputably important. The traditional stop searches are ineffective for a spectrum of mt − mχ ≈ m t . By resorting to a hard ISR jet, one can construct kinematic variables which can be used to distinguish the stop signal from the very similar SM top backgrounds. Future LHC runs will have a significant coverage of the stop mass even in the compressed region, probing the heart of natural SUSY.
ACKNOWLEDGMENTS
We would like to thank Zhangqier Wang for discussion on the likelihood method. This work is supported in part by the US Department of Energy grant DE-SC-000999. | 4,173.4 | 2016-03-31T00:00:00.000 | [
"Physics"
] |
Quantum mechanical path integrals in curved spaces and the type-A trace anomaly
Path integrals for particles in curved spaces can be used to compute trace anomalies in quantum field theories, and more generally to study properties of quantum fields coupled to gravity in first quantization. While their construction in arbitrary coordinates is well understood, and known to require the use of a regularization scheme, in this article we take up an old proposal of constructing the path integral by using Riemann normal coordinates. The method assumes that curvature effects are taken care of by a scalar effective potential, so that the particle lagrangian is reduced to that of a linear sigma model interacting with the effective potential. After fixing the correct effective potential, we test the construction on spaces of maximal symmetry and use it to compute heat kernel coefficients and type-A trace anomalies for a scalar field in arbitrary dimensions up to d=12. The results agree with expected ones, which are reproduced with great efficiency and extended to higher orders. We prove explicitly the validity of the simplified path integral on maximally symmetric spaces. This simplified path integral might be of further use in worldline applications, though its application on spaces of arbitrary geometry remains unclear.
Introduction
The path integral formulation of quantum mechanics [1] carries a certain amount of subtleties when applied to particles moving in a curved background. These subtleties are the analogue of the ordering ambiguities of canonical quantization, and can be addressed by specifying a regularization scheme needed to make sense of the path integral, at least perturbatively. The action of a nonrelativistic particle takes the form of a nonlinear sigma model in one dimension, and as such it identifies a superrenormalizable one-dimensional quantum field theory. It can be treated by choosing a regularization scheme supplemented by corresponding counterterms, the latter being needed to match the renomalization conditions, i.e. to fix uniquely the theory under study.
While several regularization schemes have been worked out and tested, see [2], in this article we take up an old proposal, put forward by Guven in [3], of constructing the path integral in curved spaces by making use of Riemann normal coordinates. It assumes that in such a coordinate system an auxiliary flat metric can be used in the kinetic term, while a suitable effective potential is supposed to reproduce the effects of the curved space. This construction transforms the model into a linear sigma model. The simplifications expected in having a linear sigma model, rather then a nonlinear one, are rather appealing, and motivated us to investigate the issue further. Indeed, a simplified path integral might be more efficient for perturbative calculations, making worldline applications easier. We shall apply and test the method on spaces of maximal symmetry (e.g. spheres) by perturbatively computing the partition function, and check if it reproduces known results. This happens with a dramatic gain in efficiency. We recall that the partition function on spheres can be used as generating function for the type-A trace anomalies of a scalar field in arbitrary d dimensions. The evaluation of trace anomalies is a typical worldline calculation, performed in [4] up to d = 6 by using the nonlinear sigma model. The linear sigma model allows to reproduce those results and to push the perturbative order much further. We use it to scan dimensions up to d = 12, though one could go higher if needed. Our conclusion is that the method is viable on spaces of maximal symmetry, and indeed we provide an explicit proof of its validity. However, an extension to generic curved spaces is not warranted, as we shall discuss later on.
We structure our paper as follows. We first review the path integral construction in arbitrary coordinates, to put the new method in the right perspective. The action in arbitrary coordinates is that of a nonlinear sigma model, and we seize the opportunity to comment on its use in worldline applications. In Section 3 we review the proposal of ref. [3], and point out that the identification of the effective potential reported in that reference is incorrect (though it could be a misprint). More importantly, we stress that the proof of why the effective potential should work is not given in ref. [3], nor is it contained in the cited references. In some of those references [5,6], see also [7], we have found arguments why the assumption of an effective potential might work perturbatively, at least up to few perturbative orders. Those arguments use the Lorentz symmetry of flat space recursively, and do not seem to apply on generic curved spaces. Thus in Section 4 we restrict ourselves to spaces of maximal symmetry, where those arguments might have a better chance of working. We test the method with the correct effective potential by computing perturbatively the partition function. We find indeed that it reproduces more efficiently known results. Moreover it permits to push the calculations to higher perturbative orders. In Section 5 we use the partition function to extract the type-A trace anomalies for a scalar field in arbitrary d dimensions up to d = 12. This produces further checks on the path integral results. Conforted by this success, we are led to provide an explicit proof of the validity of the simplified path integral on maximally symmetric spaces, which is presented in Appendix A, while Appendix B is left for details on our linear sigma-model worldline calculations.
Particle in curved space
The lagrangian of a nonrelativistic particle of unit mass in a curved d-dimensional space contains just the kinetic term where g ij (x) is the metric in an arbitrary coordinate system. It is the action of a nonlinear sigma model in one dimension, and the corresponding equations of motion are the geodesic equations written in terms of the affine parameter t, the time used in the definition of the velocityẋ i = dx i dt . The corresponding hamiltonian reads where p i are the momenta conjugated to x i . Upon canonical quantization it carries ordering ambiguities, which consist in terms containing one or two derivatives acting on the metric 1 . These ambiguities are greatly reduced by requiring background general coordinate invariance. Since the only tensor that can be constructed with one and two derivatives on the metric is the curvature tensor, the most general diffeomorphism invariant quantum hamiltonian takes the form where ∇ 2 is the covariant laplacian acting on scalar wave functions, and ξ is an arbitrary coupling to the scalar curvature R (defined to be positive on a sphere) that parametrizes remaining ordering ambiguities. The value ξ = 0 defines the minimal coupling, while the value ξ = d−2 4(d−1) is the conformally invariant coupling in d dimensions.
For definiteness let us review the theory with the minimal coupling ξ = 0. Other values can be obtained by simply adding a scalar potential V = ξ 2 R. The transition amplitude in euclidean time β (the heat kernel) is defined with the covariant hamiltonian 2 In the coordinate representation the hermitian momentum acting on a scalar wave function takes the form Further details may be found in the book [2], or in the classic paper [8]. 2 We choose position eigenstates normalized as scalars: , 1 = d d x g(x) |x x|, so that the amplitude K(x, x ; β) is a biscalar.
It solves the Schroedinger equation in euclidean time (heat equation)
and satisfies the boundary condition at β → 0 In eq. (2.6) ∇ 2 x indicates the covariant scalar laplacian acting on coordinates x. The transition amplitude K(x, x ; β) can be given a path integral representation. Using a Weyl reordering of the quantum HamiltonianĤ(x,p) allows to derive a discretized phase-space path integral containing the classical phase-space action suitably discretized by the midpoint rule [9]. The action acquires a finite counterterm V TS of quantum origin, arising form the Weyl reordering of the specific hamiltonian in eq. (2.5), originally performed in [10] (the subscript TS reminds of the time slicing discretization of the time variable). The perturbative evaluation of the phase space path integral can be performed directly in the continuum limit [11] DxDp e −S[x,p] (2.8) with the phase-space euclidean action taking the form To generate the amplitude K(x, x ; β) the paths x(t) must satisfy the boundary conditions x(0) = x and x(β) = x, while the paths p(t) are unconstrained. We recall that perturbative corrections are finite in phase space. The presence of the noncovariant part of the counterterm V TS corrects the noncovariance of the midpoint discretization, and it makes sure that the final result is covariant. These noncovariant counterterms were also derived in [12] (and reviewed in the book [13]) by considering point transformations (i.e. arbitrary changes of coordinates) in flat space. The definition of the corresponding path integral in configuration space encounters more subtle problems. The classical action takes the form of a nonlinear sigma model in one dimension and power counting indicates that, in a perturbative expansion about flat space, it is a super-renomalizable model, with superficial degree of divergence D = 2 − L where L counts the number of loops [2]. Thus, viewing quantum mechanics as a particular QFT in one euclidean dimension one finds that possible divergences may arise at oneand two-loops. Therefore, just like in generic QFTs, one must define a regularization scheme with corresponding counterterms. Usually counterterms contain an infinte part, needed to cancel divergences, and a finite part, needed to match the renomalization conditions. In the present case the counterterms are finite if one includes the local terms arising from the general coordinate invariant path integral measure. Three well-defined regularizations have been studied in the literature, all prompted by the effort of computing QFT trace anomalies with quantum mechanical path integrals [14,15]. The latter extended to trace anomalies the quantum mechanical method used for chiral anomalies in [16][17][18]. In the case of chiral anomalies the presence of a worldline supersymmetry carries many simplifications. However, supersymmetry is not present in the trace anomaly case, and the corresponding quantum mechanical path integrals must be defined with great care to keep under control the full perturbative expansion.
To recall the various regularization schemes let us first notice that in configuration space the formally covariant measure can be related to a translational invariant measure by using ghost fields a i , b i and c ià la Feddeev-Popov Considering a i bosonic variable and b i , c i fermionic variables allows to reproduce the factor g(x(t)) √ g(x(t)) = g(x(t)) in the measure. By Dx, Da, Db and Dc we indicate the translational invariant measure, useful for generating the perturbative expansion (e.g. Dx = 0<t<β d d x(t), and so on). Thus, the path integral for the nonlinear sigma model in configuration space can be written as with the full action taking the form and with V CT indicating the counterterm associated to the chosen regularization. To generate the amplitude K(x, x ; β) the paths x(t) must of course satisfy the boundary conditions x(0) = x and x(β) = x.
The time slicing regularization (TS) in configuration space was studied in [19,20], by deriving it from the phase space path integral, and studying carefully the continuum limit of the propagators together with the rules that must be used in evaluating their products. Indeed one may recall that the perturbative propagators are distributions: how to multiply them and their derivatives together is the problem one faces in regulating the perturbative expansion. This regularization inherits the counterterm V TS in (2.9).
Mode regularization (MR) was employed in curved space already in [14,15]. The complete counterterm was identified in [21] to address some mismatches originally found between TS and MR. With the correct counterterm those mismatches disappeared. The rules how to define the products of distributions in this regularization scheme follows from expanding the quantum fluctuations in a Fourier series truncated by a cut-off, which eventually is removed to reach the continuum limit. Including the vertices originating from the counterterm produces the covariant final answer. Finally, dimensional regularization (DR) was introduced in the quantum mechanical context in [22][23][24]. It needs the counterterm which has the useful property of being covariant. All these regularizations have been extensively tested and compared, see e.g. [25,26]. Extensions to supersymmetric models have been recently discussed again in [27], where the counterterms in all the previous regularization schemes were identified for the supersymmetric nonlinear sigma model with N supersymmetries at arbitrary N . Additional details on the various regularization schemes may be found in the book [2].
The case of trace anomalies provided a precise observable on which to test and verify the construction of the quantum mechanical path integrals in curved spaces, clearing the somewhat confusing status of the subject present in previous literature. With this tool at hand, more general applications of the path integral were possible, in particular in the first quantized approach to quantum fields [28] coupled to gravitational backgrounds, such as the worldline description of fields of spin 0, 1/2 and 1 coupled to gravity [29][30][31][32], the analysis of amplitudes in Einstein-Maxwell theory [33][34][35][36], the study of photon-graviton conversion in strong magnetic fields [37,38], the description of higher spin fields in first quantization [39], as well as worldline approaches to perturbative quantum gravity [40].
A linear sigma model
In the previous section we have reviewed the quantum mechanical path integral for a nonlinear sigma model, that describes a particle moving in a curved space by using arbitrary coordinates. In this section we wish to take up in a critical way an old proposal, put forward by Guven in [3], of constructing the path integral in curved space by using Riemann normal coordinates. The proposal assumes that in Riemann coordinates an auxiliary flat metric can be used in the kinetic term, while an effective potential reproduces the effects of the curved space. This construction aims at transforming the original nonlinear sigma model into a linear one. If correct, it carries several simplifications, making perturbative calculations simpler and more efficient. It may also improve its use in the worldline applications mentioned earlier.
Thus, let us review the considerations put forward in [3]. First of all it is convenient to consider the transition amplitude as a bidensity by defining so that, from (2.6), K is seen to satisfy the equation where ∇ 2 x is the scalar laplacian ∇ 2 = 1 √ g ∂ i √ gg ij ∂ j acting on the x coordinates. The differential operator appearing on the right hand side of eq. (3.2) can be rewritten through a direct computation as where derivatives act through and with the effective potential given by where all derivatives now stop after acting on the last function. At this stage, one may use Riemann normal coordinates (see [41,42], and also [43,44] for their application to nonlinear sigma models). It was claimed in [6] that the Lorentz invariance (rotational invariance in euclidean conventions) of the momentum-space representation of K written in Riemann normal coordinates implies that the g ij in the ∂ i g ij ∂ j operator of (3.4) can be replaced by the constant δ ij . Indeed, in the momentumspace representation of K previously studied in ref. [5] by using Riemann normal coordinates, it was found that in an adiabatic expansion of K the first few terms depended on certain scalar functions, which were functions of δ ij x i x j only (see also the book [7]). However it is not obvious why such a property should hold to all orders. In a curved space Lorentz invariance obviously cannot hold, for example scalar terms proportional to R ij x i x j may also arise (by R ij we consider the Ricci tensor evaluated at the origin of the Riemann coordinates, and by x i the Riemann normal coordinates themselves). Guven in [3] claimed however that in Riemann normal coordinates eq. (3.4) simplifies to while referring to [45] for a proof. Thus he was led to consider the euclidean Schroedinger equation that can be solved by a standard path integral for a linear sigma model However again, in reviewing this construction, we have not been able to find the proof of (3.6) in [45], which does not contain such statements. Also the effective potential used in [3] does not coincide with the one written in eq. (3.5) (even taking care of the different conventions used). In any case, it is the potential in (3.5) that might have a chance of working. Given this state of understanding, we still find the conjecture that "the path integral in curved space can be reduced in Riemann normal coordinates to that of a linear sigma model" to be rather appealing. Also, the reasonings leading to (3.6) has a better chance of working if one considers maximally symmetric spaces, where Lorentz (or rotational) symmetry can indeed be implemented in a suitable sense. This is indeed the case, and we prove in Appendix A that the bidensity (3.1), on a d-dimensional maximally symmetric space described by Riemann normal coordinates satisfies the heat equation with the flat operator (3.4). Thus, in the next sections, we proceed in testing explicitely the path integral construction on spaces of maximal symmetry.
Path integral on maximally symmetric spaces
We wish to test the path integral in Riemann normal coordinates using the linear sigma model of eq. (3.8) and considering maximally symmetric spaces. In particular, we wish to compare it with the path integral calculation done with the nonlinear sigma model and Riemann normal coordinates in [4]. The observable computed there was the transition amplitude at coinciding points K(x, x, β). In the present analysis we use the same notations of ref. [4], except for a change in sign in the Ricci tensors, so to have a positive Ricci scalar on spheres.
On maximally symmetric spaces the Riemann tensor is related to the metric tensor by R mnab = M 2 (g ma g nb − g mb g na ) (4.1) where M 2 is a constant that can be either positive, negative, or vanishing (flat space). The Ricci tensors are then defined by so that the constant M 2 is related to the constant Ricci scalar R by which is positive on a sphere. We want to use Riemann normal coordinates. The expansion of the metric in normal coordinates around a point (called the origin) is obtained by standard methods and reads where x m denote now Riemann normal coordinates and One may compute all terms of the series recursively, and sum the series to get [4] g mn (x) = δ mn + P mn where the projector P mn is defined by Defining the auxiliary functions allows to write the metric, its inverse, and the metric determinant in Riemann normal coordinates as where, on the right hand side of these formulae, indices are raised and lowered with the flat metric δ mn .
We are now ready to consider the linear sigma model (3.8). We wish to evaluate the transition amplitude at coinciding points x = x = 0 (taken to be the origin of the Riemann coordinates) in a perturbative expansion in terms of the propagation time β. To control the β expansion it is useful to rescale the time t → τ = t β so that τ ∈ [0, 1] and the action takes the form The leading term for β → 0 is just the free particle which is exactly solvable. It is notationally convenient to set M = 1, as M can be reintroduced by dimensional analysis. Now we must compute the potential V ef f (x). Using eqs. (4.8) and (4.9), from (3.5) we find which is evaluated to and which expands to The perturbative expansion of the path integral is obtained by setting so that eq. (3.1) reduces to (x = 0 is the Riemann normal coordinate of the origin) where ... denotes normalized correlation function with the free path integral.
Using the free propagator and Wick contractions, we obtain the following perturbative answer (see appendix B for details) with the exponential that can be expanded to identify the first six heat kernel coefficients (also known as Seeley-DeWitt coefficients). Amazingly, it compares successfully with eq. (16) of ref. [4] (taking into account that ξ = 0 and that the sign of R has been reversed). In that reference the calculation was performed up to order (βR) 3 . In the present case those results are reproduced almost trivially, and in fact we have been able to push the calculation to higher orders. For arbitrary d these higher orders are new, as far as we know. In the next section we will further test our coefficients at specific values of d. It is also amusing to note that the path integral result is exact on the 3-sphere, as the effective potential V ef f in eq. (4.12) becomes constant at d = 3. This is as it should be, as the transition amplitude on S 3 is known exactly [46], thanks to the fact that S 3 coincides with the group manifold SU (2).
Type-A trace anomaly of a scalar field
A further test is to use our results to compute the type-A trace anomaly of a conformal scalar field. Trace anomalies characterize conformal field theories. They amount to the fact that the trace of the energy-momentum tensor for conformal fields, which vanishes at the classical level, acquires anomalous terms at the quantum level. These terms depend on the background geometry of the spacetime on which the conformal fields are coupled to, and they are captured by the appropriate Seeley-DeWitt coefficient sitting in the heat kernel expansion of the associated conformal operator, see [47] for example.
A simple way to obtain this relation is to view the trace anomaly as due to the QFT path integral measure, so that it is computed by the regulated Jacobian arising from the Weyl transformation of the QFT path integral measure [48]. For a scalar field the infinitesimal Weyl transformation δ σ g mn (x) = σ(x)g mn (x), applied to the one-loop effective action, yields Tr σe −βR (5.1) where the consistent regulator R, that appears in the exponent, is just the conformal operator associated to the scalar field, and reads It can be identified as the hamiltonian operator (2.3) for a non-relativistic particle in curved space. Therefore, one identifies the trace anomaly in terms of a particle path integral by where it is understood that the limit picks up just the β-independent term-divergent terms are removed by QFT renormalization. This procedure selects the appropriate Seeley-DeWitt coefficient sitting in the expansion of K(x, x; β).
Trace anomalies have been classified as type-A, type-B and trivial anomalies in [49]. On conformally flat spaces the type-B and trivial anomalies vanish, so that only the type-A anomaly survives. It is proportional to the topological Euler density, and its coefficient enters the so-called c-theorem of 2 dimensions [50] and a-theorem of 4 dimensions [51] at fixed points. These theorems capture the irreversibility of the renormalization group flow in 2 and 4 dimensions. Their extension to arbitrary even dimensions has been conjectured, but not proven (see also [52] for a more general conjecture).
We are going to use the previous results on the sphere (a conformally flat space) to calculate the type-A trace anomaly for a scalar field in arbitrary dimensions up to d = 12, which will serve as a further test on the linear sigma model approach of the previous section. Using the expansion obtained in the previous section, and choosing x as the origin of the RNC coordinate system, we have by definition g mn (x) = δ mn in Riemann normal coordinates, and so that expanding (4.17) (recall that there x = 0 indicates the origin of the RNC), and picking the β 0 term in the chosen dimension d, we obtain the trace anomalies for a conformal scalar field in d dimensions reported in Table 1, where the second form is written in terms of a 2 = 1 R to directly compare with the results tabulated in [53]. The comparison is successful, except at d = 12, where our respective coefficients differ by a number of the order of 10 −13 . Our result is correct as using the zeta function approach employed in [53,54] we have been able to reproduce our findings 3 .
Conclusions
We have tested a method of computing the path integral for a particle in curved spaces in Riemann normal coordinates that employs a linear sigma model action with an additional scalar effective potential. This method was proposed by Guven in [3], but with assumptions whose proof were not given. We have checked the method by restricting it to maximally symmetric geometries, and found that indeed it reproduces correct results in a quite efficient way. In particular, we have used it to obtain the first six Seeley-DeWitt coefficients at coinciding points for the d-dimensional sphere (more generally, for maximally symmetric spaces), and computed the type-A trace anomaly for a scalar field up to d = 12. This helped us also to correct a wrong value for the trace anomaly of a scalar field in twelve dimensions reported in ref. [53]. The success of the simplified path integral on maximally symmetric spaces has led us to search for a simple proof of its validity, which we have found and reported in Appendix A.
It would be interesting to extend the present method to supersymmetric nonlinear sigma models, so to consider fields of spin 1/2 and 1, if not higher, in worldline applications, or to consider curved spaces with boundaries, following the path integral treatment of refs. [55,56] which dealt with flat space only.
As for arbitrary geometries, we cannot say much at this stage. If a proof of the crucial relation used in constructing the path integral cannot be produced, one may still test it by a perturbative computation at sufficiently high order. We wish to be able to report on this subject in a near future.
A A simple proof in maximally symmetric spaces
Here we give a simple proof that the bidensity (3.1) satisfies the heat equation in a maximally symmetric space described by Riemann normal coordinates. For this to be true we must show that the "curved" differential operator (3.4) acts on (3.1) identically as the "flat" operator (3.6), i.e.
Taking x = 0 as the origin of the Riemann normal coordinates, and using (4.7) and (4.9), the left hand side of (A.2) reduces to In maximally symmetric spaces, all curvature tensors are given algebraically in terms of the metric and of the constant scalar curvature R, see eqs. (4.1)-(4.3), so that by symmetry arguments the bidensity K(x, 0; β) can only depend on the coordinates through the "scalar" function x 2 = δ ij x i x j . Therefore, using the orthogonality condition P ij x j = 0, one gets Therefore, (A.2) is proven. Casting (A.1) in the form of a path integral is now immediate. | 6,328 | 2017-02-14T00:00:00.000 | [
"Mathematics"
] |
SVR-EEMD: An Improved EEMD Method Based on Support Vector Regression Extension in PPG Signal Denoising
Photoplethysmography (PPG) has been widely used in noninvasive blood volume and blood flow detection since its first appearance. However, its noninvasiveness also makes the PPG signals vulnerable to noise interference and thus exhibits nonlinear and nonstationary characteristics, which have brought difficulties for the denoising of PPG signals. Ensemble empirical mode decomposition known as EEMD, which has made great progress in noise processing, is a noise-assisted nonlinear and nonstationary time series analysis method based on empirical mode decomposition (EMD). The EEMD method solves the “mode mixing” problem in EMD effectively, but it can do nothing about the “end effect,” another problem in the decomposition process. In response to this problem, an improved EEMD method based on support vector regression extension (SVR-EEMD) is proposed and verified by simulated data and real-world PPG data. Experiments show that the SVR-EEMD method can solve the “end effect” efficiently to get a better decomposition performance than the traditional EEMD method and bring more benefits to the noise processing of PPG signals.
Introduction
PPG [1] is a promising biometric technique based on Lambert-Beer's law [2] and the difference in spectral absorption characteristics of human skin and blood to convert optical signals into blood volume and blood flow information. It can be used for noninvasive detection of microvascular blood flow changes, providing quantities of possibilities in detecting blood volume and blood flow parameters [3][4][5]. Unfortunately, the noninvasiveness of PPG has both advantages and disadvantages: PPG signals are susceptible to disturbances from external environment and thus it causes inaccuracies to the measured results and those disturbances, including respiratory activities (RA), motion artifacts (MA), power line interference, and high-frequency noise generated by electronic components, tend to cause PPG signals to be doped with nonlinear and nonstationary components, which can result in spectral aliasing and distortion when processed with traditional methods. e EMD method proposed by Huang et al. [6] in 1998 decomposes the time series into a set of intrinsic mode functions (IMFs), and noise can be eliminated by selecting appropriate IMFs. However, some drawbacks impede its further development. Several years later, a more powerful ensemble EMD [7] method called EEMD is presented and solves the "mode mixing" problem, one of the major drawbacks of the original EMD.
e EEMD method has proven to be quite versatile in a broad range of applications such as geology [8,9], banking [10], machinery [11,12], and medicine [13] for extracting signals from data generated in noisy processes. Respecting the denoising of PPG signals, lots of researches have also been carried out. Sweeney et al. [14] used EEMD with canonical correlation analysis to remove artifacts both from electroencephalography (EEG) and functional near infrared spectroscopy (fNIRS) single channel data; Liao et al. [15] used the EEMD method to achieve accurate analysis for PPG signals and implemented it on a specific platform; Chuang et al. [16] analyzed the highfrequency band (0.4-0.9 Hz) of IMF 5th decomposed by EEMD to measure pulse rate variability (PRV); Motin et al. [17] proposed an algorithm based on EEMD with principal component analysis (EEMD-PCA) as a novel approach to estimate heart rate (HR) and respiratory rate (RR) simultaneously from PPG signals; Sadrawi et al. [18] used PPG data corrupted by vertical MA noise to evaluate the performance of EEMD filtering. e EEMD method overcomes the "mode mixing" problem in EMD, but it does not consider the second problem existing at the same time: "end effect," which causes the two ends of the time series to diverge when spline interpolation. In order to solve this problem, this paper proposes an improved EEMD method (SVR-EEMD) based on support vector regression extension and verifies its denoising performance by simulated data and real-world PPG data. is paper will first describe the experimental materials and introduce the principle of the SVR-EEMD method and its implementation steps. en, we will report the results of the proposed method on the simulated data and real-world PPG data and compare the denoising performance of different methods, and further advices on the necessary research are also discussed. Finally, we will draw the conclusion part to clarify the effectiveness and efficiency of this method.
Simulated Data Acquisition.
e simulated signal which is sampled at 1 kHz for a duration of one second consists of a sinusoidal signal of 5 Hz and a cosine signal of 20 Hz. It can be expressed by equation (1), where n(t) is the superimposed Gaussian white noise to ensure that the signal-to-noise ratio (SNR) of the simulated signal is 15 dB: y(t) � sin(2 * π * 5 * t) + cos(2 * π * 20 * t) + n(t). (1) e SNR is calculated by equation (2), in which s(t) is the signal component that is equal to the first two parts on the right-hand side of equation (1) and n(t) is the noise component. Accordingly, we can calculate the noise intensity which is 0.0316 in the simulated signal:
Real-World PPG Data
Acquisition. e real-world PPG data are obtained from BIDMC PPG and Respiration Dataset of PhysioBank, which is supported by the National Institute of Medical Sciences (NIGMS) and National Institute of Biomedical Imaging and Bioengineering (NIBIB) and whose data were originally acquired from critically ill patients during hospital care at the Beth Israel Deaconess Medical Centre (Boston, MA, USA) [19,20]. ere are a total of 53 sets of patient data in the dataset, each of which records some basic information of the patient and a series of physiological data of certain duration. ese physiological data include respiratory activity data, EEG data, PPG data, and so on. We picked 10 sets (2, 5, 33, 34, 37, 38, 43, 45, 50, and 53) of PPG data to carry out the real-world PPG data experiment of this study.
2.3.
e Proposed SVR-EEMD Method. e SVR-EEMD method can be generally implemented by two steps: firstly, construct a training set based on the original signal to train the SVR model and use the trained SVR model to extend a finite number of maxima and minima time series, respectively, to the left and right ends of the original signal; then the EEMD algorithm is performed on the extended signal and appropriate IMFs are selected for reconstruction when the extension part is truncated. e implementation process is shown in Figure 1.
Signal Extension Based on Support Vector Regression.
Support vector regression is a "tolerant" regression model, which maps the data x ∈ R n to a high-dimensional feature space H through a nonlinear mapping function φ and performs the linear regression in this space correspondingly [21]. It can be abstracted into the following expression: where w is the normal vector of the regression hyperplane and b is the threshold. Based on this algorithm, we extend the time series by steps (1) to (4): (1) Construct a training set T � {(x 1 , y 1 ), . . ., (x n , y n )} using the left-end data of the time series (2) Select precision parameter "ε", error penalty factor "C", loss function "e," and kernel function k(x i , x j ) to construct the SVR model where a * i , a i i � 1, 2, . . . , n are Lagrange multipliers and only a small part which corresponds to the so-called support vector (SV) is not zero In this step, the SMO [22] algorithm, the key point of which is to decompose a complex optimization problem into several suboptimization problems that are often easy to solve, is performed by iteratively selecting subsets of {a * i , a i } only of size 2, leaving all the other kept fixed and optimizing the suboptimization problems of equation (5) e threshold b is derived by 2 Computational and Mathematical Methods in Medicine where N SV is the number of support vector samples (3) Use the trained SVR model to extend a finite number of maxima and minima points to the left end of the time series (4) By repeating steps (1)-(3) for the right end data, we can get the left and right ends of the time series being extended Considering that SVR model is to seek a linear regression function to fit all the samples to minimize the total variance of the sample from the hyperplane, we let C equal to infinity and ε be zero to improve the regression accuracy. Furthermore, we have also used the commonly used ε-insensitivity loss function and linear kernel function for the sake of convenience.
Signal Decomposition and Reconstruction Based on EEMD.
In the first step of EEMD, an independent identically distributed and zero mean white noise whose intensity (N p ) should match the noise intensity in the signal as much as possible is added and then EMD is applied to drive a set of IMFs. ese steps are repeated for N times to conclude an ensemble of IMF sets and, finally, the ensemble should be averaged to receive one set of IMFs.
EMD of the main work is performed basically by a sifting process as follows: (1) Assign the original signal to y(t).
(2) Find the local maxima and minima of the signal y(t).
(3) Interpolate (cubic spline interpolation here) between the local maxima and minima to generate upper and lower envelops: e max (t) and e min (t). (4) Subtract the mean value of envelops from y(t) (5) Calculate the sift relative tolerance (rtol), the stop criterion of IMF, which is set to 0.2 in this paper where c i (t) and c i− 1 (t) denote the current and previous c(t), respectively. (6) Determine if rtol is less than 0.2, and if so, terminate the loop and treat the current c(t) as an IMF; otherwise assign c(t) to y(t) and continue iterating the steps from (2) to (6) (7) Subtract c(t) from the original signal and repeat the steps from (1) to (7) until y(t) can never be decomposed, then the original signal can be expressed as where n is the total number of IMFs and r(t) is the residual component Typically, the original signal will be decomposed into several IMF components, and the first few correspond to the high-frequency band of the time series and the last few correspond to the low-frequency band. As a result, we can Figure 1: Implementation process of the proposed SVR-EEMD method. e left part describes the signal extension procedure, and the right part describes the signal decomposition procedure.
obtain the denoised signal by selecting the target IMFs for reconstruction based on the signal and noise frequency distribution characteristics; that is to say, if the noise frequency is in high-frequency band or higher than the signal frequency, we can zero the first few IMFs and reserve the other IMFs where the signal is located and vice versa.
Results and Discussion
To demonstrate the denoising performance of the proposed SVR-EEMD method, we applied it to the simulated data and real-world PPG data. For the simulated data, we use SNR and correlation coefficient (Corr) to evaluate the effectiveness of this method and select precision rate (P) and recall rate (R) of the pulse wave peak as estimations of this method on the real-world PPG data.
Experiments for the Simulated Data.
We choose N � 100 and N p � 0.0316 to make sure that the EEMD and SVR-EEMD methods are under the same decomposition condition. Figure 2 depicts the IMF components of the simulated signal decomposed by those two methods in detail.
In Figure 2, we can discover that, first, unlike the general mirror extension or zero-padding operation, our SVR model can predict the previous and future trend of the signal and extend it accurately. Second, the IMFs are arranged in order of frequency from high to low and in this decomposition, IMF 3rd and IMF 4th correspond to the signal components of the simulated signal with frequencies of 20 Hz and 5 Hz, respectively, while the other corresponds to the noise components.
ird, all the IMFs (left) decomposed by EEMD have different degrees of divergence at the left and right ends, especially the left. In contrast, the SVR-EEMD method suppresses this effect to a large extent. Figure 3 compares the processed signals by FIR low-pass filter (cutoff frequency at 22 Hz), EEMD, and SVR-EEMD method. Due to the "end effect," the signal reconstructed by EEMD has severe distortion at both sides and the filtered data also have small deviation from the original signal because of phase shift, in which circumstances only the signal reconstructed by SVR-EEMD maintains a high degree of consistency with the original signal as the left and right subgraphs show. We calculated the SNR and Corr listed in Table 1, which proves the SVR-EEMD an effective method to significantly suppress the "end effect" and filter out noise in the signal. Figures 4(a) and 4(b) briefly describe the time-frequency distribution of the PPG signal of the patient 25 during the 345-370 s period. We can see that there was a strong motion disturbance (red arrow) around the 362 s and the PPG signal was completely submerged in the noise. In addition, we can clearly see that the respiratory activity (red elliptical area) is superimposed on the PPG signal, which is also confirmed in Figure 4(b). In Figure 4(b), the respiratory rate is about 0.27 Hz consistent with the dataset record and the signal also contains a large number of harmonics in addition to PPG signal (about 2.08 Hz). We use the proposed method (N � 30, N p � 0.6) to decompose the data, and results are shown in Figure 5(a). Additionally, we draw the power spectral density (PSD) map of each IMF in Figure 5(b).
Experiments for the Real-World PPG Data.
It can be seen from Figure 5(a) that the left and right ends of the original PPG signal are accurately extended by three peaks (red rectangular area) after the SVR extension, and the extended signal is decomposed into seven IMF components in different frequency bands by EEMD. Among the IMFs, there is no divergence at each component, which proves that the SVR-EEMD method can solve the "end effect" problem in EMD when decomposing PPG signals. From the perspective of IMF frequency, IMF 1st and IMF 2nd are mainly random noise and harmonics with relatively higher frequency and lower intensity compared with IMF 3rd and IMF 4th (the maximum intensities of IMF 1st and IMF 2nd are 0.01 and 0.72 with corresponding frequencies of 12.57 Hz and 8.33 Hz, respectively, while the maximum intensities of IMF 3rd and IMF 4th are 126.5 and 245.9 with corresponding frequencies of 4.18 Hz and 2.08 Hz, respectively). IMF 4th is the peak position of PPG signal whose details can be found in IMF 3rd . IMF 6th and IMF 7th are the least lower frequency bands corresponding to the respiratory activity elliptically annotated in Figure 4(a), and the frequency of IMF 5th is the most mixed with two distinct ripples at 362 s and 369 s. We reconstructed the PPG signal, respiratory signal, and interference signal shown in Figure 6 with these IMFs. It can be found that the two evident motion artifacts (red elliptical area) in the original signal have been decomposed into the MA signal, and the reconstructed RA signal (black solid curve) is also in good agreement with the respiratory activity (red dotted curve) recorded in the dataset. Compared with the original signal, the reconstructed PPG signal not only filters out most of the interference but also successfully recovers the PPG signal (red rectangular area) that is submerged in MA noise. However, at the 362 s moment, the interference is too strong to recover the PPG signal clearly but enough to detect the PPG peak position.
We count the ratio of the number of successfully recognized peaks to the total number recognized as the precision rate and to the actual number in PPG data as the recall rate to verify the performance of the FIR filter (cut-off frequency at 12 Hz according to PPG signal frequency range), EEMD, and SVR-EEMD method again using the patient data, and the results are listed in Table 2.
It can be found statistically from Table 2 that the EEMD method is slightly better than the FIR filter in terms of precision and recall. For data of patients 2, 5, 34, 38, 43, and 50, the EEMD method works better than the FIR filter, while for patients 33 and 45, the FIR filter does indeed better than the EEMD method. Unsurprisingly, the SVR-EEMD method is more often outstanding than the previous two methods. e reasons we analyzed for this result may be that first, the frequency components in PPG signals of different patients are different, especially those whose pulse rate is extremely unstable, resulting in different methods with different treatment results; second, the "end effect" causes the signal to diverge during decomposition and leads to false peaks or missing peaks, to make matters worse, and this divergence may penetrate into the signal and contaminate the entire data sequence. ird, the random nature of the auxiliary added Gaussian white noise may cause large fluctuations at a certain position of the signal, which could make the 345 350 355 360 365 370 345 350 355 360 365 370 345 350 355 360 365 370 345 350 355 360 365 370 345 350 355 360 365 370 345 350 355 360 365 370 345 350 355 360 365 370 345 350 355 360 365 370 IMF1 IMF2 IMF3 IMF4 IMF5 IMF6 effectiveness of the EEMD method not as effective as the FIR filter. Furthermore, we calculated the correlation coefficient and mean delay time (MDT) of the data processed by those three methods as shown in Table 3. e mean delay time is calculated by equation (10), where n is the total number of peaks for successful recognition at t Eig in the processed data and t Eig ′ is the time at which the peaks of the original data are located is defined as the average of the sum of the absolute time difference between the successfully identified peaks and the corresponding peaks in the original data: We can see that, due to the phase shift effect, the FIR filtered data have a significant time delay phenomenon, thus with a relatively lower correlation coefficient. Inversely, the EEMD and SVR-EEMD methods have higher correlation coefficients while achieving lower latency. Moreover, the SVR-EEMD method solves the "end effect" problem and improves both the MDT and Corr indicators. Table 2, the EEMD method is generally better than the FIR filter, except for patients 33 and 45.
Discussion. In
Taking the data of patient 45, Figures 7(a) and 7(b), respectively, describe in detail the comparison of the left and right ends between the processed data and the original data.
It can be seen that the data filtered by the FIR filter have a serious phase delay problem, which is the reason why the MDT is longer and the Corr is lower in Table 3. e left and right ends of the data reconstructed by the EEMD method also have different degrees of divergence and deviate from the original data trend. Even worse, a false peak appears in the right end data, which reduces the precision and recall of the EEMD method to some extent. In contrast, the SVR-EEMD method does not have these two problems and has achieved good results.
In addition, the intensity of noise superimposed on the signal has an important influence on the decomposition effect of the EEMD method. For the simulated signal, we can calculate the relative energy of noise and select appropriate noise intensity. However, for the real PPG data, we have no prior knowledge of the noise in the data, but we can estimate the noise intensity distribution range by posterior statistics. Figure 8 shows the averaged correlation coefficient, precision, and recall of the 10 sets of data processed by the SVR-EEMD method applying different noise intensities ranging from 0.15 to 2.25 and the suitable noise intensity range is 0.75-1.25, the key point of which is that how much it should be applied needs further study. Although the phase shift characteristic of the FIR filter makes the filtered data less correlated with the original data, the filter is simpler and easier to use. If a low phase shift or zero phase shift filter is used, the result will be improved, but the signal and noise in the data cannot be decomposed into different intrinsic mode functions like the EEMD method does.
Conclusions
In order to solve the "end effect" problem in the EEMD method, this paper proposes an SVR-EEMD method based IMF 1st + IMF 2nd + IMF 5th Figure 6: MA, RA, and PPG signals reconstructed by the SVR-EEMD method. e red elliptical areas are two ripples decomposed from the original signal, and the red dotted curve presents the recorded respiratory activity in the dataset. e two red rectangles indicate the recovered peaks of the reconstructed PPG signal by IMF 3rd plus IMF 4th . on support vector regression extension and applies it to the denoising of PPG signals. Both simulated data and realworld PPG data are used to compare the denoising performance of the FIR low-pass filter, EEMD, and SVR-EEMD methods. For the simulated data, the SNR of which processed by the SVR-EEMD method improves nearly three times higher with a correlation coefficient over 0.99. For the real-world PPG data processed by the SVR-EEMD method, not only the precision and recall are higher than the other two methods but also it maintains high consistency with the original PPG data. e results of the simulated data and realworld PPG data prove that the proposed method can overcome the "end effect" problem of the traditional EEMD method in decomposition, which can improve the decomposition performance and bring beneficial results for nonlinear and nonstationary signal analysis.
Data Availability
e simulated data used to support the simulation part of this study are available from the corresponding author upon request, and the real-world PPG data can be obtained from BIDMC PPG and Respiration Dataset of Physio Bank at https://www.physionet.org/physiobank/database/bidmc.
Conflicts of Interest
e authors declare that there are no conflicts of interest regarding the publication of this paper. Computational and Mathematical Methods in Medicine 9 | 5,198.2 | 2019-12-12T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Interferon-stimulated TRIM69 interrupts dengue virus replication by ubiquitinating viral nonstructural protein 3
In order to eliminate viral infections, hundreds of interferon-stimulated genes (ISGs) are induced via type I interferons (IFNs). However, the functions and mechanisms of most ISGs are largely unclear. A tripartite motif (TRIM) protein encoding gene TRIM69 is induced by dengue virus (DENV) infection as an ISG. TRIM69 restricts DENV replication, and its RING domain, which has the E3 ubiquitin ligase activity, is critical for its antiviral activity. An in vivo study further confirmed that TRIM69 contributes to the control of DENV infection in immunocompetent mice. Unlike many other TRIM family members, TRIM69 is not involved in modulation of IFN signaling. Instead, TRIM69 interacts with DENV Nonstructural Protein 3 (NS3) directly and mediates its polyubiquitination and degradation. Finally, Lys104 of NS3 is identified as the target of TRIM69-mediated ubiquitination. Our study demonstrates that TRIM69 restricts DENV replication by specifically ubiquitinating a viral nonstructural protein.
Introduction
Recently, mosquito-borne viral diseases become global threats to human health. As the most significant mosquito-borne viral pathogen, Dengue virus (DENV) is responsible for outbreaks of dengue fever (DF), dengue shock syndrome (DSS), and dengue hemorrhagic fever (DHF). DENV causes millions of infections in over 100 countries annually, resulting in more than 25,000 deaths [1,2]. A DENV vaccine was recently licensed for use after several decades of efforts, however, it confers only partial cross protection for all DENV serotypes [3,4]. Additionally, there is still no antiviral drugs have been approved to treat DENV induced diseases [5,6].
The tripartite motif family members (TRIMs) share three conserved domains, an N-terminal Really Interesting New Gene (RING) domain, one or two B-Boxes (B1/B2) and a coiledcoil (CC) domain. TRIM proteins are implicated in multiple cellular functions, ranging from transcriptional regulation to post-translational modifications involved in various cellular processes, such as cell differentiation, apoptosis and oncogenesis [30]. TRIM proteins have been long predicted to be part of the innate immune pathway. In line with this, recent studies show that an increasing number of TRIM proteins are recognized as ISGs and mediate antiviral activities [31][32][33]. The antiviral activities of TRIM proteins depend, for the most part, on their function of E3-ubiquitin ligases activity. TRIM38 sumoylates cGAS and STING during the early phase of virus infection to promote the stability of these two proteins [34]. TRIM56 inhibits bovine viral diarrhea virus (BVDV) replication by targeting intracellular viral RNA replication [35]. TRIM5a is responsible for post-entry restriction of diverse retroviruses, including N-MLV and HIV-1 [32,36]. TRIM5α blocks HIV-1replication by targeting the capsid and promoting its rapid, premature disassembly [37]. At the same time, TRIM5α stimulates innate immune signaling by catalyzing the synthesis of unanchored K63-linked poly-ubiquitin chains that bind and activate TAK1-dependent NF-κB [32]. TRIM22 inhibits HIV-1 by down-regulating the viral long terminal repeat-directed transcription [38,39]. Some TRIM proteins restrict viral replication by directly targeting viral proteins. TRIM22 has been reported to interact with HIV-1 Gag protein, EMCV 3C protease, influenza virus nucleoprotein and HCV NS5A, resulting in inhibition of viral replication [40][41][42][43]. TRIM79α inhibits tick-borne encephalitis virus via targeting the viral RNA-dependent RNA polymerase, NS5, for lysosomal degradation [44]. Although knowledge on the cellular roles of TRIM E3 ubiquitin ligases has rapidly grown over the last years, many aspects of their molecular functions remain unclear.
Here, we identified another TRIM family member, TRIM69 (also known as RNF36, HSD34, and Trif) as an IFN-inducible virus restriction factor. As an E3 ubiquitin ligase [45], TRIM69 plays crucial roles in apoptosis [46], tumor control [47] and zebrafish development [48,49]. However, TRIM69 has not been reported to have any function on antiviral immunity. In this study, we demonstrated that TRIM69 is an IFN-stimulated gene and restricts DENV replication in vitro and in vivo. TRIM69 directly interacts with viral NS3 and results in NS3 degradation by proteasomes. Thus, TRIM69 is a novel IFN inducible restriction factor for DENV.
TRIM69 is upregulated upon DENV infection
To evaluate the mechanisms of how host cells resist a pathogenic microorganism, RNA-Seq was performed to screen out the host factors involved in DENV-2 infection. 152 mRNAs were significantly changed after DENV infection in 293T cells (99 mRNAs were induced, while the others were decreased). As expected, many genes related to antiviral innate immune signal pathway were found out as shown by gene cluster analysis (S1 Fig). Furthermore, 52 of the 99 upregulated genes upon DENV-2 infection were predicted as ISGs by searching the Interferome V2.01 database (www.interferome.org). Many well-known ISGs, such as DDX58, IRF9, ISG15 and STAT1, were screened out after DENV-2 infection (S1 Table). The mRNA expression of TRIM69, together with five other putative ISGs (LGALS3BP, C19ORF66, DDX60, FBXO15, and HELZ2) was confirmed to be significantly upregulated after DENV-2 infection (Fig 1A, S2A Fig and S1 Table). The protein level of TRIM69 was also increased with DENV-2 infection in a virus dose-dependent manner ( Fig 1B). Consistent with this, TRIM69 was also upregulated in A549, HUVEC and PBMC cells infected with DENV-2 ( Fig 1C). In addition, the expression of TRIM69 was increased in peripheral blood cells from DENV-2-infected mice ( Fig 1D). When HUVEC and HFF cells were stimulated with SeV, TRIM69 was also upregulated ( Fig 1E).
A previous study reported the expression of TRIM family genes in response to interferons in immune cells. TRIM69 was identified as one of 27 TRIM genes which were induced by interferons [50]. Consistent with their results, we also found that the expression of TRIM69 mRNA and protein were induced in 293T, HUVEC and HFF cells upon IFN-β stimulation (Fig 1F and 1G). Four of other five selected ISGs, LGALS3BP, C19ORF66, DDX60, and HELZ2, were also induced in 293T cells stimulated with IFN-β (S2B Fig). Taken together, these data further confirmed that TRIM69 is an ISG induced by type I IFN and virus infection.
DENV replication is restricted by TRIM69
To explore the function of TRIM69 on DENV replication, cells were transfected with TRIM69-Myc and then infected with DENV-2. Transient overexpression of TRIM69 did not cause noticeable cell toxicity at 72 h post transfection (Fig 2A). qRT-PCR and Western Blot results suggested that the viral RNA and proteins were significantly decreased in TRIM69 overexpressed cells compared with control cells (Fig 2B and 2C). Immunofluorescence (IF) assay also confirmed that the viral NS3 and NS4B protein levels were significantly decreased in TRIM69 overexpressed cells (Fig 2D). Consistently, the released viruses in cell supernatants were also decreased in TRIM69 overexpressed cells (Fig 2E). In addition, we also used a luciferase-based DENV replicon (DGL2), derived from DENV-1 [51,52], to analyze the function of TRIM69 on virus replication. The replicon replication was also impaired by TRIM69 ectopic expression (Fig 2F).
To further confirm this phenotype, two TRIM69 knockdown shRNAs (sh69-1 and sh69-2) were constructed. Silencing TRIM69 by shRNA transfection did not influence cell viability ( Fig 2G). The abundance of both DENV NS3 and NS4B proteins was significantly increased in two TRIM69 shRNAs transfected 293T cells after DENV-2 infection (Fig 2H). The virus titers in cell supernatants were also increased in TRIM69 silenced cells (Fig 2I). In addition, a TRIM69 knockout stable cell line was generated by CRISPER/Cas9 system, and the replication of DGL2 replicon was significantly increased in TRIM69 knockout cells compared with controls ( Fig 2J). Furthermore, the replication of DGL2 was also increased in TRIM69 silenced Huh7.0 stable cell line ( Fig 2K).
Since TRIM69 is induced by IFNs and plays a role to restrict DENV infection, we wondered whether TRIM69 is critical for the efficacy of IFN on DENV inhibition. Viral replication assays suggested that IFN-β treatment could not efficiently suppress DGL2 replication in TRIM69 silenced cells (Fig 2L). This demonstrates that TRIM69 is critical for IFN mediated anti-DENV activity.
To investigate whether mouse TRIM69 has the same function as its human homolog, mTRIM69 was overexpressed or silenced in mouse B16F10 cells. The results suggested that viral proteins expression and viral titers of DENV were significantly decreased in mTRIM69myc transfected B16F10 cells compared with controls (S3A Fig). Furthermore, DENV infection was also obviously increased in mTRIM69 silenced cells (S3B and S3C Fig). Altogether, these data illustrate that both human and mouse TRIM69 protein acts as antiviral factors for DENV replication.
The E3 ubiquitin ligase activity of TRIM69 is required for DENV restriction
TRIM69 is a TRIM family member containing a RING domain with E3 ubiquitin ligase activity. We next tested whether the E3 ubiquitin ligase activity of TRIM69 is necessary for DENV inhibition.
TRIM69 CA, a mutant TRIM69 with the catalytic amino acids Cys61 and Cys64 of the RING domain substituted by two Alanines, loses its E3 ubiquitin ligase activity [53]. Cell lines stably expressing TRIM69-Flag and TRIM69 CA-Flag were generated using pLV-Flag vector by antibiotics selection. Western Blots confirmed that stable cell lines expressed higher level of TRIM69 (or TRIM69 CA) compared with endogenous TRIM69 (Fig 3A). After DENV-2 infection, the abundance of viral NS4B, as shown by Western Blots (Fig 3A) and IF assay (Fig 3B), was decreased in TRIM69 expressing cell line, but not in TRIM69 CA cells. The virus titers from the cells stably expressing TRIM69 were lower than the control or TRIM69 CA ( Fig 3C). In line with this, DGL2 replication was also impaired in TRIM69, but not TRIM69 CA, overexpressed cells (Fig 3D). When the cells treated with MG132, a proteasome inhibitor, the content of NS4B was recovered in TRIM69 overexpressed cell after DENV infection ( Fig 3E). These knockout 293T cells. TRIM69 -/cell line was generated by CRISPER/Cas9 system. (K) DGL2 replicon replication in Huh7.0 cells with TRIM69 knockdown. (L) The inhibitory efficiency of IFN-β on DGL-2 replication in normal or TRIM69 knockdown 293T cells (left). The TRIM69 mRNA levels in these cells were indicated by qRT-PCR (right). Results are expressed as mean ± SEM. Ã p < 0.05, ÃÃ p < 0.01, and ÃÃÃ p < 0.001. The data shown are representative of at least 3 independent experiments. results indicate that the E3 ubiquitin ligase activity of TRIM69 is critical for its antiviral activity.
Knockdown of TRIM69 renders mice susceptible to DENV infection
Previous study suggested DENV causes a transient infection in immunocompetent mice with detectable virus in various organs [54]. To explore the function of TRIM69 on DENV in vivo, shm69-1 and shNC lentiviruses were generated and used to inject into mice via caudal vein. 7 days post lentivirus infection, mice were challenged with DENV-2 by intravenous injection. qRT-PCR and western blot suggested that mouse TRIM69 was silenced by shm69-1 lentiviruses in mouse lung, spleen and kidney (Fig 4A and 4B). In consistent with the in vitro data, both DENV RNA level ( Fig 4C) and virus titers ( Fig 4D) were significantly increased in organs from TRIM69-silenced mice. These data further confirmed that TRIM69 is an important host antiviral factor against DENV in vivo. To test whether TRIM69 also restrict other virus infection, TRIM69 overexpressing or control cells were infected with influenza virus H1N1 (an RNA virus) or herpes virus HSV-1 (a DNA virus), respectively. The results suggested that TRIM69 did not interfere with H1N1 or HSV-1 infection (S4A and S4B Fig). TRIM69-silenced mice showed similar susceptibility with wide type mice to H1N1 infection (S4C and S4D Fig). These data suggest that TRIM69 may play a specific antiviral activity against DENV.
DENV NS3 is specifically targeted by TRIM69
Several members of TRIM family proteins were reported to restrict viral replication by modulate the IFN pathways. We next tested whether TRIM69 is involved in IFN or ISG activation. Results suggested that overexpressing or silencing TRIM69 did not significantly influence SeV-induced IFN or ISG production (S5 Fig). This is also consistent with previous report by Versteeg G et al., that TRIM69 does not modulate either IFN production or ISG expression [33].
To further elucidate the mechanisms of TRIM69 on DENV inhibition, immunoprecipitation and mass spectrometry (IP-MS) were performed to find out proteins that interact with TRIM69 during DENV infection ( Fig 5A and S6A Fig). Three viral proteins, NS3, NS4B and NS5, were pulled down by TRIM69-Flag coupled beads but not with beads alone (S6B Fig). Three peptides of NS3, one peptide of NS4B, and one peptide of NS5 were identified by IP-MS ( Fig 5B). These three viral proteins were co-expressed in 293T cells with or without TRIM69-Myc. The result suggested, the abundance of NS3, but not NS4B or NS5, was significantly reduced in TRIM69 overexpressed cell compared with controls ( Fig 5C). Moreover, the ectopic expression of NS3 was increased via TRIM69 knockdown by sh69-2 ( Fig 5D). This suggests that NS3 is a target for TRIM69.
NS3 forms a protease complex with NS2B, not only responsible for cleavage of viral polyprotein, but also for immune evasion [19,20]. NS2B3 could specifically cleave human STING, play a role to escape STING mediated antiviral pathway. We then tested whether TRIM69 also influences the cleavage activity of NS2B3 on STING. We found that TRIM69 significantly reduced the amount of NS2B3 protein, thereby impaired the cleavage of STING (S7A Fig). These data further suggest that TRIM69 targets NS3 and modulates NS3 function.
TRIM69 interacts with DENV NS3
To confirm the interaction between DENV NS3 and TRIM69, the cellular distribution of NS3 and TRIM69 was examined by confocal microscopy. When co-expressed with TRIM69, NS3 is re-distributed from a predominantly diffuse cytoplasmic localization to punctate sites co-localizing with TRIM69 ( Fig 6A). The co-localization of TRIM69 and NS3 was specific, as another viral protein, NS4B, did not co-localize with TRIM69 ( Fig 6A). Co-IP assays were performed to further confirm the physical interaction between TRIM69 and NS3. IP of NS3 with Flag antibody successfully coprecipitated TRIM69-Myc (Fig 6B). Likewise, the reciprocal test using Myc antibody could immunoprecipitate TRIM69 with NS3 ( Fig 6C). Furthermore, endogenous TRIM69 also interacted with NS3 (Fig 6D and 6E) from DENV infected cells. Finally, a GST pulldown assay also confirmed that purified TRIM69 protein interacts with GST-NS3 directly (Fig 6F).
The interaction between mTRIM69 and NS3 was also investigated in mouse cells. mTRIM69 also co-localized and interacted with NS3 in B16F10 cells (S8A and S8B Fig). These data suggest that TRIM69 interacts with DENV NS3.
TRIM69 is an ubiquitin ligase of DENV NS3
Since TRIM69 is an E3 ligase, we next investigated whether NS3 is ubiquitinated by TRIM69. As shown in Fig 7A, overexpressing TRIM69, but not TRIM69 CA, led to NS3 degradation; [65,71]. This is consistent with results described previously and the smaller protein may have arisen from internal initiation of translation). however, this degradation was blocked by MG132. When the ectopically expressed NS3 was immunoprecipitated by Flag, we observed ubiquitination modifications on NS3, and the ubiquitination of NS3 was obviously increased in the presence of TRIM69-Myc, but not of TRIM69 CA-Myc (Fig 7B). We also detected more endogenous ubiquitin conjugated to NS3 in the presence of TRIM69, but not TRIM69-CA (Fig 7C). Consistent with this, the ubiquitin ligated to NS3 was significantly reduced when TRIM69 was knockdown (Fig 7D). Finally, an in vitro ubiquitination assay further confirmed that TRIM69 can directly ubiquitinate NS3 in the presence of ubiquitin E1 and E2 in a cell-free system (Fig 7E).
Lys104 of NS3 is an ubiquitination site for TRIM69
DENV-2 NS3 contains 46 lysine residues. Seven (Lys15, Lys90, Lys104, Lys170, Lys489, Lys515, and Lys584) of these were predicted to be potential ubiquitination sites by the UbPred program (http://www.ubpred.org/). To determine the NS3 ubiquitination sites by TRIM69, we replaced each of the seven NS3 lysine residues noted above individually with arginine. Immunoprecipitation with anti-Flag and immunoblot analysis of ubiquitin demonstrated that K104R substitution significantly decreased the ubiquitination of NS3 by ectopically expressed TRIM69 (Fig 8A). Furthermore, NS3 WT, K90R, and K104R were transfected into 293T cells together with or without TRIM69-Myc. The immunoblot analysis showed that the expression of NS3 WT and K90R were significantly reduced via TRIM69 ectopic expression, however, K104R was not (Fig 8B).
To further confirm the Lys104 is a potential ubiquitination site of NS3 for TRIM69, we constructed a mutant DENV-1 DGL2 replicon (NS3-K104R) via site-directed mutagenesis, in which the Lys104 of NS3 being replaced by arginine. DGL2 NS3-WT and NS3-K104R were transfected into 293T cells individually together with or without TRIM69-Myc. The results revealed that the replication of DGL2 reduced with TRIM69 ectopic expression, however, the replication of NS3-K104R did not ( Fig 8C). All the data illuminate that Lys104 is an ubiquitination site of NS3 for TRIM69.
Discussion
This work has illustrated a TRIM family member, TRIM69, as a key host factor needed to restrict DENV infection. TRIM69 mRNA was found to be overexpressed in 293T cells infected with DENV-2 as determined by RNA-Seq analysis. A significant amount of differential expressed genes found in virus infected cells have been described as signaling pathway molecules involved in antiviral innate immunity (S1 Fig). 52 out of the 99 upregulated genes are predicted ISGs, such as TRIM69, LGALS3BP, C19ORF66, DDX60, and HELZ2 (S2 Fig). All of these putative ISGs were upregulated after DENV infection (S2 Fig). Consistent with our results, two recent studies also report that both C19ORF66 and HELZ2 are induced by IFN and suppress DENV replication [55,56]. A previous report identified that TRIM69 is induced in peripheral blood cells upon type I IFN stimulation [50], here we show that TRIM69 is also upregulated in 293T, HUVEC, and HFF cells by IFN-β stimulation or virus infection (Fig 1), and has antiviral properties against DENV infection.
Many TRIM family proteins are involved in regulating signaling pathways such as Toll-like receptors (TLRs) and RIG-I-like receptors (RLRs) which are needed for viral detection and innate immune responses [57]. For example, TRIM12c interacts with TRAF6, leading to a and HA-Ub were co-transfected into 293T cells together with shNC or sh69-2 for 48 h. (E) In vitro ubiquitination of NS3 by TRIM69. In vitro ubiquitination assay was performed using an E3 Ligase Auto-Ubiquitination Assay Kit (Abcam) according to manufacturer's instructions. Representative blots from three different repeats were shown.
https://doi.org/10.1371/journal.ppat.1007287.g007 [58]. TRIM38 negatively regulates TLR3/ 4-mediated innate immune and inflammatory responses [59]. TRIM13 acts as a negative regulator for MDA5-mediated type I interferon production [60]. Since TRIM69 is an ISG, we also tested whether or not it participates in IFN-induced signal pathway. Unlike other reported TRIM family members, TRIM69 did not influence SeV-induced IFN-β production or IFN-β/ SeV induced ISRE promoter activation, which were consistent with the findings from a previous screening [33]. They found roughly half of the 75 TRIM family members modulated the interferon response, but TRIM69 was not in the list [33]. These results suggest that TRIM69 has no influence on IFN production or IFN function. In line with this, we also found that TRIM69 did not influence other virus infection, such as H1N1 or HSV-1 (S4 Fig). These results suggest that TRIM69 may use a specific mechanism to restrict DENV infection, independent of interferon pathway.
Some TRIM proteins have been demonstrated to have direct antiviral activity, including TRIM5α, TRIM22, and TRIM79α [41,44,[61][62][63][64]. To further investigate the mechanism of TRIM69 on inhibiting DENV replication, IP-MS analysis was performed to search for host and viral proteins interacting with TRIM69. DENV NS3 was found directly interacts with TRIM69 and degraded via TRIM69 ectopic expression (Fig 5). Then, we tested whether TRIM69 also influences the function of NS3. A recent study suggested that a K27-linked ubiquitination of NS3 enhance the interaction of NS3 and NS2B, thereby promotes the cleavage STING by NS2B3 complex [65]. We found that overexpression of TRIM69 impaired the cleavage of STING by NS2B3 (S7A Fig). This is reasonable, since TRIM69 targets NS3 to degradation, NS2B3 level will also be decreased (S7A Fig). While, we found TRIM69 seems not influence the interaction efficiency of NS2B and NS3 (S7B Fig). A possible reason is that our other preliminary experiments suggest TRIM69 may influence K11-linked ubiquitination of NS3, rather than previously reported K27-linked ubiquitination [65]. And K11-linked poly ubiquitination can mediates protein degradation in a proteasome dependent manner [66]. Further experiments will be required to address the detailed ubiquitination form on NS3 mediated by TRIM69.
In this study we found, TRIM69 acts as an IFN-β-stimulated ISG and has antiviral activity via its RING domain. As an E3 ubiquitin ligase, TRIM69 was reported to restrict DENV replication by direct ubiquitination of NS3 which leads to NS3 degradation. The viral protease NS3 is highly conserved throughout the Flavivirus genus and necessary for viral replication and immune evasion [67,68]. We next will further investigate whether TRIM69 acts as a broadspectrum restriction factor for all the closely related mosquito-borne flaviviruses.
Ethics statements
The HUVEC (Human Umbilical Vascular Endothelium Cells) and PBMC (human Peripheral Blood Mononuclear Cells) were obtained from BeNa Culture Collection (Bejing, China). All samples were anonymized and the projects using of human biological specimens were approved by an institutional review board (IRB) of Soochow University.
Animal experiments were conducted according to the Guide for the Care and Use of Medical Laboratory Animals (Ministry of Health, People's Republic of China) and approved by the TRIM69-Myc. The supernatants from the cells were harvested at indicated time-points to detect the luciferase activity. Results are expressed as mean ± SEM. Ã p < 0.05, ÃÃ p < 0.01, and ÃÃÃ p < 0.001. The data shown are representative of at least 3 independent experiments. https://doi.org/10.1371/journal.ppat.1007287.g008 Animal Care and Use Committee as well as the Ethical Committee of Soochow University (SYSK- (S2012-0062)).
Construction of stable cell lines
293T cells were transfected with pLV-TRIM69-Flag or pLV-TRIM69CA-Flag, and selected with puromycin (2 μg/mL) for at least 3 weeks. The overexpressions of TRIM69 and TRIM69CA in selected stable cell lines were confirmed by Western Blot. Similarly, shTRIM69 (U6-shRNA-GFP-puro) and TRIM69 -/cell line (pX462) were constructed by puromycin selection followed by single cell clone culture and Western Blot identification.
RNA-Seq and data analysis
Total RNA was collected from DENV-2 infected (or non-infected) 293T cells at 48h postinfection. cDNA libraries were prepared through the sequential use of the RNeasy Mini Kit with On-Column DNase Digestion Set (QIAGEN, Venlo, Netherlands), Dynabeads mRNA DIRECT purification Kit and Total RNA-Seq Kit v2 (Thermo Fisher Scientific). The transcription sequences were sequenced using an Illumina Hiseq2000, and the total base number was more than 20 Gb per sample. RNA-Seq de novo assembly was performed using Trinity. Get ORF in EBOSS were used to find protein from contigs.
Dual-Luciferase Reporter (DLR) assays
100 ng expression plasmid, 50 ng IFN-β-Luc/ISRE-Luc, and 10 ng pRL-TK (internal control) were co-transfected into 293T cells plated in 96-well plates. Then the cells were treated with SeV infection or IFN-β (200 U/ml) stimulation where indicated. 24 hours later, cells were harvested and the DLR assays were performed with a luciferase assay kit (Promega, Madison, WI). All reporter assays were completed at least in triplicate, and the results were shown as average values ±standard deviations (SD) from one representative experiment.
DENV replicon Gaussia luciferase reporter assay
In 96-well plates, 50 ng of DGL2 replicon plasmid was transfected into 293T cells stably expressing TRIM69 (or TRIM69 CA) or TRIM69 silenced (shTRIM69 or TRIM69 -/-) cells. For the Gaussia luciferase assay, culture supernatants were collected at different time points and luciferase was measured using BioLux Gaussia Luciferase Assay Kit (New England Biolabs) according to manufacturer's instructions.
For co-localization study, plasmids with NS3-Flag or NS4B-Flag were transfected into Hela cells together with or without hTRIM69-Myc. To investigate the co-localization of mTRIM69 and NS3, NS3-Flag was transfected into mouse B16F10 cells together with or without mTRIM69-Myc. All the cells were treated with MG132 (20 μM) for 4 h before fixation.
Virus titration
The titers of DENV-2 in cell-free supernatants were determined with a median tissue culture infective dose (TCID 50 ) assay according to standard protocols on Vero cells [69]. Briefly, Samples were serially diluted and inoculated into Vero cells in 96-well plates. After 5-day incubation, cells were examined for cytopathic effects (CPE) under a light microscope. The virus titer (TCID 50 /ml) was calculated using the Reed-Muench method. 1 TCID 50 /ml was equivalent to 0.69 pfu/ml [69,70].
Immunoprecipitation (IP) assays
TRIM69-Flag plasmid was transfected into 293T cells and then infected with DENV-2. The cells were treated with MG132 (20 μM) for 4 h before lysed with RIPA buffer (25 mM Tris•HCl pH 7.4, 150 mM NaCl, 1% NP-40, 1 mM EDTA, 5% glycerol) together with Protease Inhibitors (CST). Samples were centrifugated for 10 min to remove cellular debris. The lysates were incubated with Flag Ab conjugated agarose beads (Sigma-Aldrich) overnight at 4˚C. After immunoprecipitation, proteins were separated on SDS-PAGE gels (Invitrogen) and stained with coomassie blue staining. Gel slices were excised and proteins were reduced with 10 mM DTT prior to alkylation with 55 mM iodoacetamide. Peptides were extracted and analyzed by nano-LC-MS/MS (ekspertnanoLC, TripleTOF 5600-plus, AB Sciex, USA).
For co-immunoprecipitation (Co-IP) assays, NS3-Flag was transfected together with or without human or mouse TRIM69-Myc constructs. The lysate was incubated with Myc Ab overnight at 4˚C. Then protein A/G was added into the lysate and incubated for 4 hours. The beads were then washed four times and western blot analysis was performed to detect NS3 and TRIM69. TRIM69 antibody and NS3 antibody were used to in the Co-IP of endogenous TRIM69 and DENV NS3.
For GST pulldown assay, recombinant GST-NS3 and GST control were incubated with immunoprecipitation purified TRIM69-Myc protein. The proteins pulled out by GST agarose were analyzed by western blots.
Ubiquitination assays
NS3-Flag and HA-Ub were co-transfected into 293T cells together with or without TRIM69-Myc. The cells were then treated with MG132 (20 μM) for 4h and lysed by RIPA buffer with PI and NEM. All the samples were heated at 95˚C for 5 min prior to affinity purification in 1% SDS to remove NS3 interacting proteins. Then the Flag Ab conjugated agarose beads were added into the samples separately. Following incubation overnight at 4˚C, the samples were examined via western blotting.
In vitro ubiquitination assay was performed using an E3 Ligase Auto-Ubiquitylation Assay Kit (Abcam) according to manufacturer's instructions. Briefly, immunoprecipitated NS3 were incubated with purified recombinant TRIM69 (or TRIM69 CA), E1 (Hdm2), and E2 (UbcH5a) in the presence of ATP. The in vitro ubiquitination of NS3 was analyzed by western blots.
DENV infection of TRIM69 lentiviruses treated mice
The lentiviral shRNA against mouse TRIM69 and matched control lentiviral vector were transfected into 293T cells together with the relative packaging plasmids. Lentiviruses were produced from the cells after 72 h transfection, and purified by ultracentrifugation. Then the 5×10 7 pfu of lentiviruses derived from shm69-1 were injected into mice caudal vein. 7 day post lentiviruses injection, mice were challenged with DENV-2 (1×10 7 pfu) via intravenous injection. 3 days post DENV infection, mice were sacrificed and the Lung, Spleen and Kidney organs were dissected to monitor the DENV replication.
Statistical analysis
Prism 7 software (GraphPad Software) was used for charts and statistical analyses. The significance of results was analyzed by an unpaired two-tailed ANOVA test or Student's t-test with a cutoff P value of 0.05. | 6,251 | 2018-08-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
What’s Wrong with Evolutionary Causation?
This review essay reflects on recent discussions in evolutionary biology and philosophy of science on the central causes of evolution and the structure of causal explanations in evolutionary theory. In this debate, it has been argued that our view of evolutionary causation should be rethought by including more seriously developmental causes and causes of the individual acting organism. I use Tobias Uller’s and Kevin Laland’s volume Evolutionary Causation as well as recent reviews of it as a starting point to reflect on the causal role of agency, individuality, and the environment in evolution. In addition, I critically discuss classical philosophical frameworks of theory change (i.e. Popper’s, Kuhn’s and Lakatos’) used in this debate to understand changing views of evolutionary causation.
Introduction
Causality and evolution are a problematic couple. In the last decades, biologists and philosophers of biology have increasingly questioned the standard view of evolutionary causation according to which the central causes of evolution are genetic variation (causing trait variation), inheritance (causing transmission of trait variation and resemblance between parents and offspring), and natural selection (causing adapted traits to spread in populations). This critical stance is spelled out in at least five different ways: It has been argued that (a) natural selection has no or merely little causal power in evolution (Goodwin 1994), (b) natural selection cannot explain all evolutionary relevant processes (Laland et al. 2015), (c) the ultimate-proximate distinction excludes developmental causes from evolutionary explanations (Laland et al. 2013), (d) natural selection is not a causal force working on the levels of populations (Sober 1984), but a statistical epiphenomenon resulting from the reproduction and survival of individuals (Matthen and Ariew 2002), and (e) not genes and populations but the developing and acting organism should be a starting point of evolutionary explanations (Laland et al. 2015;Walsh 2015).
In recent years, these theoretical discussions have coincided with a paradoxical situation in evolutionary biology, in which directly opposing causal explanations of the same developmental phenomena increasingly emerged (e.g., Laland et al. 2014;Wray et al. 2014; see Baedke et al. 2020). For instance, niche construction has been described as a causal starting point of evolutionary trajectories (Laland et al. 2005), or as nothing but an 'extended phenotype' (Scott-Phillips et al. 2014). Developmental plasticity has been explained as facilitating and directing evolutionary processes (reviewed in Sultan 2017; Uller et al. 2020) or as merely an adaptation to environmental stochasticity. In addition, epigenetic inheritance has been seen as an (partly) independent cause of variation (Jablonka 2017) and as being caused by genetic programs (Dickins and Rahman 2012). One might see these conflicts as hinting towards opposing causal frameworks underlying evolutionary biologists' research practices and explanations.
This situation constitutes the theoretical and empirical background of Tobias Uller's and Kevin Laland's new volume Evolutionary Causation (MIT Press, 2019). It emerged from a workshop on 'Cause and Process in Evolution' held at the Konrad Lorenz Institute for Evolution and Cognition Research in 2017 (Baedke 2017). It contains 15 chapters by evolutionary biologists and philosophers of biology that offer important and novel perspectives on how different views of causation can bias, limit, enrich, or expand evolutionary theory, and thus affect our views on what phenomena require evolutionary explanations and how they should be explained. From the biological side, these contributions focus on topics such as the origin of variation and bias in variation (Stoltzfus, chapter 3; Moczek, chapter 4), environmental induction (Dayan et al. chapter 5), phenotypic plasticity and extra-genetic inheritance (Sultan, chapter 6; Watson and Thies, chapter 10), niche construction (Laland et al.,chapter 7;Duckworth,chapter 8;Watson and Thies,chapter 10;Otsuka,chapter 12;Chiu,chapter 14), and evolutionary transitions from individuals to collectives (Helanterä and Uller, chapter 9; Watson and Thies, chapter 10). From the philosophical side, contributions discuss the suitable level of organization on which evolutionary relevant causes are located, i.e. the level of individuals or populations (Walsh,chapter 11), how certain ontological assumptions about the set of evolutionary relevant causes affect methodological decisions on how to study these causes (Otsuka, chapter 12), the role time scales play in accepting different slower or faster processes as evolutionary relevant ones (Pocheville, chapter 13), how niche construction through the organism's experience of its environment (rather than its manipulation) affects natural selection (Chiu, chapter 14), and how biological information can be conceptualized and measured as a common causal factor of both development and evolution (Stotz, chapter 15).
These views on evolutionary causation are highly diverse and, at first sight, the reader might assume they are too diverse to be able to point towards a coherent novel causal framework for evolutionary biology. Some adopt compositional accounts, in which part-whole relations structure evolutionary relevant causal relations (Watson and Thies). Others highlight a scalar account, in which chosen time scales structure what counts (or should count) as a causal explanation (Duckworth, Pocheville). Others, explicitly or implicitly, develop their positions close to that of developmental systems theory (Oyama 1985;Oyama et al. 2001). This includes, for example, the idea of causal reciprocity (discussed in Pocheville). This view argues that instead of dividing biological causes in two classes, developmental proximate causes and ultimate causes like natural selection (Mayr 1961), both causes should rather be seen as forming causal feedback loops that permanently constitute one another, for example, through niche construction see Buskell 2019). Other chapters explore, in a 'dialectical manner', the causal relations between high degrees of variation (e.g., in phenotypic plasticity) with high degrees of stability and evolutionary stasis (e.g., highly conserved gene regulatory networks, robust ecological interactions; Moczek, Duckworth). However, besides this diversity on views about what qualifies as evolutionary causation, there are also similarities.
Other reviewers of this volume (Dickins 2020; Svensson 2020) have framed similarities between the chapters as emerging from the fact that most of the contributors (are said to) support recent calls for expanding evolutionary theory (see Pigliucci and Müller 2010a; Laland et al. 2015). Subsequently, these reviews critically discuss the historical narrative of how the view on evolutionary causation defended in the so-called 'extended evolutionary synthesis' (EES) differs from that of the modern synthesis. They also question whether the historical narrative about the limitations of the modern synthesis, widely discussed in the EES debate, is accurate. Here, I will try to avoid these issues as far as possible, especially because in this volume the central issues on causation have been decoupled most widely from questions about the EES. In most cases the EES has been simply ignored (for an exception, see Otsuka). Instead, in order to identify similarities in the causal views presented in this volume we might take a different approach: All authors seem to agree that the above position (a) -defended, for example, in biological structuralism -is a too radical view of evolutionary causation. Instead, most of them believe that our understanding of which causes should do the explaining in evolutionary theory needs to be rethought in more moderate ways along the lines of two arguments: argument (b-c), i.e. the inclusion of developmental causes into evolutionary explanations, and argument (d-e), i.e. the explanatory focus should lie on (or be expanded by) the causes of the individual acting organism.
The first argument, criticizing the "exclusion of development in evolutionary explanations" (Uller and Laland: 4), has been widely discussed before and is a cornerstone of many empirically and theoretically insightful chapters in this volume. Therefore, I will focus on some points related to the second argument that I think are thus far underrepresented in the current debate on evolutionary causation.
Individuals, Agency and Environments
In many of the chapters the organism is attributed a number of important causal roles in evolutionary processes. Among others, the active organism establishes the robustness of ecological interactions through its plasticity and niche constructing 1 3 behaviors (Duckworth), it mediates evolutionary transitions in individuality (Watson and Thies), and its experience of the environment modulates niche construction effects and changes natural selection from an external to a (at least in part) internally constructed cause (Chiu). Such an organism-centered perspective on evolutionary causation is tempting as it is seemingly in line with the many recent findings on how organisms affect the internal origin and transmission of variation and how they externally affect selection pressures. Nonetheless, to provide the organism a special explanatory status in evolutionary theory is also a quite challenging conceptual enterprise (see Baedke 2019). I want to highlight only three issues here related to this challenge. They refer to evolutionary individuality, agency, and the concept of environment.
If we consider organismic individual agents as the core entities that partake both in development and evolution, attempts to integrate the two realms have to show in each case that, in fact, it is the same unit that develops and evolves. In other words, if we want to unify development and evolution through the unit of the biological individual (being the one entity that partakes in both) this unit needs to meet criteria of both physiological (e.g., metabolic, immunological) and evolutionary individuality. Evolutionary individuals have been traditionally conceptualized as reproductive units with differential fitness and shared lineages, so-called 'Darwinian individuals' (Godfrey-Smith 2009), or as units of selection, so-called 'interactors' (see Hull 1980). Unfortunately, both of these physiological and evolutionary units do not always coincide (Godfrey-Smith 2013; Pradeu 2016). For example, some individuals (e.g., holobionts) form developmental but no reproductive units, as they include a multitude of lineages (e.g., microbial ones). Other possible units of selection (like genes or populations) are not identical with physiological individuals. Thus, a physiological individual may not necessarily be an evolutionary unit or vice versa.
Against this background, and against the discussions in this volume, it might become more fruitful for developmentalist views of evolutionary causation to explore different, less classical definitions of evolutionary individuality. Rather than trying to identify, first and foremost, units of selection or reproduction, the evolutionary relevant individual could be more generally construed as a causal nexus of intra-, inter-and extra-organismic processes (occurring on different developmental and evolutionary time scales) that can affect the origin, direction, and speed of evolutionary processes. This could include a range of different physiological individuals-developing organismic agents-that experience and interact with their environment in ways that affects the availability and character of variation and the organization of groups, populations and whole ecosystems in ways that might have evolutionary-relevant downstream effects. For example, these developmental units might causally contribute to the origin and stabilization of 'Darwinian individuals' at different levels of organization (as discussed in insightful chapters by Tobias and Helanterä as well as Watson and Thies). However, by no means must their evolutionary relevance be restricted to necessarily satisfying the above classical criteria of evolutionary individuality. In short, linking developmental with evolutionary causes through the unit of the (organismic) individual might necessitate approaching new and possibly broader ways to conceptualize evolutionary individuality, apart from reproductive and fit units.
This also includes clarifying what organismic agency actually should mean in a developmentally-informed causal framework of evolution. The present volume highlights 'active phenotypes' (Watson and Thies), 'active agents' and 'purposive organisms' (Laland et al.). Unfortunately, a consistent theory of agential causation that would strengthen especially the status of niche construction as a theory is missing. Attempts to provide such a framework (e.g., Laland et al.) draw on classical understandings of purposefulness of organisms through thermodynamics and selforganization (see Schrödinger 1944; see also Nicholson 2018; Baedke 2019). However, it remains unclear how this framework can incorporate those cases of purposeful behavior of organisms that include, for example, the experiential side of niche construction (Chiu; see also Sultan 2015). It should be able to flesh out the different causal roles the agent is performing in changing its environment (by modifying it) and by changing its relation to its environment (by experiencing it). There is an important analytical difference between the two that any theory of agency needs to incorporate, as both cases can have very different evolutionary effects.
This challenge might make necessary delving deeper into widely neglected issue of organismic teleology. According J.B.S. Haldane "Teleology is like a mistress to a biologist: he cannot live without her but he's unwilling to be seen with her in public" (attributed to Haldane by Pittendrigh ;Mayr 1988: 63). While some have begun to directly address the issue of teleology and how it could be linked to evolutionary causation (e.g., Walsh 2015), in the present volume this difficult task is mostly left untouched. Earlier proposals for organism-centered views of evolution that have tried to develop a non-vitalist framework of organismic goal-directedness might be used as a stepping stone for such an enterprise (see Haldane 1917;Schaxel 1919;Russell 1924Russell , 1945Bertalanffy 1928). Against longstanding anti-teleological traditions in evolutionary thought, these teleological and constructionist views argued, in line with more recent approaches, that the organism purposefully molds itself and its environment in development and evolution, like "clay modeling itself" (Russell 1924: 61). However, these approaches should be also seen as a historical warning sign. They were never able to develop enough argumentative persuasiveness, possibly due to their lack of formalization, in order to advance into mainstream evolutionary reasoning.
Another related issue concerns the concept of environment. The volume's contributions explore in detail various forms of organism-environment and genotypeenvironment relationships, but what the environment itself is and how it should be causally explained is only addressed in passing (but see Moczek, Chiu). The authors usually adopt a causal framework which, due to the dominant agential status of organisms, makes them reject externalist perspectives of the selective environment (see Moczek 2015). As Waddington (1959Waddington ( : 1636 described this view: "Natural selection is far from being as external a force as the conventional picture might lead one at first sight to believe." In this standard view, the environment is conceptualized as a "source of error that reduces precision in genetically studies," and thus one has "to reduce it as much as possible" (Falconer 1960: 140). In contrast, according to the internalist or constructionist perspective, organisms (co-)construct their environments and, as a consequence, themselves. Rather than adjusting their traits to suit their 'external' local environments, organisms alter their environments in a flexible manner so that these environments suit their traits. Thus, the causal realm of the environment is no longer constituted 'from within' but through the organismic agent.
While this view has become widely accepted in developmentalist accounts to evolutionary causation, it lacks clarity. What is needed is a classification of the different 'environments' identified in evolutionary research. These include, among others, collective or individual, general or unique, homo-or heterogeneous, invariant or spatio-temporally flexible, selective or constructed, passive or actively generative, external or internal, and experienced or 'acted on' environments. Which disciplinary or experimental settings in evolutionary biology favors certain of these views of the environment and why? Which views of the environment do recent trends towards organism-environment reciprocity and individual environments favor, and why? Furthermore, what are the epistemological and methodological challenges going along with each of these concepts? These and related conceptual questions should be addressed head-on to support first attempts to empirically fleshing-out the character of different environments, like organism-constructed environments (Clark et al. 2020). This will also allow clarifying the different causal roles the environment plays in evolution as well as identifying the problems different views on the environment pose for experimental setups and explanatory standards in evolutionary research. The present lack of conceptual clarity may lead to ambiguities about the boundaries of environments. What counts as environment in one field may be understood as (part of) an organism in another. In addition, this situation can contribute to poor communication and impossibility of collaboration across fields.
Beyond Popper, Kuhn and Lakatos
Another question this volume triggers is whether the views on evolutionary causation presented here impose some kind of theory change to the field (see Dickins 2020, Svensson 2020. Theory change has been one of the most debated topics in the history of philosophy of science. In the 20th century it was a central topic of works by authors like Karl Popper, Thomas Kuhn and Imre Lakatos. According to these models, scientific change occurs through falsification of theories (Popper), through revolutionary breaks with past theories (Kuhn), or, as a middle ground between the two, though modifications of research programs (with a hard core and protective belt of auxiliary hypotheses) that have strategies to protect themselves from falsification (Lakatos). Even though these approaches largely focused on addressing cases of scientific change in physics and mathematics, and have problems in application on biology, they had quite an impact on how biologists reasoned about change in their own field (see Mayr 1976Mayr , 1982Lewontin et al. 1984). And while philosophy of science has long moved on from these rather simplistic and in many ways problematic views of scientific change from the early and mid-20th century, many biologists today are still quite happy with discussing them. They are widely applied to understand whether or not the causal framework provided by evolutionary theory is currently changing (see Dickins and Rahman 2012;Tanghe et al. 2018, Laland et al., Otsuka; for discussion, see Fábregas-Tejeda and Vergara-Silva 2018).
Let us have a closer look at one of these attempts by Dickins (2020), and at what views of current evolutionary theory the above philosophical approaches lead to. By drawing on Lakatos, Dickins argues that the best confirmed and thus best protected theoretical core of evolutionary theory (described by the modern synthesis) is the theory of natural selection, flanked by other explanations of how populations change (e.g., through drift). This core is the only legitimate starting point to develop models that lead to a better understanding of evolutionary phenomena: What natural selection does is enable the construction of falsifiable hypotheses about particular biological systems. As such, the MS [modern synthesis] might be seen as a viable research program, following Lakatos. (Dickins 2020: 513) In contrast, developmentalist views of evolution-including their different causal frameworks like reciprocal causation-are given a different status. According to Dickins, they merely address feedback effects that should be studied first and foremost in ecology, not evolution. They cannot be included into the theoretical core of evolutionary theory, because these theories on the origin of variation, developmental bias, inclusive inheritance and niche construction, do actually not challenge natural selection directly. In contrast, "these challenges are really around quibbles with regard to specific models" (ibid.). Thus, following Lakatos and Popper, they address and try to falsify auxiliary hypothesis derived from the theoretical core. In other words, they ask for different constrains and other degrees of abstraction in models of natural selection by more strongly considering developmental processes. Dickins adds: Nonetheless the MS should welcome a fully worked theory of the emergence of the phenotype, of variation, and of inheritance and so all of this work needs to be considered. What is not presented is a real challenge to the core axioms of the MS. […] My suspicion is that true and productive challenges to evolutionary biology will arise from efforts directed toward the origins of life itself, and constraints upon this afforded by physics. (Dickins 2020: 513) Two problems should be highlighted here: First, why, at all, should a critique of evolutionary biology only be considered a valid contender or 'real challenge' if it criticizes (or tries to falsify) the theory of natural selection? In order to meet this high bar, the current developmentalist framework would basically have to defend a position similar to that of biological structuralism (see point (a) above), which failed in changing evolutionary theory in the 1980s-1990s largely because it argued for a too radical theory of evolution without natural selection. This shows that, if we assume that natural selection forms (large part of) the theoretical core of a Lakatosian research program in evolutionary biology, this automatically constrains the conditions under which evolutionary theory could 1 3 be changed. This assumption can easily distort understandings of how the current debate affects and possibly reshapes evolutionary theory. Second, why should only more fundamental sciences like physics and investigations of more causally up-stream events like the origin of life, be able to address the core of evolutionary theory? This only makes sense within a view of evolutionary theory that rests on the existence of certain "hard-core axioms" (ibid.). However, axiomatic reasoning can hardly be considered adequate for explanatory practices in biology; nor is any dogmatic stance that sees natural selection as no longer in need of proof within biology (see, e.g., Mayr 2002: 26).
As we have seen, when starting from classical philosophical frameworks of theory change, there is a danger of introducing stereotypical, axiomatic (maybe even dogmatic) and oversimplifying views of theory change to the current evolutionary debate. However, this danger exists for both critics and defenders of developmentalist approaches to evolutionary causation. Currently, the debate seems somewhat trapped within a theoretical framework that understands theory change to happen within a spectrum between normal science and revolutions, with more or less radical changes in research programs. This limitation is at least in part due to philosophers of science who so far have largely failed to provide scientists in the field with more accurate conceptual frameworks that could help them reasoning about and framing opposing views of evolutionary explanations. In other words, philosophers have failed to enrich these biological debates through fruitful active engagement (of the form requested by Pigliucci in his chapter). This should certainly not mean that there exists no theory change in the field. However, this issue is not related to questions about falsifications, revolutions, and axiomatic cores of theories.
What could an alternative approach look like? What developmentalist explanations of evolution are thought to provide is a usually neglected causal perspective that should broaden our understanding of evolution (Uller and Laland; see also Laland et al. 2015;Uller et al. 2020). Adding explanations of developmental causes, like developmental bias, phenotypic plasticity, niche construction, and inclusive inheritance, to the causal picture of evolutionary theory should lead to "more complete explanations" (Laland et al. 2015) and a "significantly expanded explanatory capacity" (Pigliucci and Müller 2010b: 12). In short, these explanations should increase the explanatory power of evolutionary theory. What exactly does this mean? What we learned from the debate so far is that explanatory power seems to be not (or not directly) linked to how much an explanation is supported by empirical evidence. In fact, while there is often agreement in evolutionary biology over the existence of these developmental phenomena Wray et al. 2014;Svensson 2020;Dickins 2020), at the same time, their explanatory relevance is questioned (Wray et al. 2014;Futuyma 2017;Svensson 2020;Dickins 2020). In short, developmental causes are accepted, but their explanatory power is not.
Against this background, one has to understand due to which explanatory virtues developmentalist explanations are better, and which tradeoffs between explanatory standards-like precision (specificity), proportionality, sensitivity, and idealization-those accounts face that seek to integrate developmentalist and genetic explanations of evolution (Baedke et al. 2020). In other words, this perspective highlights a narrative according to which evolutionary theory, expanded by developmental causes, currently faces tensions between different explanatory standards. New explanations are accepted or rejected based on criteria of explanatory power entrenched in the field. If developmentalist explanations do not meet these criteria (like a specific degree of precision, sensitivity or proportionality) scientists are skeptical whether they carry explanatory power and increase our understanding of evolution. In such cases, the integration of developmentalist and populationist views within a more pluralist framework of evolutionary causation, as requested by many authors of this volume (e.g., Pigliucci, Dayan et al., Watson and Thies, Walsh, Otsuka), is rejected by critics. In addition, this perspective on explanatory power highlights the necessity of evolutionary biologists to stop reasoning about theoretical cores, falsifiability and about whether we currently witness a gradual or revolutionary theoretical change in the field. Instead, they have to start jointly reflecting on which explanatory standards they want their evolutionary explanations and models to hold in the future.
Conclusions
Evolutionary Causation provides an excellent collection of empirical and theoretical papers. They will surely work as stimulating starting points for both biologists and philosophers to reflect on what we consider to qualify as a cause and causal explanation in evolutionary theory. This is despite-or perhaps because-of the fact that the views presented do not form a coherent whole. Interestingly, this positive assessment is shared by more critical readers of this volume, who, at the same time, take the views presented to be largely in agreement with standard views of evolutionary causation and thus to be not that new after all (Dickins 2020;Svensson 2020). This might be due to a general reluctance of critiques to rework conceptual frameworks with proven quality or to rethink entrenched explanatory standards. But this might also be because the causal picture presented by advocates of developmentalist perspectives still lacks argumentative strength and persuasiveness (e.g., when it comes to agency, individuality and the environment). One thing is for sure. Both sides don't do the debate a service in forcing it into simplistic frameworks of theory change. These approaches cannot do justice to the diversity of views of causation we currently witness in evolutionary biology. And they cannot provide guidance towards research practices that successfully integrate opposing views and explanatory standards within one research framework. | 5,843.6 | 2020-04-13T00:00:00.000 | [
"Philosophy",
"Biology"
] |
Pro-caspase-3 Is a Major Physiologic Target of Caspase-8*
The apoptotic signal triggered by ligation of members of the death receptor family is promoted by sequential activation of caspase zymogens. We show here that in a purified system, the initiator caspases-8 and -10 directly process the executioner pro-caspase-3 with activation rates (k cat/K m ) of 8.7 × 105 and 2.8 × 105 m −1 s−1, respectively. These rates are of sufficient magnitude to indicate direct processingin vivo. Differentially processed forms of caspase-3 that accumulate during its activation have similar rates of activation, activities, and specificities. The pattern and rate of caspase-8 induced activation of pro-caspase-3 in cytosolic extracts was the same as in a purified system. Moreover, immunodepletion of a putative intermediary in the pathway to activation, pro-caspase-9, was without consequence. Taken together these data demonstrate that the initiator caspase-8 can directly activate pro-caspase-3 without the requirement for an accelerator. The in vitro data thus help to deconvolute previous in vivo transfection studies which have debated the role of a direct versus indirect transmission of the apoptotic signal generated by ligation of death receptors.
Regulation of apoptosis is vital to the development and long term survival of metazoan animals. Apoptosis is required to maintain the balance between cell proliferation and cell death and, therefore, disruptions in the apoptotic program are associated with pathologies such as cancer, where there is too little cell death, and degenerative diseases, where there is too much cell death. Apoptosis can be initiated by at least three types of signals: (i) specific ligation of members on the tumor necrosis factor receptor (TNFR-1) 1 family, which includes Fas/Apo-1/ CD95; (ii) cellular stress, which includes genotoxic damage and anti-neoplastic drugs; and (iii) delivery of granule-associated serine proteases from cytotoxic lymphocytes into target cells. Key mediators that initiate and execute the apoptotic program are members of the caspase family of cysteine proteases whose activation is believed to be essential for virtually all forms of apoptosis (1)(2)(3). Caspases-3, -6, and -7 are involved in the execution of cells in response to many apoptotic stimuli including ligation of death receptors of the TNFR-1 receptor family, resulting in cleavage of a number of proteins whose limited proteolysis is definitive of apoptosis. However, these executioner caspases are not directly activated by receptor ligation, but rely on the proteolytic activity of upstream initiator caspases-8 and -10 (4 -6). In the case of caspase-8 the activation occurs by recruitment of the zymogen to the cytosolic face of the death receptor, such that the initial proteolytic signal originates by autoprocessing of the clustered zymogen (7,8).
At the execution phase, caspase-3 seems to be upstream of caspases-6 and -7 and, therefore, its activation represents a key point in transmission of the proteolytic signal (9). However, the exact mechanism of how the death signal is conveyed from caspase-8 to caspase-3 remains unresolved. Is the apoptotic signal transmitted by direct activation of the executioners by the initiators, thus constituting a minimal two-step cascade that serves to mediate the apoptotic signals, or is the signal further amplified by the presence of additional factors as suggested by Scaffidi et al. (10).
To address these issues we have performed a detailed kinetic study of the activation of pro-caspase-3 by caspases-8 and -10 using recombinant zymogens and active proteases in a defined system, and compared this to the activation of the zymogen by caspases-8 and -10 in cytosolic extracts. This allows us to predict the sequence of events that results in transmission of the proteolytic death signal originating from the initiator caspases to the executioners, and on the basis of in vitro observations, test the hypothesis that additional amplifiers and regulators of the apoptotic signals are required in vivo.
EXPERIMENTAL PROCEDURES
Materials-Active caspase-3, -8, and -10 were expressed in Escherichia coli and isolated as described previously (4,6,11). The expression constructs for caspase-3 contained a His-6 tag at the C terminus of the full-length protein, while caspases-8 and -10 contained N-terminal His tags. Pro-caspase-3 was obtained by reducing the induction time in the presence of 0.2 mM isopropyl-1-thio--D-galactopyranoside to 30 min. The concentrations of the purified caspase-3 zymogen was determined from the absorbance at 280 nm based on the molar absorption coefficient calculated from the Edelhoch relationship (12): caspase-3 (⑀ 280 ϭ 26000 M Ϫ1 cm Ϫ1 ). The concentration of caspases-8 and -10 were determined by active site titration using carbobenzoxy-Val-Ala-Asp-fluorom-ethyl ketone from Bachem AG, Switzerland. Carbobenzoxy-Asp-Glu-Val-Asp-7-amino-4-trifluoromethyl coumarin (Z-DEVD-AFC) and carbobenzoxy-Ile-Glu-Thr-Asp-7-amino-4-trifluoromethyl coumarin (Z-IETD-AFC) were purchased from Enzyme System Products (Dublin, CA). Dithiothreitol was from Diagnostic Chemicals Limited (Oxford, CT). Sucrose was from Mallickrodt (Paris, KY). All other chemicals were from Sigma. Mouse monoclonal anti-caspase-3 was purchased from Signal Transduction Laboratories. All SDS-PAGE were performed using 8 -18% acrylamide gels in the 2-amino-2-methyl-1,3-propandiol/ glycine discontinuous buffer system as described in Ref. 13. After electrophoresis the gels were either stained using Gel-Code (Pierce) according to the manufacture's protocol or Western blotted to Immobilon-P (Millipore) according to the procedure of Matsudaira (14). Western blot analysis were performed using the ECL kit (Amersham) according to the manufacture's protocol using rabbit anti-caspase-3, anti-caspase-9, or anti-PARP (Biomol). Rabbit anti-caspase-9 was generated from animals immunized with purified recombinant caspase-9 that had been expressed in E. coli, essentially as described by (15). Rabbit anti-sera for caspase-3 was prepared as described previously (16). Granzyme B was purified as described previously (17). Dr. Guy Porier kindly provided purified bovine poly(ADP-ribose)polymerase (PARP). The baculovirus caspase inhibitor p35 was obtained as a recombinant protein purified following expression in E. coli (18).
Design of Pro-peptide Mutants of Caspase-3-The D9A and D28A single mutants were prepared by PCR using the High Fidelity DNA polymerase (Boehringer) as described previously (19). PCR reactions were performed using the full-length gene encoding caspase-3 in pET-23b (Novagen) and the primers D9A (5Ј-ACTGAAAACTCAGTGGcTag-cAAATCCATTAAAAAT-3Ј) and D28A (5Ј-CATGGAAGCGAATCAATG-GcCTCcGGAATATCCCTG-3Ј) which introduces a NheI and a BspEI restriction site (bold), respectively, combined with standard primers matching the regions flanking the polylinker of pET-23b. Mutated nucleotides are indicated in lowercase. The truncated form of caspase-3 was generated in a standard PCR reaction by introducing a start codon at Ser 29 using the primer S29M (5Ј-CATGGAAGCGAATCAATGcatat-gGGAATATCCCTG-3Ј) which introduces a NdeI site at the new start codon, combined with a standard reverse primer matching the region downstream of the polylinker in pET-23b. The resulting PCR products were all introduced directly into pET-23b (caspase-3) expression vector as a NdeI-HindIII fragment. The D9A/D28A double mutant were prepared using the D9A primer combined with a standard reverse primer matching the region downstream of the polylinker in pET-23b and the D28A single mutant as template for the PCR reaction. The resulting PCR products were all introduced directly into pET-23b (caspase-3 D9A) expression vector as a NheI-HindIII fragment, thus, utilizing the restriction site introduced during the first round of mutagenesis. The sequence of the mutated DNA was verified using the Model 373A DNA sequencing system and the Dye terminator cycle sequencing kit from Applied Biosystems Inc.
In Vitro Activation of Caspases 3 by Granzyme B, Caspase-8, and Caspase-10 -The activation of the caspases-3 and -7 was investigated using an on-line activation assay as described previously (20). All reactions were performed in caspase buffer (20 mM Pipes, 100 mM KCl, 10 mM dithiothreitol, 1 mM EDTA, 0.1% CHAPS, 10% sucrose, pH 7.2) (21). The reactions were carried out in a Molecular Devices SpectraMax 340 plate reader thermostated at 37°C, operating in the kinetic mode. Rates of caspase activation were determined as described previously (20). Briefly, various zymogen concentrations in the range (69 -690 nM) were warmed at 37°C in caspase buffer containing 0.2 mM Ac-DEVD-pNA, followed by addition of caspase-8 to a final concentration of 18 nM. Substrate hydrolysis was followed at 5-s intervals and the resulting activation curve fitted by non-linear regression to the equation, Culture and Transfection of MCF-7 Cells-MCF cells were maintained in RPMI 1640 with 10% heat-inactivated fetal calf serum supplemented with 2 mM L-glutamine, 100 units/ml penicillin, and 50 g/ml streptomycin. The caspase-3 deficient MCF-7 line was kindly provided by J. Boothman (University of Wisconsin) (23). The absence of detectable caspase-3 was verified by immunoblotting with anti-caspase-3 rabbit polyclonal antibodies. The absence of caspase-3 is due to a 47base pair deletion within exon 3 of the CASP-3 gene. This deletion results in the skipping of exon 3 during pre-mRNA splicing, thereby abrogating translation of the caspase-3 mRNA (24). MCF-7 cells were transfected with caspase-3 in a retroviral vector (pBabe-puro) (provided by Dr. T. Sladek, Chicago Medical School). After the selection with puromycin, stable expression of the caspase was verified by immunoblotting. MCF-7 control cell lines were generated by transfecting with empty pBabe-puro vector. Stable expression of caspase-3 was verified by immunoblotting after selection.
Preparation of Cytosolic Extracts-For preparation of cell free lysates we used the procedure of Ellerby et al. (25) with minor modifications, the most important of which was omission of exogenous protease inhibitors. Human 293 cells from 12 10-cm Petri dishes were harvested by gentle scraping into phosphate-buffered saline at 4°C, pelleted by centrifugation for 5 min at 200 ϫ g, and subsequently washed once in the same buffer. The cell pellet was resuspended in HEB (20 mM Pipes, 10 mM KCl, 5 mM EDTA, 2 mM MgCl 2 , 1 mM dithiothreitol, pH 7.4) at 4°C and pelleted by centrifugation at 1000 ϫ g. The cell pellet was then resuspended in an equal volume of HEB and allowed to swell on ice for 30 min. The cells were then cracked by passing through a 24-gauge needle and pelleted by centrifugation at 16,000 ϫ g for 30 min, and the supernatant (cytosolic extract) recovered. The quality of the lysates were determined by the ability to support rapid generation of caspase activity by addition of cytochrome c and dATP as well as the inability to generate caspase activity upon addition of dATP alone.
Depletion of Caspases from Cell Free Extracts-Lysates from 293 cells were depleted of caspase-3 or caspase-9 by incubating 200 l of cytosolic lysate with 15 l of monoclonal anti-caspase-3 antibody, rabbit anti-caspase-9 antiserum, or as a control anti-NFB antibody on ice for 30 min. The antibody/lysate mixtures were then added onto drained protein G beads (15 l of packed beads) previously washed with HEB buffer containing 2 mM dithiothreitol and incubated on a rotator for 3 h at 4°C. The protein G beads were removed by centrifugation to allow recovery of the specifically depleted extracts. The depleted extracts were stored in appropriate portions at Ϫ80°C.
Activation Kinetics of Pro-caspase-3 by Caspases-8, -10, and
Granzyme B-To determine the relative efficiency of activation of pro-caspase-3 by physiologic activators, we produced the zymogen in E. coli by reducing the induction times normally used for expression of the mature enzyme to 30 min. Procaspase-3 was obtained as a full-length precursor under these conditions (Fig. 1), and is fully activable by the apical caspases-8 and -10, and also by the cytotoxic lymphocytes serine protease granzyme B (Table I). Thus, we are able to deter-mine the intrinsic activation kinetics of the zymogen under defined conditions. Zymogen at varied concentrations was equilibrated at 37°C in caspase buffer, followed by addition of an activator and analysis of activation rate using the reporter substrate Ac-DEVD-pNa (see "Experimental Procedures"). Background hydrolysis of the reporter substrate by caspases-8 and -10 was subtracted to allow direct analysis of the rate of zymogen activation. The observed rates of activation of caspase-3 depended on the activator. Granzyme B is by far the most efficient (k cat /K m ϭ 4.8 ϫ 10 6 M Ϫ1 s Ϫ1 ), capable of activating the caspase zymogen more than 5.5-fold faster than caspase-8 (k cat /K m ϭ 0.87 ϫ 10 6 M Ϫ1 s Ϫ1 ), which in turn catalyzes the activation about 3-fold faster than caspase-10 (k cat /K m ϭ 0.28 ϫ 10 6 M Ϫ1 s Ϫ1 ).
Kinetics of Activation of Caspase-3 in Cytosolic Extracts-On the basis of the reaction kinetics calculated above, the kinetics of activation in more complex systems may be predicted. Therefore, this information served as a framework for the analysis of cytosolic extracts of 293 cells. First it was necessary to determine the concentration of the substrate, pro-caspase-3, in the extract. This was performed by quantitative Western blot analysis using dilutions of known concentrations of the recombinant zymogen in comparison with the lysate used throughout this study. The approximate concentration of caspases-3 was found to be 100 nM. We focused on the apical caspase-8 and asked whether the kinetics of pro-caspase-3 processing is the same in the cytosolic extract. To address these questions we compared the rate of processing of purified caspase-3 zymogen and endogenous zymogen in the 293 extracts. As shown in Fig. 2, the processing pattern is essentially identical, irrespective of the source of the caspase zymogen, natural or recombinant. Thus the processing of pro-caspase-3 by caspase-8 in the cytosolic extracts may be recapitulated in the purified system indicating that no accelerator or inhibitor of the activation process is required.
Properties of Differentially Processed Caspase-3 N-peptide-A notable feature in the caspase-3 processing pattern, both with the recombinant and natural zymogens, is the appearance of two differently processed forms of the large subunit (Fig. 2), determined by Edman degradation to begin at either residue 1 or 10. Although not apparent in Fig. 2, eventually there is further processing at Asp 28 . These differentially processed forms of the large subunit of caspase-3 have been observed many times in apoptotic cells and have been attributed to autoproteolytic cleavage at Asp 9 and Asp 28 (26). Removal of an N-peptide (sometimes called the pro-peptide) is a classic way of activating proteases (27) and in the case of caspase-1 the removal of the N-peptide has been demonstrated to have significant influence on the properties of the resulting enzyme (28). We used purified recombinant caspase-3 zymogen to test the relevance of the cleavages in the caspase-3 N-peptide to the activation of the zymogen as well as the activity of the differentially processed forms of the enzyme. To address these issues we expressed and isolated the following caspase-3 derivatives: wild type caspase-3 enzyme, caspase-3 (S29M) which lacks the entire N-peptide, caspase-3 (D28A) which lacks the initial 9 amino acids of the N-peptide, and caspase-3 (D9A/D28A) where the N-peptide cannot be removed. Additionally, we expressed the enzyme caspase-3 (D9A), however, this mutation does not prevent the autocatalytic removal of the N-peptide and thus, the resulting enzyme is identical to the wild-type. Caspase-3 and derivatives thereof that were essentially fully processed between the large and small subunits (Asp 297 ) were obtained by expression in E. coli using a 3-h induction, whereas the zymogens were obtained by reduction of the expression time to 20 min (Fig. 1). The authenticity of the N-terminal sequences was confirmed by Edman degradation.
To test whether removal of the caspase-3 N-peptide modulated catalysis we determined the k cat /K m values for the hydrolysis of Ac-DEVD-pNA for the wild type, caspase-3 (S29M), caspase-3 (D28A), and caspase-3 (D9A/D28A). Since the k cat /K m for all the enzymes tested are essentially identical (see Table II), we rule out modulation of catalysis as a function for the N-peptide. Furthermore, comparison of the relative rates of hydrolysis of Ac-IETD-AFC and Ac-DEVD-AFC showed that the enzymes do not exhibit any significant differences in their substrate specificities (data not shown). Thus, the catalytic apparatus does not appear to be sensitive to the presence/ absence of the N-peptide.
TABLE I Apparent k cat /K M values for the activation of caspase zymogens
It is formally possible that the presence of the N-peptide may to some degree restrict access to the active site, thereby imposing a discrimination toward protein substrates but not the short tetrapeptide substrates. To investigate this possibility we tested the ability of caspase-3, caspase-3 (D28A), and caspase-3 (D9A/D28A) to cleave the protein substrate PARP. Cleavage was monitored after addition of the caspase-3 constructs in concentration from 0.067 to 440 nM, a range which allowed a comparison of the amount of caspase required to cleave PARP in 30 min at 37°C. Although caspase-3 containing the fulllength N-peptide (D9A/D28A) cleaved PARP a bit more efficiently, the increase was considered marginal (Fig. 3). In addition, we tested the ability of the baculovirus caspase inactivator p35 to inhibit the different caspase derivatives. As it is evident from Table II the efficiency by which p35 inhibited the different caspase-3 derivatives was not affected to any significant degree and all showed comparable k obs values at the same p35 concentration.
Since the N-peptide did not influence the activity of the matured enzymes, we considered that it may influence zymogen activation, either in a defined zymogen activator system, or in cytosolic extracts. Table I demonstrates a 2-4-fold decrease in activation rate with the zymogen lacking a pro-peptide (S29M), however, we do not consider this a dramatic difference when comparisons of the magnitudes of the rates are made. To determine whether a cytosolic factor could influence activation via the caspase-3 N-peptide the endogenous zymogen was depleted with a mouse anti-caspase-3 monoclonal and the recombinant pro-caspase-3 mutant zymogens were added to the depleted extracts. Western blotting of the immunodepleted cytosolic extracts confirmed removal of at least 95% of the endogenous pro-caspase-3.
Cytosolic extracts reconstituted with recombinant procaspase-3 mutant zymogens were treated with cytochrome c plus dATP, to achieve caspase activation (Fig. 4A) and activation followed by monitoring cleavage of the reporter substrate Ac-DEVD-pNa which, in a whole extract, measures a combination of the caspases present. The response of the reconstituted extracts was identical, with the exception of the activation of the S29M reconstituted extract, which showed an activation rate about 1.5-fold higher than the others. When the identical amount of purified zymogens were activated by granzyme B (Fig. 4B), there were no observable differences in rate of activation in agreement with the close similarity of the activation rates (Table I). Taken together these results argue that the N-peptide of caspase-3 only minimally influences activity of the enzyme, or its activation.
Processing of Pro-caspase-9 -Although our data support a direct link between the initiator caspase-8 and the executioner caspase-3, others have suggested that the caspase-8 signal is indirectly passed on to pro-caspase-3 via a mitochondrial route (10) where caspase-9, the first caspase in the post-mitochondrial pathway (29), activates the caspase-3 zymogen. As shown in Fig. 5, caspase-9 processing occurs after the addition of caspase-8 to 293 cell extract. The zymogen of caspase-9 (48 kDa) was converted first to a 36-kDa form and then to one of 34 kDa. The first product is consistent with processing at Asp 330 (caspase-9 numbering) and the second represents processing at Asp 315 , previously demonstrated to be the cleavage utilized during processing of pro-caspase-9 (30). Cleavage at Asp 330 is reported to be mediated by activated caspase-3, but the enzyme responsible for processing at Asp 315 in vivo is unknown (30). Conceivably, either cut in caspase-9 could be due to the direct action of caspase-8.
To determine whether caspase-8 directly activated procaspase-9 during death receptor-dependent apoptosis, and if so, whether activated caspase-9 accelerated the processing of caspase-3, we performed two different experiments. We first asked whether caspase-9 processing required the presence of caspase-3. To address this question, processing of caspase-9 was examined in MCF-7 cells which are known to be deficient in caspase-3 and in transfected MCF-7 cells that express this family member (24), but which still undergo apoptosis without DNA fragmentation (23). Following treatment with TNF, caspase-3 negative MCF-7 cells failed to show evidence of caspase-9 processing. In the caspase-3 positive transfectants, however, conversion of caspase-9 was apparent after 2 h (Fig. 5B), although it does not appear to result in generation of the fragments typically associated with caspase activation. Consequently, caspase-8 activation initiated by TNFR-1 ligation (5, 31) is not associated with caspase-9 processing unless caspase-3 is present. This indicates that caspase-8 does not directly activate caspase-9 in whole MCF-7 cells. However, since the capacity of caspase-8 to activate caspase-9 may not be operative in all cell types, we then evaluated this hypothesis using our model system where the cytosolic extract was depleted of caspase-9. Although the addition of cytochrome c/dATP did not result in detectable caspase-3 activation (data not shown), the extract was fully responsive to exogenous caspase-8 (Fig. 6). In combination, these data provide further evidence that the cleavage of caspase-9 is mediated initially as a feedback from caspase-3 (32). However, more importantly caspase-9 is not cleaved directly by caspase-8 in vivo.
DISCUSSION
Caspases are commonly divided into apical and executioner subsets (1, 33). There exist two well characterized points at TABLE II Catalytic properties of caspase-3 mutants Caspase-3 mutants constructed to retain portions of the pro-peptide were compared for activity versus a synthetic substrate and for inhibition by the natural inhibitor p35. Inhibitory data for the S29M mutant were not obtained since this is essentially identical to the wild type: it contains no pro-peptide. which apical caspases initiate apoptotic signals. One is at the cell surface where members of the TNFR-1 family of death receptors transmit a signal across the cell membrane following receptor clustering. The second point of initiation follows the release of mitochondrial factors (34,35), and although this post-mitochondrial pathway is well documented it is unclear how the mitochondrion perceives the apoptotic signal. Nevertheless, anti-neoplastic drugs, genotoxic damage, and inhibition of cellular signal transduction pathways all seem to con-verge on the mitochondrial route (36). At each point of initiation the first recognizable biochemical event is specific caspase activation, and each initiation point utilizes distinct caspases. In the death receptor pathway(s) the apical caspase-8 (and possibly 10) transmits a proteolytic signal following autoactivation at the cytosolic face of the receptor (4 -6, 37, 38). The signal for activation appears to be local clustering of procaspase-8 that possesses enough activity in its zymogen to achieve autolytic proteolytic maturation (7,8,39). In the mitochondrial route specific or nonspecific delivery of cytochrome c (29, 40 -42) to the protein Apaf-1 results in recruitment and activation of caspase-9 (29). Both caspase-8 and caspase-9 have The background activity due to the added caspase-8 has been subtracted from all rates. As it is evident from the graph, no substantial influence of the presence of pro-caspase-9 on the rate of activation was observed. been demonstrated to act on in vitro translated pro-caspases-3 and -7, the executioner caspases whose activation correlates with apoptosis. Thus there are two potential routes to activate the executioner caspases, and in this context both caspases-8 and -9 can be thought of as initiators whose pathways converge at the execution phase of apoptosis.
Although caspases-8 and -10 can activate pro-caspase-3 in vitro, it has proven difficult to determine whether the apical caspases perform this function directly or indirectly, because previous studies have relied on in vitro translated zymogens or cytosolic extracts (6,31,37,43). We here demonstrate that caspase-8 and -10 can rapidly activate caspase-3. More importantly, focusing on caspase-8 we observe that activation proceeds with the same rate in a 293 cytosolic extract, eliminating a requirement for an intermediary component. Additionally, depletion of pro-caspase-9 from the extract has no impact on caspase activation by caspase-8, and MCF-7 cells deficient in caspase-3 failed to support processing of pro-caspase-9. Therefore, processing of pro-caspase-9 in death receptor-mediated apoptosis requires the presence and activation of caspase-3 which makes it a downstream event unlikely to play a major role in caspase activation.
It has been proposed that mitochondria are required to transmit the apoptotic signal generated by treatment of cells by agonistic Fas antibodies, but only in a small selection of cell lines (10). This would imply that caspase-8, the apical caspase of the Fas pathway, initiates a mitochondrial signal. In support of this, the generation of caspase activity initiated by addition of human caspase-8 to cytosolic extracts of Xenopus eggs is accelerated in the presence of mitochondria (44). However, the relevance of this to homologous systems is unclear, since it is not known whether Xenopus has a caspase-8, and the kinetics of activation of a putative Xenopus caspase-3 ortholog, or whether the caspase activity in Xenopus extracts is due to such an ortholog, have not been determined. The data presented above, in contrast, do not suggest any requirement for an intermediary between caspase-8 and caspase-3. Therefore, the question is whether the mitochondrial acceleration occurs in vivo, and whether caspase-8 must transmit its signal via mitochondria to the executioners in vivo. Currently the most valuable evidence for a role of mitochondria in apoptosis triggered by death receptor ligation comes from several studies investigating expression of ectopic or transgenic Bcl-2, which is hypothesized to operate by blocking mitochondrial-dependent apoptosis (reviewed in Ref. 36). Most investigators agree that Bcl-2 prevents apoptosis triggered by genotoxic damage, glucocorticoids, and chemotherapeutic drugs, but there are inconsistencies in the data describing the protective effect of Bcl-2 against apoptosis induced by death receptors (10,33,45,46). In a survey of several cell lines, Scaffidi et al. (10) noted that most are not protected by Bcl-2 from apoptosis triggered by agonist Fas antibodies. However, the use of agonist antibodies and immortalized cell lines may not be the best way to determine a role for Bcl-2. More significantly, T-cell apoptosis in vivo, which is dependent on physiologic Fas ligation, is unaffected in Bcl-2 transgenic mice (46). In contrast, death following injection of agonist Fas antibodies in whole mice was significantly retarded in Bcl-2 transgenic mice (47). These somewhat contradictory studies can be reconciled if some cells support direct transmission of caspase-8 to caspase-3, while others require a mitochondrial accelerator (10).
Since pro-caspase-9 is not processed by caspase-8, the mitochondrial accelerator must act upstream of the caspase-9 activator complex and, therefore, presumably upstream of the mitochondrion. Possibly the pro-apoptotic mitochondrial signal is activated by the action of caspase-8 on mitochondria, or a latent protein that activates mitochondria for apoptosis. The importance of the mitochondrial route in Fas death is not clear, since evidence suggests that only a minority of cell lines require mitochondria to transmit the caspase-8 signal. If direct transmission of the signal from caspase-8 to the executioner caspase-3 occurs in most cell lines, what is the advantage of a mitochondrial intermediate? Usually, levels of complexity are added to allow additional regulation points, as clearly evidenced by the evolution of the vertebrate blood coagulation cascade. It is not immediately clear what advantage cells would achieve by adding a level of regulation to the Fas-triggered death signal, but future studies will doubtlessly focus on this issue. Regardless, it would appear that those cells in the body that are primed to undergo apoptosis during education of the immune system have evolved death receptors, caspase-8, and the caspase-8 activator complex to allow rapid direct transmission of the death signal, bypassing any mitochondrial requirement, and that pro-caspase-3 is a major physiologic substrate of caspase-8 (Fig. 7). FIG. 7. Pro-caspase-3 activation. Currently there exist two recognized points at which apical caspases are activated to initiate apoptosis. Following TNFR-1 or Fas ligation, the initiator caspase-8 is activated by adapter-mediated recruitment to the receptor's cytosolic face (7,8). Alternatively, the initiator caspase-9 is activated following release of mitochondrial components to form the Apaf complex (34,35). Both activated initiators converge on the proteolytic activation of caspase-3. In death receptor-triggered apoptosis the main pathway (bold arrows) is direct activation of pro-caspase-3 by caspase-8. In some cell types an additional pathway (light arrow) may operate by caspase-mediated delivery of a signal (question mark) to mitochondria. The importance of the mitochondrial pathway in death receptor-triggered apoptosis is unknown, but apparently subordinate to the dominant, direct pathway in most cell types. This model predicts that deficiency of caspase-9 would not affect death receptor apoptosis that has been triggered by caspase-8 activation. | 6,437.2 | 1998-10-16T00:00:00.000 | [
"Biology"
] |
Relationship between Blood Volume, Blood Lactate Quantity, and Lactate Concentrations during Exercise
We wanted to determine the influence of total blood volume (BV) and blood lactate quantity on lactate concentrations during incremental exercise. Twenty-six healthy, nonsmoking, heterogeneously trained females (27.5 ± 5.9 ys) performed an incremental cardiopulmonary exercise test on a cycle ergometer during which maximum oxygen uptake (V·O2max), lactate concentrations ([La−]) and hemoglobin concentrations ([Hb]) were determined. Hemoglobin mass and blood volume (BV) were determined using an optimised carbon monoxide-rebreathing method. V·O2max and maximum power (Pmax) ranged between 32 and 62 mL·min−1·kg−1 and 2.3 and 5.5 W·kg−1, respectively. BV ranged between 81 and 121 mL·kg−1 of lean body mass and decreased by 280 ± 115 mL (5.7%, p = 0.001) until Pmax. At Pmax, the [La−] was significantly correlated to the systemic lactate quantity (La−, r = 0.84, p < 0.0001) but also significantly negatively correlated to the BV (r = −0.44, p < 0.05). We calculated that the exercise-induced BV shifts significantly reduced the lactate transport capacity by 10.8% (p < 0.0001). Our results demonstrate that both the total BV and La− have a major influence on the resulting [La−] during dynamic exercise. Moreover, the blood La− transport capacity might be significantly reduced by the shift in plasma volume. We conclude, that the total BV might be another relevant factor in the interpretation of [La−] during a cardio-pulmonary exercise test.
Introduction
Lactate is now recognised as a major metabolic intermediate and signalling molecule that is used for oxidative energy supply and is transported between different cells, tissues, and organs by means of transport proteins [1][2][3][4][5][6][7]. Lactate is widely accepted as a diagnostical marker and lactate kinetics or lactate concentrations ([La − ] are commonly used in the field of sports medicine for the assessment of endurance performance, e.g., during an incremental cardiopulmonary exercise (CPX) test [8]. Moreover, exercise prescriptions based on [La − ] allow for precise and predictable regulation of acute metabolic and cardiorespiratory responses during dynamic exercise, as is reflected in the previously postulated lactate turn point model [8,9].
However, [La − ] must always be considered in light of the prevailing production and elimination rates and numerous studies have shown that there are several factors that can significantly influence these rates leading to highly variable results. These include, for example, the applied protocol or workload characteristics [10][11][12] of a CPX test, the previous diet in relation to muscle glycogen [13][14][15][16], muscle fibre-type composition [17][18][19], the source of blood sampling [12,20,21], or cerebral lactate uptake [22].
Another important factor that has not yet been considered in this context may be the total blood volume (BV). This is surprising since the BV not only serves as a distribution Metabolites 2023, 13, 632 2 of 11 medium but also transports lactate to other cells and organs for the oxidative energy supply or gluconeogenetic metabolism.
It can be assumed that the concentration of any substance dissolved in a medium is dependent on both the size of the medium and the amount of substance dissolved in the medium. Usually, [La − ] is measured throughout an exercise test regardless of the medium, in this case, total BV. It is well known, however, that total BV not only differs greatly between individuals, e.g., due to training-induced volume expansion or a genetic predisposition [23][24][25][26] but also decreases by up to 10% during dynamic exercise mainly as a result of plasma volume (PV) shifts [27,28]. With respect to the latter, Davies et al. previously concluded that [La − ] above the lactate threshold should be corrected for this decrease in PV [29]. Moreover, since significantly more lactate is transported in the plasma than in the erythrocytes, especially during intensive exercise [30], PV losses might also have a considerably negative impact on the lactate transport capacity. Consequently, the total BV may have an impact on both inter-and intra-individual comparisons of [La − ]. Therefore, we hypothesise that the larger (or the smaller) the total BV, the lower (or the higher) the [La − ] tends to be in the course of an incremental ergometer test. In addition, if the BV is known, it is also possible to calculate the absolute lactate quantity (La − ) and thus, its impact on [La − ]. To the best of our knowledge, this has not yet been done. Therefore, the aim of this study was to determine the total BV and La − during an incremental CPX test on a cycle ergometer in healthy volunteers with heterogenous endurance capacities and quantify their influence on the measured [La − ] during dynamic exercise.
Participants
This was a secondary outcome analysis of a previously published descriptive crosssectional study [31] that reports preliminary observations. Twenty-six healthy, nonsmoking females with heterogenous endurance capacity and no history of cardiac disease were included in the study (see Table 1 for participant characteristics). The participants provided written consent after being informed of the study design, the associated risks, and their right to withdraw at any time. The study was conducted in conformity with the declaration of Helsinki and Good Clinical Practice and the study protocol was approved by the ethics committee of the University of Bayreuth in Germany (O 1305/1-GB). The data are presented as the mean values ± standard deviations. Min = minimum, Max = maximum, CI = confidence interval.
Study Design
After anthropometric measurements including analysis of body composition using a bioelectrical impedance analysis were conducted, a cubital venous blood sample was drawn for a full blood count as well as ferritin concentrations to exclude any iron deficiencies. The participants then performed an incremental CPX test on a cycle ergometer to determine the maximum power (P max ) and maximum oxygen uptake ( · VO 2max ). During this test, the
Anthropometry and Blood Sampling
Prior to the exercise test, lean body mass (LBM) and fat mass were measured twice consecutively using a bioelectrical impedance analyser (InBody 720, InBody Co., Seoul, Republic of Korea). Cubital venous blood samples (8 mL) were drawn after the participants rested for 15 min in an upright seated position. Heparinised blood samples were analysed using a fully automated hematology system (Sysmex XN 1000-1-A, Sysmex, Norderstedt, Germany) for red blood cells including hemoglobin concentration ([Hb]) and hematocrit (Hct). The serum ferritin and C-reactive protein (CRP) concentrations were determined by enzyme immunoassays [ferritin: LKFE1, CRP: highly sensitive-LKCRP1; ELISA & Immulite 1000 (Siemens Healthcare Diagnostics GmbH, Erlangen, Germany)].
Cardio-Pulmonary Exercise Test and Lactate Analysis
P max was determined using an incremental protocol on a cycle ergometer (Excalibur Sport, Lode, Groningen, The Netherlands). After a 3-min warm-up phase of 50 W, the mechanical power was increased by 50 Watts every 3 min (stepwise by 17, 17, and 16 Watts per minute) until subject exhaustion was reached. The VO 2 was determined via breath-by-breath technology (Metalyzer 3B, Cortex, Leipzig, Germany), and the · VO 2max ) was calculated as the highest 30 s interval before exhaustion. Capillary blood samples were taken from a hyperemised earlobe before exercise, every 3 min during exercise, and immediately at exhaustion to determine the [Hb] using a calibrated photometric analysis (HemoCue 201, Hemocue AB, Ängelholm, Sweden). At the same time points, capillary blood samples (20 µL) were taken from the other earlobe to measure the [La − ] using an enzymatic-amperometric approach (Biosen S-Line, EKF-Diagnostic, Barleben, Germany). Further blood samples for the determination of [La − ] were taken 1-, 3-, 5-and 7-min post-exercise. The maximum lactate concentration was defined as [La − ] max . The absolute lactate quantity (La − ) in mmol at the respective intensities was calculated as the product of [La − ] and BV and indexed for lean body mass. The [La − ] and La − at 60% of P max (P 60% ) and at P max were defined as [La − ] 60% and La − 60% and [La − ] end and La − end , respectively. The lactate transport capacity in the erythrocyte and in the plasma volume (PV) was calculated based on the blood volume at P max (BV end ), the Hct end, and the corresponding [La − ]. Additionally, the [La − ] and La − in the PV and the erythrocyte volume (ECV) at P max were separately estimated assuming a [La − ] ratio of 1:0.3 between the PV and the ECV [30].
Determination of Hemoglobin Mass and Blood Volume
The Hbmass, BV, PV, and ECV were determined using a carbon monoxide (CO)rebreathing procedure according to methods described in previous investigations [32][33][34]. In brief, an individual dose of CO (0.8-0.9 mL·kg −1 , CO 3.7, Linde AG, Unterschleißheim, Germany) was administered and rebreathed along with 3 L of pure medical oxygen (Med. O 2 UN 1072, Rießner-Gase GmbH, Lichtenfels, Germany) for 2 min. Capillary blood samples were taken before and 6 and 8 min post administration of the CO dose. The blood samples were measured for the determination of %HbCO using an OSM III hemoximeter (Radiometer, Copenhagen, Denmark). The Hbmass was calculated based on the mean change in %HbCO before and after the CO was rebreathed. As part of the equation to calculate changes in BV during the exercise period, the capillary [Hb] was converted to the venous conditions [35,36]. The Hct end was calculated as the quotient of [Hb] end and the mean corpuscular hemoglobin concentration (MCHC) at rest [37]. The BV was calculated according to the following formula where 0.91 = cell factor at sea level [38]: The BV at rest and at P 60% were defined as BV rest and BV 60% , respectively. The BV at maximum power was defined as BV end . For the calculation of the BV 60% , the [Hb], which was determined at rest and every 3 min during exercise, was interpolated for the respective exercise intensity, if necessary. Hbmass was measured twice on consecutive days and within 7 days after the ergometer test with a possible first test at least 2 h after the CPX test when the plasma volumes had returned to pre-exercise values [39]. Since the Hbmass does not change over short periods [40], the temporary offset determination of the [Hb] for the calculation of the BV is possible without compromising accuracy. For a detailed description and the accuracy of the methods see [32][33][34]. The typical error for Hbmass in our laboratory is 1.5%, which is comparable to previous investigations [35,41], while the typical error for BV is 2.5%.
Statistical Analyses
The data are presented as means and standard deviations. Statistical analysis was conducted using GraphPad Prism Version 8.0.2 (GraphPad Software, Inc., San Diego, CA, USA). Testing for normality was performed using the Shapiro-Wilk test. Pearson correlation coefficients or nonparametric Spearman correlations were computed to prove any relationship between the two variables. A paired t-test was computed to calculate the differences in La-with and without the BV shifts. Multiple linear regression was performed to predict the value of one dependent variable (e.g., [La − ]) based on two independent variables (e.g., BV and La − ). The level of significance was set to p ≤ 0.05.
(A), La − end and blood volume (BVend) (B), and [La − ]end and BVend (C). Data were indexed for lean body mass.
Discussion
This study aimed to calculate the influence of total BV and absolute lactate quantity on the measured [La − ] during an incremental CPX test on a cycle ergometer in healthy volunteers with heterogenous endurance capacities. Our findings confirm the theoretical assumption that both the blood volume and the lactate quantity have an impact on the resulting [La − ]. In addition, the exercise-induced BV shifts led to a significant 10.8% reduction in the lactate transport capacity.
The measured [La − ] in the blood is generally the result of lactate production (e.g., within the muscle cells) and lactate elimination (e.g., the diffusion of lactate into the PV and its distribution to other cells and organs for lactate clearance). The latter also includes an individual's total BV as a distribution space and the exercise-induced BV shifts. Previous investigations have already dealt in detail with various factors influencing lactate production and elimination rates [12,13,19,42,43], however, to the best of our knowledge, this is the first study that will focus exclusively on the potential role of the BV.
The role of blood volume as distribution space: In theory, if the exchange of lactate between cells remains constant, then a higher BV would consequently lead to lower [La − ] and vice versa. On the other hand, for a given [La − ], a higher La − would also be associated with a higher BV and vice versa. Our data demonstrate that both assumptions are equally true, however, there are quantitative differences between the two mechanisms.
Discussion
This study aimed to calculate the influence of total BV and absolute lactate quantity on the measured [La − ] during an incremental CPX test on a cycle ergometer in healthy volunteers with heterogenous endurance capacities. Our findings confirm the theoretical assumption that both the blood volume and the lactate quantity have an impact on the resulting [La − ]. In addition, the exercise-induced BV shifts led to a significant 10.8% reduction in the lactate transport capacity.
The measured [La − ] in the blood is generally the result of lactate production (e.g., within the muscle cells) and lactate elimination (e.g., the diffusion of lactate into the PV and its distribution to other cells and organs for lactate clearance). The latter also includes an individual's total BV as a distribution space and the exercise-induced BV shifts. Previous investigations have already dealt in detail with various factors influencing lactate production and elimination rates [12,13,19,42,43], however, to the best of our knowledge, this is the first study that will focus exclusively on the potential role of the BV.
The role of blood volume as distribution space: In theory, if the exchange of lactate between cells remains constant, then a higher BV would consequently lead to lower [La − ] and vice versa. On the other hand, for a given [La − ], a higher La − would also be associated with a higher BV and vice versa. Our data demonstrate that both assumptions are equally true, however, there are quantitative differences between the two mechanisms.
As demonstrated in the multiple linear regression analysis, both the BV end (β = −0.1244, p < 0.0001) and La − end (β = 10.78, p < 0.0001) significantly influence the [La − ] end . With regard to the correlation between the La − end and BV end ( Figure 1B), however, no significance was found. This finding might be explained by a plateau in net lactate release at maximum exercise indicating a disturbance in lactate exchangeability [44][45][46]. In contrast, the larger lactate quantity is not sufficient to align the [La − ] with different BV, which is confirmed by the significant negative correlation between [La − ] end and BV end ( Figure 1C). Therefore, a higher BV might have two opposing effects: First, it leads to a greater diffusion gradient allowing for more lactate to diffuse out of the muscle cell; second, it also increases the distribution space, which is reflected in the lower [La − ] ( Figure 1C). However, this does not consider simultaneous lactate extraction and net release within the same muscle fibre types of the same exercising muscle groups. In endurance-trained individuals, where this mechanism can be substantially enhanced, this would lead to a reduced systemic La − that further reduces the measured [La − ] in addition to their larger BV.
Contribution of ECV and PV to lactate transport: [La − ] are normally measured in the whole blood. However, erythrocytes show a lower [La − ] when compared to plasma [La − ]. This difference in [La − ] between plasma and erythrocytes was found to be 1:0.5 under resting conditions but is augmented by strenuous exercise, i.e., 1:0.2 [30]. Accordingly, both volumes are contributing to the distribution of lactate at submaximal exercise intensities. With increasing exercise intensity, however, the aforementioned ratio changes substantially making the plasma volume almost exclusively responsible for the lactate transport at maximal exercise [47]. The significant reduction in plasma volume further decreases blood lactate transport capacity. We have calculated that even when a more cautious ratio of 1:0.3 is used, the lactate transport capacity in the BV at maximum exertion was still significantly reduced by 10.8% exclusively as a result of the PV reduction ( Figure 2). It was previously argued that the erythrocyte membrane provides a barrier to the flux of lactate between PV and ECV during rapidly changing blood lactate levels [47]. Thus, in addition to the well-known thermoregulatory and cardiovascular disadvantages of a reduced plasma volume [48,49], our findings imply that it may also have a detrimental effect on lactate transport capacity.
Confounding factors: Two of our subjects with nearly identical La − end (1.02 vs. 1.04 mmol·kg −1 ) showed distinct differences in BV end when indexed for lean body mass (94.2 and 116.5 mL·kg −1 ), thus leading to different [La − ] end (10.9 and 8.9 mmol·L −1 ). However, these two participants also differed in their · VO 2max (36.2 and 59.4 mL·min −1 ·kg −1 ) and P max (3.2 and 5.0 W·kg −1 ), respectively. Since their La − end were identical, the differences in [La − ] end in these individuals can most likely be attributed to different blood volumes. Therefore, in order to understand the relationship between [La − ] and La − , a more holistic approach is required. As mentioned before, the maximum systemic lactate concentration, e.g., when measured in the capillary blood, is always the result of lactate production, exchange, and utilisation [7,50] and the absolute amount of lactate in the blood depends on the interaction between the aforementioned factors.
With regard to the role of total BV, both intra-and inter-individual factors have to be considered. For instance, it was demonstrated particularly in endurance-trained individuals that the lower blood [La − ] is usually the combined result of a training-induced decrease in the overall release of lactate from tissues to blood as well as an increase in clearance from plasma during exercise [51] equalling a lower absolute La − in the blood. This is most likely due to an increase in mitochondrial monocarboxylate transporters and enzymatic lactate dehydrogenase activity which in turn improves the oxidative capacity of the muscle cells [52]. In addition to their large BV, these adaptations most likely lead to chronically lower [La − ] during dynamic exercise.
Similar conditions can also be observed on the inter-individual level, e.g., in trained athletes from different disciplines. While athletes participating in sports requiring explosive muscular power, e.g., sprinting exercise, usually possess a larger quantity of high glycolytic fibre types, elite endurance athletes are characterised by a much higher percentage of oxidative fibres. Additionally, sprinters are characterised by a lower BV when compared to endurance-trained athletes [53]. Moreover, their PV losses are typically larger than during endurance exercise reducing the most effective distribution medium for La − even further [54]. This would suggest that the usually higher [La − ] found in sprinters at a higher power do not necessarily indicate a higher absolute blood La − when compared to endurance-trained athletes. The same would eventually apply for interindividual comparisons between untrained individuals with different BV, e.g., due to genetic predisposition [23].
Practical Implications: Although BV is not routinely determined in most exercise labs, this study highlights its general relevance in the context of lactate diagnostics and the interpretation of results. In terms of blood lactate diagnostics in exercise testing and training it should be noted, that the PV can increase by up to 15% as a result of systematic endurance training [25]; while the ECV can increase by up to 6% [55,56]. Notably, the PV significantly increases after just very short time periods (hours to days), thus increasing the potential distribution volume rapidly long before metabolic adaptations take place [57]. Therefore, the increases in BV should also be considered in the interpretation of changes in lactate kinetics, especially when (long-term) training interventions are conducted. This is independent of the fact that the percentage differences in La − as a result of the exerciseinduced PV shifts that we found in this study likely yield no practical importance for interpreting lactate kinetics. Moreover, our results underline the importance of ensuring an adequate hydration state before and during training and competition. Here, PV is important for supporting the homeostasis of cardiovascular and thermoregulatory systems [58]. For instance, it has been shown that an isotonic reduction in PV in turn leads to a reduction in sweat rate [59]. These findings may be of even greater importance when exercising in the heat [60] or monitoring exercise intensity using non-invasive biomarkers such as forehead sweat lactate secretion rate [61]. Lastly, it would be of great interest as to whether a threshold determination based on absolute systemic La − indexed for lean body mass rather than [La − ] could be a viable option in the context of lactate performance diagnostics.
Limitations: There are several limitations to this study. First, when interpreting our data, it is important to consider that our assumptions have been made on selected exercise intensities during an incremental CPX test, i.e., P 60% and P max . Second, it may well be that the dependencies between the BV, La − and [La − ] diverge with regard to different exercise modalities, e.g., during a Wingate test, high-intensity interval exercise, or a continuous moderate exercise, due to different lactate fluxes from working muscles to the bloodstream, different ratios between PV and erythrocytes, or different fluxes of lactate to consuming tissues [43]. Third, our study population consisted exclusively of heterogeneously trained female participants. Although we would hypothesise that the same mechanisms also apply to men, especially since it was demonstrated that their BV shifts are typically much larger [27,28,62], we have not controlled for the menstrual cycle which has been shown to have an influence on water retention, and thus plasma volume [63]. Moreover, interindividual differences in training status may also have affected the rate of lactate production and clearance during exercise thus affecting our results [64].
Conclusions
In this study, we evaluated the influence of blood volume and absolute systemic lactate quantity on the lactate concentrations during an incremental CPX test in healthy individuals. Our findings demonstrate that a higher BV was associated with a lower [La − ] at maximum exercise. Since the [La − ] between erythrocyte and plasma substantially differ during intensive exercise, acute plasma volume changes also have a substantial influence on the lactate transport capacity in the total blood volume. In addition to its influence on cardiovascular and thermoregulatory stability, our data indicate that another important property can be attributed to PV. | 5,334 | 2023-05-01T00:00:00.000 | [
"Biology"
] |
Development of a pulling machine to produce micron diameter fused silica fibres for use in prototype advanced gravitational wave detectors
A pivotal aspect in increasing the sensitivity of the Advanced LIGO detectors to allow the first gravitational wave detection, GW150914, was the installation of the monolithic fused silica suspensions. The 40 kg test mass suspended by four 400 μm fused silica fibres lowers the thermal noise as compared to initial LIGO. There is a desire for the use of thinner fibres to suspend smaller optics for other experiments of interest to the gravitational wave community that the current aLIGO fibre pulling machine is not capable of. We present here an overview of a new CO2 laser-based micron scale diameter fibre pulling machine developed at the University of Glasgow, based on the principals of our current aLIGO fibre pulling machine. We also discuss the upgraded fibre characterisation apparatus for dimensional and strength testing. It was found that fibres with a minimum diameter between 7.6 and 9.3 μm had an average breaking stress of 2.7 GPa and a Young’s modulus value of 63.3 GPa, which is lower than the accepted bulk value of 72 GPa. Fibres with an average diameter between 13.2 and 17.8 μm had higher breaking stress and Young’s modulus values ranging between 3.7–4.0 GPa and 71.8–75.9 GPa, respectively.
Keywords: gravitational wave, fused silica, Young's modulus, suspension (Some figures may appear in colour only in the online journal)
Introduction
Long baseline interferometers are currently used to detect gravitational waves radiating from massive astronomical objects in our universe by sensing the displacement of the two suspended mirrors at the end of each arm [1][2][3]. The Laser interferometer gravitational-wave observatory (LIGO) Scientific Collaboration announced in February 2016 their first detection of a gravitational wave signal at the two advanced LIGO (aLIGO) detectors [4] in Hanford, Washington and Livingston, Louisiana that was radiated from the merger of two black holes-GW150914 [5]. Two black holes of 36 and 29 solar masses, respectively, merged to create a single black hole of 62 solar masses. During the merger, three solar masses worth of energy was radiated away in the form of gravitational waves. A second detection, GW151226 [6], was reported a few months later with two lighter black holes of 14.2 and 7.5 solar masses coalescing to form a 20.8 solar mass black hole. During the second operating run, there were numerous confirmed detections. Notably, the third observed signal of a binary black hole coalescence-GW170104 [7], the first triple detector detection that included both aLIGO detectors and the advanced Virgo detector [2] in Italy-GW170814 [8], the first binary neutron star (BNS) signal-GW170817 [9] and the lightest binary black hole coalescence observed so far-GW170608 [10].
A key part in detecting gravitational waves was the upgrades that took place to bring the detectors from its first generation status, initial LIGO [11], to its second generation status, aLIGO. One key new technology implemented in aLIGO is the quadruple stage suspension system with a monolithic fused silica final stage to lower several noise sources [12]. These suspensions consist of a four stage pendulum system to limit the seismic noise. The material chosen for the optics and fibre suspensions was fused silica to limit the thermal noise [13].
A key property of fused silica is that it has an extremely low mechanical loss. Fused silica is also extremely strong with a high tensile stress, with aLIGO fibres failing at around 4.4 GPa [14], which is significantly larger than the operational stress of 780 MPa to which these are loaded to [15].
There is a desire to produce shorter and thinner fibres than that used in the aLIGO suspensions for advance prototype test experiments. The Sagnac speed meter (SSM) is currently under construction at the University of Glasgow [16]. The aim of this proof of concept experiment is to reduce back-action noise to allow better sensitivity in the low frequency region than is possible with a Michelson interferometer. This experiment aims to utilise fused silica fibres of diameters of 10 and 20 μm to suspend 1 g and 100 g optics, respectively. At the Albert Einstein Institute (AEI) in Hannover, the AEI 10 m prototype interferometer [17] is looking at increasing sensitivity through the use of squeezed light and will also utilise 20 μm fused silica fibres to suspend 100 g optics. To suspend these optics, a new fused silica fibre pulling machine dedicated to producing short and thin fused silica fibres was developed together with suitable fibre characterisation apparatus.
Design of pulling machine
The production of silica fibres for gravitational wave suspensions has evolved through several generations and bespoke processes and hardware have been developed [18][19][20]. An early example of developed hardware would be the production of silica ribbons that were produced by heating the stock with an oxygen-hydrogen torch until molten with two arms pulling the stock apart [18].
The pulling machine discussed in this paper, renderings shown in figure 1, is based on similar concepts to the aLIGO silica fibre pulling machine that was designed and developed in Glasgow [19]. In principle, a rod of fused silica stock is held in place with two clamps, one of which is attached to a moving stage and the other fixed in place. A CO 2 laser is directed around an optical bench with mirrors and passed through a lens system to a cone tip mirror, referred to as an 'axicon', that spreads the beam out to a conical mirror. The beam travels from this conical mirror as a cylindrical beam to another conical mirror, referred to as the feed mirror. This mirror focuses the beam onto the fused silica stock. Once molten, the pulling stage is moved to allow the fibre to be pulled from the stock.
The pulling stage is a Newport IMS-400LM motorised stage, connected to the Newport XPS-DRV02 Driver. The stage has a maximum velocity and acceleration of 500 mm s −1 and 26 000 mm s −2 , respectively, with a total travel range of 400 mm. The feed mirror is mounted on two Thorlabs MTS50-Z8 motorised stages, each connected to a TDC001 Thorlabs APT-DC Servo Controller. These stages have a maximum velocity and acceleration of 2.4 mm s −1 and 4.5 mm s −2 , respectively. The total travel range of each stage is 50 mm. The drivers for the pulling stage and the feed stages are controlled by a custom LabVIEW program. A text file can be uploaded to the program containing values for the velocity, acceleration and time variables to drive the pulling stage. The program also utilises the internal position encoders of the stages to allow absolute and relative position movements at velocities and accelerations predetermined by the user.
The CO 2 laser can be controlled via either a manual controller or through the LabVIEW program. The lens system consists of a plano-concave and plano-convex lens with focal lengths of 100 mm and 110 mm, respectively. They are positioned on the bench to give a beam waist around the silica stock of approximately 105 μm.
There are two main design changes between the thin fibre pulling machine and the aLIGO pulling machine. Firstly, the pulling stage is parallel instead of perpendicular to the bench. This is due to the pulling stage operational requirements as well as gravity affecting the maximum acceleration of the pulling stage. The second change is the use of an axicon, instead of a rotating 45° mirror.
The silica stock is held in place via metal collets with a 2 mm diameter hole that sits in a steel chuck housing. The collets are hand tightened with a nut that sits over it. This can be seen in figure 3. One clamp is attached to the back of the faceplate of the axicon mirror and the other on the pulling stage. The clamps are sitting on X-Y translation stages to allow the alignment of the silica stock. Extraction of the fibre from the pulling machine is achieved via the use of a fibre cartridge. The cartridge consists of two aluminium rods that connect the two chucks that the fibre ends sit in. Connecting the two ends creates one robust unit that can then be extracted out of the holders of the pulling machine to be stored away. Extracting the fibre out of the cartridge can be achieved by cutting with the light from a CO 2 laser.
Alignment
Ensuring that the beam hits the tip of the axicon and not at an angle or offset is critical for uniform heating around the silica stock. Failure to due so will result in an uneven distribution of the beam around the silica stock that can have an effect on the shape of the fibre that is produced. Examples of misalignment can be seen in figure 5. Proper alignment is achieved by placing irises and targets along the beam path and lens holders to ensure the beam is parallel to the optical bench. The beam is then directed onto the tip of the axicon via the vertical and horizontal translation stages that are adjusted using micrometers. The two conical mirrors are aligned mechanically to each other to ensure that they are coaxial. The axicon is fixed to a plate holder that is machined such that it sits centrally to the fixed conical mirror. Spirit levels are used to ensure that the pulling stage and conical mirrors are parallel to the optical bench.
A second method to check whether the faceplate the axicon is attached to is perfectly perpend icular to the bench can be used. That is to project a far field image of the beam with the alignment laser after it reflects off the first conical mirror. If the beam is hitting the axicon at the tip perfectly and the axicon is sitting in the correct position, a perfect circle should be projected in the far field. If not, then a non-circular pattern will occur. This is shown in figure 5. This could occur if there is an angular misalignment of the cylindrical spacers that hold the two face plates together.
The coaxial alignment of the two conical mirrors can be achieved by clamping in an opaque rod in place of silica rod and observing the distribution of the alignment beam around the stock. If everything is perfectly aligned, there should be a narrow beam going all the way around the circumference of the rod. If the feed mirror is set at an angle, then the alignment beam will take the shape of an ellipse going around the stock.
The final check to ensure that the components are all aligned properly is to heat up a piece of silica stock and see if there is any deformation of the stock during the heating process. If the clamps are slightly out of alignment, then the silica stock will be misshapen in the heated region indicating that the clamp on the pulling stage needs a slight translation.
Pulling procedure
There are currently two methods to producing fibres from the pulling machine, one of which has significant advantages over the other: absolute position movement and velocity profile movement. The pulling stage and the feed mirror stage can be controlled such that the stages would move to a given position at a set acceleration and velocity. The pulling stage has a position range of −200 mm to +200 mm, giving a total travel of 400 mm. This total travel is reduced by approximately 30 mm to allow the extraction of the fibre from the clamp holders. The feed mirror has a position range of 0 mm-50 mm. Once the silica stock is heated to its molten temperature, the stage can then travel to a position in its range to produce a fibre.
Absolute position method
This method of fibre production involves directing the pulling stage to move to a fixed position at a constant velocity. The CO 2 laser heats around a circumference of the fused silica stock and through the LabVIEW program, the stage can be controlled to move to a specific position between −200 and +200 to produce a fibre.
All fibres produced showed similar artefacts along the length of the fibre. Examples of this are seen in figure 6 where the right hand side of the graph represents the end of the fibre that was on the fixed clamp and the left hand side of the graph is representing the end of the fibre that was on the moving pulling clamp. At the start of the pull, the fibre would have a 'bump' that would appear where the fibre will thicken slightly after initially pulling down before tapering down to the end of the pull. This would not be an ideal fibre that could be used to suspend optics as these artefacts could have an affect on the dynamics of the suspension such as the bending points of the fibre [12]. This is a very common artefact that can be reduced by varying the velocity of the pulling stage during the pull.
Using this method for the production of 5 mm long 100 μm diameter fibres for the potential use in gravimeters has been attempted on this machine, with further development to be looked at in the future.
Velocity profile method
The pulling stage velocities and accelerations can be programmed to change mid-pull via the use of a velocity profile. The velocity profile is a file that is uploaded to the pulling stage that contains three columns: stage velocity, stage acceleration and travel time at stated velocity.
The stage will accelerate (or decelerate) to the desired velocity and will travel at this velocity for the stated time before moving onto the next value.
Depending on the size of stock used and the desired fibre diameter, a two stage pull may be required. This consists of pulling down the silica stock first to an intermediate diameter, then pulling down from this new diameter stock to the desired diameter fibre. This method of producing fibres resulted in the production of repeatable fibres. The two artefacts, tapering and the 'bump', that were experienced in the absolute position fibres could be minimised by fine tuning the velocity profiles.
A range of diameters can be achieved using the same velocity profile by increasing or decreasing the velocity of the feed mirror; the faster the feed mirror, the more stock the fibre has to pull from and vice versa. A selection of fibres are shown in figure 7 that have a minimum fibre diameter ranging between 10 and 15 μm. The feed mirror motor stage was set to move at a constant 0.05 mm s −1 while the second stage pull is happening.
The velocity profile that produced the fibres in batch 1 was designed initially for use in producing fibres for suspending optics in the SSM [16]. These fibres satisfied the thermal noise requirements and were therefore used as a starting point for further modifications in batches 2-4. Best attempts were made to minimise the 'bump' artefact that was previously described in figure 6.
Characterising fibre diameter and Young's modulus
Characterisation of fused silica thin fibres produced for the purpose of suspending optics in gravitational wave detectors is essential to understand the nature of the fibres. A key property of fused silica is the high tensile strength [14]. This is evident in the design of the monolithic suspension installed in the aLIGO detectors that helped increase the sensitivity of the detector by a factor of 10 [12,21].
Fibre profiles
Fibres that were produced for the aLIGO monolithic suspensions were profiled to determine the diameter of the fibre using a dimensional characterisation apparatus [20]. This apparatus, referred to as a fibre profiler, uses a shadow measurement concept to obtain the fibre diameter. The thin fibres discussed in this section were profiled with an upgraded camera magnification and resolution, shown in figure 8, to reduce the error associated with the fibre diameter due to focusing and depth of field with a maximum magnification of ×28 and resolution of 1280 × 1024.
The profiler stage utilises a 0.1 mm positional encoder to allow precise 0.1 mm intervals as the camera travels up the fibre. This allows a greater number of measurements to be taken in the thin section of the fibre. Figure 9 shows the range of magnifications available, with ×6 being the high magnification of the aLIGO profiler and ×28 being the maximum magnification of the upgraded profiler. The magnification used will depend on what diameter fibre and stock is in the profiler or depending on what aspect of the fibre is being investigated. The fibres discussed in this section were profiled with a magnification of ×12 to allow the 450 μm stock section to be profiled with the thin section of the fibre. If only the thin section of the fibre was desired, then the maximum magnification can be used.
The fibres used in this paper were split into four separate batches. Each batch was pulled using different pulling velocity profiles to provide a wide diameter range of tested fibres. Figure 7 shows the profiles of a selection of fibres that were pulled from batch 1.
Young's modulus measurements
The Young's modulus of the fibre, Y, is the ratio of the stress and strain placed on the fibre: where the stress, σ is defined as: where F is the force applied on fibre and A min is the minimum cross-sectional area of the fibre. The strain, ε, is defined as: where ∆L is the total extension of the fibre and L is the length of the fibre. The error for the strain of the fibre, δ , can be expressed as: The error associated with the Young's modulus, δY , can be defined by: This can only be applied if the fibre diameter was uniform all the way through the thin section of the fibre. This is not the case as there is some fluctuation that occurs during the fibre pulling process. The Young's modulus can instead be calculated by breaking up the fibre into equal intervals determined by the profiler stage positional encoder divisions. The extension of the nth interval, ∆L n , is calculated via: ∆L n = L n F YA n (6) where A n is the area of the segment and all other symbols hold prior meanings. The total extension of the fibre, ∆L, can then be calculated via: Equating the theoretical extension of the fibre with the experimental extension of the fibre will yield the Young's modulus of the fibre. The breaking force of the fibre is obtained via stretching the fibre to the point of destructive failure. The fibre cartridge is placed into the clamps of the strength tester. One clamp is attached to a load cell and the other to a plate with a lead screw through it. The lead screw is attached to a stepper motor and stretches the fibres to exert a force on the load cell. The position of the moving plate is read out via a magnetic encoder. This allows the position of the plate to be read out at the point of fibre failure. Both the encoder and load cell is read out to a custom LabVIEW program. Literature value for the Young's modulus of bulk fused silica is 72 GPa [22]. The range of minimum diameter, breaking stress and Young's modulus values are shown in table 1. The range of Young's modulus values is also shown in figure 10.
Batch 1 fibres, which consisted of a total of 13 fibres, showed a tendency to break at stresses lower than 3 GPa, with an average of 2.7 ± 0.1 GPa breaking stress. This gave a Young's modulus average value of 63.3 ± 2.7 GPa. One fibre in particular had a low Young's modulus value of 48.0 ± 2.2 GPa. This fibre had a sharp 2 μm dip in diameter towards the end of the fibre as it tapers up to the stock. This is a possible reason for the lower breaking stress of this fibre. This fibre was therefore treated as an outlier and not included in the average Young's modulus shown in table 1, but is shown in figure 10. Further investigation is needed to determine how significant an artefact of this nature effected the strength of the fibre and its resulting Young's modulus value. Including this fibre into the average Young's modulus of this batch lowers it by approximately 2% to 62.2 ± 2.7 GPa. Fibre batches 2-4, with each batch consisting of five fibres all gave an average value that was in the region of the expected Young's modulus value for fused silica. Batch 1 fibres had the smallest diameter out of all that were tested.
The error associated with the Young's modulus is calculated by combining the errors associated with the stress and strain applied on the fibre. The error is dominated by the fibre diameter error. This is calculated from the systematic error associated with the diameter measurements. The standard deviation of the systematic errors for all points along the fibre is calculated and applied to the fibre diameter.
Further non-destructive investigation with a fibre pulled in the same conditions as batch 1 was also looked at. A fibre with a minimum diameter of 8.3 ± 0.2 μm was stretched and It has been suggested that the surface layer of fused silica has a thickness of 1 μm [23,24]. It is therefore possible that this indicates the mechanical structure of the surface layer, which accounts for a significant 21%-37% of the diameter at its thinnest point, has influence on the breaking stress and Young's modulus of the fibres. This is an area of ongoing interest and research is ongoing into surface layers of thin fibres.
Conclusion
A machine dedicated to the production of thin fused silica fibres was developed and characterised. This machine can produce fibres of a wide range of diameters, down below 10 μm if desired. Equipment to characterise the thin fibres, profiler and strength tester, were also developed and modified. The high magnification lens system on the profiler allows accurate profiling of the fibre diameter and the magnetic encoder installed on the strength tester allows accurate reading of the fibre extension under load. Thin fused silica fibres are being used in several experiments related to future gravitational wave detectors. Understanding the properties they hold is therefore extremely beneficial for future experiments. It was found that fibres with a minimum diameter below 10 μm had Young's modulus values ranging between 59.8 ± 2.3 and 68.6 ± 4.8 GPa. Fibres tested above a minimum diameter of 10 μm showed Young's modulus values that were closer to the bulk value. These ranged between 71.8 ± 1.8 to 75.9 ± 3.1 GPa. Further research into surface layers of thin fibres is ongoing. | 5,318.6 | 2018-07-16T00:00:00.000 | [
"Physics",
"Engineering"
] |