Design and Realization of Band Pass Filter in K-Band Frequency for Short Range Radar Application
Short range radar (SRR) uses the K-band frequency range in its application. The radar requires high range resolution, so the applied bandwidth is 1 GHz. A filter is one of the devices used to ensure that only the predetermined frequency band is received by the radar system, and it must have a wide operating bandwidth to meet the radar's specification. In this paper, a band pass filter (BPF) is proposed. It is designed and fabricated on a RO4003C substrate using the substrate integrated waveguide (SIW) technique, resulting in a wide bandwidth in the K-band, centered at 24 GHz. Besides the bandwidth, the insertion loss, return loss, and dimensions are also analyzed. The simulated results of the bandpass filter are a VSWR of 1.0308, a return loss of -36.9344 dB, and an insertion loss of -0.6695 dB. The measured results show a VSWR of 2.067, a return loss of -8.136 dB, and an insertion loss of -4.316 dB, while the bandwidth is reduced by about 50% compared with the simulation. The differences between simulation and measurement are mainly due to the imperfect fabrication process.
I. INTRODUCTION
In the original K-band, there is a water vapor absorption line at 22.2 GHz, which causes serious attenuation in some applications, and the radar echo from rain can limit the capability of radars at these frequencies [1]. However, the K-band remains attractive for small radars used in applications that do not require long-range detection. Several fields, such as the automotive industry, utilize short range radar (SRR) for road safety and intelligent transport systems [1]-[3].
Automotive short range radar (SRR) provides various functions to increase driver safety and convenience. For an SRR to have sufficiently high range resolution to detect small objects at close range, it requires a wide bandwidth. Wide bandwidth significantly improves range resolution, which results in better separation of objects; higher range resolution also improves an SRR system's minimum detection distance.
Several devices make up the SRR system, one of which is the filter. A filter with certain specifications is needed for use in SRR systems. A filter design with low insertion loss, small size, and limited cost is essential for the manufacture of microwave systems. Unfortunately, traditional technology, either planar or non-planar, is incapable of providing all these characteristics at the same time. Rectangular waveguides, for instance, present low insertion loss and good selectivity [4]. Recently, the substrate integrated waveguide (SIW) has become a popular method for designing microstrip filters. This method behaves like a conventional rectangular waveguide and can be used in the K-band frequency range to meet the bandwidth and filter-characteristic requirements, including insertion loss, return loss, and minimization of the filter dimensions. Figure 1 shows the similarity of the electric field distribution in the SIW and the equivalent conventional waveguide. In the SIW filter structure, via sequences connect the top and bottom conductors, forming a cavity wall, while microstrip transmission lines feed the RF input and output. The cavity wall consists of via rows that provide out-of-band rejection, making it well suited to radar applications [6]. The microstrip transmission line also guides the signal entering the device; what distinguishes the SIW from other microstrip structures is the presence of the two rows of metal cylinders, which prevent low-frequency signals from propagating in the SIW [7]. A wide operating bandwidth can also be achieved with the SIW, as reported in [8]-[9]: a hybrid, periodically drilled SIW (PDSIW) structure is proposed in [8], while [9] uses a substrate integrated plasmonic waveguide (SIPW) concept to obtain a wide bandwidth.
In this paper, a design of a wideband band pass filter (BPF) using the SIW technique is proposed. The device is designed and fabricated on a RO4003C substrate and operates in the K-band with a center frequency of 24 GHz.
A. Design Methodology
The proposed bandpass filter is intended to pass signals at the center frequency of 24 GHz with a bandwidth of 1 GHz, as part of the SRR block system. The flow chart of the filter design stages is depicted in Figure 2. It starts by determining the filter specification and dimensions, followed by simulation and realization. The filter performance must meet target values for parameters such as insertion loss, return loss, and VSWR. After determining the specifications, the next step is to calculate the dimensions of the filter. This process includes determining the order, the via diameter, and the distance between the vias using predetermined formulas. Afterward, a simulation is run to obtain the S-parameter values of the design with the dimensions calculated in the previous stage. If the simulation results do not match the specifications, optimization is carried out; if they meet the requirements, the design is fabricated. The fabricated filter is then measured, the results are compared with the simulation, and the differences between the two are analyzed.
B. Substrate Integrated Waveguide (SIW)
The characteristics of the SIW are similar to those of a rectangular waveguide. Combining a non-planar circuit (waveguide) with a planar shape (microstrip) offers various advantages: SIW structures retain the characteristics of a non-planar waveguide, such as low insertion loss, low radiation loss, small EM interference, and low-cost fabrication.
The thin dielectric substrate of the SIW does not allow transverse magnetic (TM) modes to resonate; therefore, only the transverse electric (TE) modes can transmit effectively in the SIW [10], [11]. Figure 3 shows the parameters that define the via wall: the via diameter, the spacing between vias in a row, and the distance between the rows, which are found from the design equations in [12]. An SIW filter is then formed by an additional set of vias placed between the via walls; Figure 4 shows the diameter of and distance between these filter vias, found from equations (4) and (5). The resulting values are then substituted as parameters of the Chebyshev frequency response in equations (6) and (7) [14].
Equations (6) and (7) are then substituted into equation (8), which calculates the final impedance of each stage [14]. The dimensions of the filter vias are determined from the corresponding design equations [14]. To match the impedance between the feed and the SIW section, a tapered transition (taper) is introduced to connect the two; its dimensions are determined using equations (13) and (14) [13].
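The design equations themselves did not survive the text extraction. For orientation only, a sketch of the via-wall relations commonly cited in the SIW literature (not necessarily the exact equations (1)-(3) of [12]) is given below, with d the via diameter, p the via pitch, a_SIW the physical spacing between the via rows, lambda_g the guided wavelength, and epsilon_r the substrate permittivity:

```latex
% Commonly cited SIW design rules (a sketch, not the paper's own equations):
a_{\mathrm{eff}} = a_{\mathrm{SIW}} - \frac{d^{2}}{0.95\,p}, \qquad
f_{c,\mathrm{TE}_{10}} = \frac{c}{2\, a_{\mathrm{eff}} \sqrt{\varepsilon_{r}}}, \qquad
d < \frac{\lambda_{g}}{5}, \quad p \leq 2d .
```

The first relation maps the via-wall structure to an equivalent rectangular waveguide of effective width a_eff, from which the TE10 cutoff follows; the inequalities keep leakage between vias negligible.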
Before determining the dimensions of the filter, the first consideration is the filter order. Based on the specifications, the most suitable frequency response is Chebyshev, with a ripple of 0.1 dB. The filter order (n) is determined from the corresponding equations. The next step is to calculate the diameter of and distance between the via walls, then the diameter of and distance between the filter vias. After that, the ratio between the length and width of the taper is calculated. The resulting 3-pole Chebyshev SIW filter is shown in Figure 5, and the detailed parameter values are listed in Table 1.
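As an illustration of the order calculation, a minimal Python sketch of the standard Chebyshev order formula follows. The 0.1 dB ripple matches the paper; the stopband specification in the example call (30 dB attenuation at a normalized frequency of 2.0) is an assumed value for demonstration, not taken from the paper.

```python
import math

def chebyshev_order(ripple_db, stopband_atten_db, omega_s):
    """Minimum Chebyshev low-pass prototype order for a given passband
    ripple, stopband attenuation, and normalized stopband frequency
    omega_s (> 1)."""
    num = math.acosh(math.sqrt((10 ** (0.1 * stopband_atten_db) - 1)
                               / (10 ** (0.1 * ripple_db) - 1)))
    return math.ceil(num / math.acosh(omega_s))

# Illustrative spec only: 0.1 dB ripple, 30 dB stopband at omega_s = 2.0.
print(chebyshev_order(0.1, 30.0, 2.0))  # -> 5
```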
The simulation results at the 24 GHz center frequency show an insertion loss of -4.0437 dB, a return loss of -4.0820 dB, a VSWR of 4.337, and a bandwidth of 256 MHz. These results did not meet the initial specifications, so an optimization step was carried out. Optimization by adding vias was performed to obtain nearly ideal S11 and S22 parameters. Vias were added to guide the incoming waves from the feeder and taper through the region containing the filter vias, minimizing unwanted scattered waves.
In addition, the vias at the very ends of both sides of the filter, close to the taper and feeder, were shifted by 0.8 mm to match the shape of the feeder and taper and thus guide the incoming waves directly. The optimized design is shown in Figure 6 and the dimensional changes in Table 3. Figure 7 depicts the optimization result: the S21 (insertion loss) changes slightly but still meets the required value. At 23.5 GHz, the insertion loss decreases from -0.7120 dB to -1.1086 dB; at 24.5 GHz, the S21 decreases from -1.2473 dB to -1.2689 dB; while at the center frequency, the S21 changes from -0.9124 dB to -0.6695 dB. The lower stopband edge, previously at 23.38 GHz, shifts slightly to 23.382 GHz, while the upper stopband edge shifts from 24.6320 GHz to 24.6180 GHz. Moreover, the S11 (return loss) changes significantly but still meets the specification, and the resulting curve approaches the ideal shape.
At 23.5 GHz, the S11 changes from -16.9193 dB to -10.0260 dB; at 24.5 GHz, it changes from -16.2063 dB to -19.7888 dB; while at the center frequency, it changes from -12.7049 dB to -36.9344 dB. Regarding the stopband, the lower edge shifts from 23.424 GHz to 23.4690 GHz, while the upper edge, previously at 24.6350 GHz, shifts to 24.5990 GHz. Figure 8 shows the VSWR, which changes slightly but still meets the specification. The VSWR at 23.5 GHz was previously 1.3319 and is now 1.9209; at 24.5 GHz it was previously 1.3602 and is now 1.2283; while at the center frequency it changes from 1.6028 to 1.0308. The lower stopband edge, previously at 23.4190 GHz, shifts to 23.4870 GHz, while the upper edge shifts from 24.64 GHz to 24.6060 GHz.
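Since VSWR and return loss are both functions of the reflection coefficient magnitude, such figures can be cross-checked with a short conversion; a minimal Python sketch (function name ours):

```python
def vswr_from_s11_db(s11_db):
    """Convert an S11 magnitude in dB to VSWR."""
    gamma = 10 ** (s11_db / 20.0)        # |reflection coefficient|
    return (1 + gamma) / (1 - gamma)

# Simulated mid-band S11 from the paper:
print(round(vswr_from_s11_db(-36.9344), 4))  # ~1.0289
```

This is close to the reported mid-band VSWR of 1.0308; the small difference presumably comes from rounding or from the VSWR and S11 being read at slightly different frequency samples.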
A. Fabrication
The proposed SIW filter is fabricated on a RO4003C substrate with 0.813 mm thickness, as shown in Figure 9. The fabricated bandpass filter is measured using an N9918A microwave analyzer, which has a measurement frequency range from 30 kHz to 26.5 GHz. The measurement is carried out to obtain the technical data of the proposed filter design, namely insertion loss, return loss, and VSWR. The measured data are then compared with the design specifications.
B. Measurement
Before the measurement, a calibration is carried out using the calibrator shown in Figure 10. In this step, a system loss of -0.8109 dB is found that affects the measurement results; it is marked with a red box in Figure 10. Figure 11 shows the measured S21 (insertion loss). Despite the -0.8109 dB loss of the measurement system, the device's performance still meets the pre-determined specification from 23.5 to 24.5 GHz. Accounting for the system loss, the S21 is -3.338 dB at 23.5 GHz, -4.814 dB at 24.5 GHz, and -4.316 dB at 24 GHz. The graph also shows a change in the measured bandwidth. Figure 12 shows the measured S11 (return loss). In the 23.5 to 24.5 GHz range, the values obtained still meet the specification target once the -0.8109 dB calibration loss is deducted.
C. Results and Discussion
Accounting for the system loss, the S11 is -10.139 dB at 23.5 GHz, -10.969 dB at 24.5 GHz, and -8.136 dB at 24 GHz. As in the S21 measurement, there is a shift in the passband, and the bandwidth is visibly reduced compared to the simulation. Figure 13 shows the measured VSWR: 1.833 at 23.5 GHz, 1.649 at 24.5 GHz, and 2.067 at 24 GHz. Overall, all results in the 23.5 to 24.5 GHz range still meet the requirements specified earlier. It is noted, however, that the measurement results are far from ideal: the measured operating frequency is slightly shifted and the bandwidth is reduced compared to the simulation.
Although the measured insertion loss, return loss, and VSWR still meet the initial specifications, the measured bandwidth differs significantly from the simulation. The performance summary is given in Table 3. The bandwidth discrepancy observed in the measurement can be attributed to fabrication factors. According to the design equations, the diameter of the filter vias and the distance between them influence the resulting bandwidth. The taper dimensions also have a significant effect on the bandwidth and the other parameters, because this section matches the feeder to the SIW section.
Furthermore, an imperfect drilling process on the substrate also contributes to the discrepancies. If the drilling is done manually, size or placement inaccuracies may affect the measured device performance. In other words, high-precision fabrication is needed when prototyping a bandpass filter with the SIW method. In addition, fabrication losses may arise from poor soldering of the connectors, which increases the attenuation of the filter.
CONCLUSION
We have designed and realized a bandpass filter using a substrate integrated waveguide structure. The design agrees well with the specification for short-range radar applications with a working frequency of 24 GHz. The realized bandpass filter performs well in terms of insertion loss, return loss, and VSWR. Nevertheless, the bandwidth is reduced by about 50% compared with the simulated bandwidth. This is caused by fabrication steps, such as hole drilling, that require precision; the change of the taper dimensions during fabrication also influences the bandwidth.
Interspecific common bean population derived from Phaseolus acutifolius using a bridging genotype demonstrates useful adaptation to heat tolerance
Common bean (Phaseolus vulgaris L.) is an important legume crop worldwide and a major nutrient source in the tropics. Common bean reproductive development is strongly affected by heat stress, particularly overnight temperatures above 20°C. The desert tepary bean (Phaseolus acutifolius A. Gray) offers a promising source of adaptive genes due to its natural acclimation to arid conditions. Hybridization between the two species is challenging, requiring in vitro embryo rescue and multiple backcrossing cycles to restore fertility. This labor-intensive process constrains the development of the mapping populations necessary for studying heat tolerance. Here we show the development of an interspecific mapping population using a novel technique based on a bridging genotype derived from P. vulgaris, P. acutifolius, and P. parvifolius, named VAP1, which is compatible with both common and tepary bean. The population was based on two wild P. acutifolius accessions, repeatedly crossed with Mesoamerican elite common bush bean breeding lines. The population was genotyped through genotyping-by-sequencing and evaluated for heat tolerance by genome-wide association studies. We found that the population harbored introgressions from wild tepary covering 59.8% of the genome, but also genetic regions from Phaseolus parvifolius, a relative represented in some early bridging crosses. We found 27 significant quantitative trait loci, nine located inside tepary introgressed segments, exhibiting allelic effects that reduced seed weight and increased the number of empty pods, seeds per pod, stem production, and yield under high-temperature conditions. Our results demonstrate that the bridging genotype VAP1 can intercross common bean with tepary bean and positively influence the physiology of the derived interspecific lines, which displayed useful variance for heat tolerance.
Introduction
Common bean (Phaseolus vulgaris L.) is the most widely consumed legume in Latin America and Africa. Its seeds are of nutritional interest due to their taste and beneficial nutritional profile; in some contexts they provide up to one third of daily protein intake (Beebe, 2012). Living populations of common bean's wild ancestors have been found in an extensive tropical and subtropical area ranging from northern Mexico (approx. 30°N) to northwestern Argentina (approx. 35°S) (Gepts, 1998), generally in sub-humid forest clearings characterized by well-drained soils and bimodal rainfall patterns with a short dry period between the two wet seasons. Evolving in these conditions, the common bean's wild ancestor was rarely exposed to extreme soil constraints, high temperatures, or long droughts, and thus its modern descendant is sensitive to such constraints (Toro et al., 1990; Gaut, 2014). Wild common bean is organized in two geographically isolated and genetically differentiated wild genepools (Mesoamerican and Andean) that diverged from a common ancestral form ~165,000 years ago; from these wild genepools, common bean was domesticated in Mexico and South America approximately 8,000 years ago (Schmutz et al., 2014).
Reproductive development of domesticated common bean is especially susceptible to high-temperature stress, with day and night temperatures greater than 30°C and 20°C, respectively, resulting in significant yield reduction (Porch & Jahn, 2001). Published studies suggest that heat sensitivity is caused by damage to male structures during reproductive development, compromising pollen grain viability and altering normal pollen tube development within the style, which impacts all yield components (Rainey & Griffiths, 2005). Additionally, it has been reported that heat stress disrupts translocation between sources and sinks, thus limiting flowering and seed filling (Soltani et al., 2020). Common bean is therefore vulnerable to expected future climate scenarios, especially increases in average ambient temperature beyond the range of bean adaptation (Beebe et al., 2011; Hummel et al., 2018).
The tepary bean (Phaseolus acutifolius) belongs to the tertiary common bean genepool. It evolved in the hot, arid Mexican and southwestern US deserts (Freytag & Debouck, 2002) and exhibits multiple traits associated with drought and heat resistance, making it a promising source of useful genes for common bean improvement. These include i) tolerance to temperatures greater than 32°C, ii) stomatal control, iii) dehydration avoidance, iv) excellent photo-assimilate mobilization to seeds, and v) a fine root system that allows it to rapidly penetrate the soil (Beebe, 2012). There is considerable genetic distance between tepary and common bean; thus, interspecific offspring are usually not viable and hybrid embryos abort within the mother plant's pods. In vitro embryo rescue can increase the survival rate of hybrid F1 plants, which are self-incompatible, so multiple backcrosses with common bean are needed to restore fertility (Honma, 1956; Garvin et al., 1997). A drawback of recurrent backcrossing with common bean is the rapid dilution of tepary introgressions. An alternative method named congruity backcrossing, which alternates P. vulgaris and P. acutifolius as the backcross parent, can boost recombination between the species (Mejía et al., 1994). Segregation distortion in interspecific populations is characterized by deviating Mendelian ratios for certain markers, displaying a homozygote deficiency for the tepary allele and thus indicating limited recombination between the species (Garvin et al., 1997).
Multiple interspecific populations combining tepary and common bean have been developed using in vitro embryo rescue, possessing useful variance in common bacterial blight resistance (Xanthomonas campestris), bruchid resistance, and cold and drought tolerance (Honma, 1956; Kusolwa & Myers, 2008; Martinez, 2010; Souter et al., 2017; Suárez et al., 2020). These populations shared a small population size in early generations, probably due to the low success rate of in vitro embryo rescue, which constrained the establishment of the genetic mapping populations indispensable for studying the genetic basis of complex traits such as heat resistance. However, repeated intercrossing among P. acutifolius, P. parvifolius, and P. vulgaris at CIAT has established VAP lines that permit hybridizing common and tepary beans without embryo rescue techniques (Barrera et al., 2022). This bridging-genotype approach enables establishing sufficiently large genetic mapping populations. The research objectives reported here are: (1) developing an interspecific population combining wild tepary bean and Mesoamerican common bean by using the bridging genotype VAP1, (2) characterizing the interspecific population in high temperatures using heated greenhouse environments, (3) assessing the introgression levels in the population, and (4) evaluating the association between introgression fragments and the population's phenotypic responses under controlled and high-temperature conditions.
Plant material
We developed a unique Interspecific Mesoamerican x Wild Tepary (IMAWT) population with the following crossing scheme: (((P. acutifolius x VAP1) x P. vulgaris) x P. vulgaris). The bridging line VAP1 (with parentage from P. vulgaris, P. acutifolius, and P. parvifolius) allowed us to create interspecific crosses without embryo rescue (Barrera et al., 2022). We obtained F1 plants by crossing VAP1 with two wild accessions of P. acutifolius (G40056 or G40287), followed by two crosses to P. vulgaris (Figure 1). The common bean parental lines correspond to five elite breeding lines from the Mesoamerican genepool (SMR155, SEF10, SMC214, ICTA Ligero, and SEN118). They are drought tolerant and represent different commercial grain classes. The line SEF10 also has tepary crosses in its pedigree. We obtained 50 F1:2 genotypes from 14 different combinations of the above-mentioned parental lines. The accessions' seed stocks were advanced to generation F5 via single-seed descent, selecting random plants in optimal field conditions to avoid biased selection of particular plant ideotypes. The F5 population comprised 892 lines. In 2019 we performed a non-replicated screening trial in two greenhouses with propane heaters that maintained the temperature above 24°C for the entire cultivation cycle (Supplementary Figure 1). Each plot contained four sister plants planted in soil, with 5 cm between plants and 40 cm between plots, for a total of 408 experimental units (EUs) per greenhouse. We visually estimated the number of pods per plot in two greenhouses at the International Center for Tropical Agriculture (CIAT), Palmira, Valle del Cauca (03°30'20.39" N, 76°20'28.13" W, 973.22 m.a.s.l.). We selected 302 F5 population representatives: two lines with the best pod load were selected from the F2:3 groups and one line with the worst from the F1:2 groups, to guarantee a wider range of phenotypic variation. We increased those 302 F5 introgression lines (ILs) by bulking to create the F5:6 IMAWT population (Supplementary Table 1).
Heat tolerance trial
The IMAWT population was evaluated at CIAT, Palmira in three environments: two greenhouses with controlled temperature (GH1 and GH2), jointly named heat stress (HS) environments, and one open field trial without heat stress (non-stress environment, NS). In the greenhouses, propane heaters maintained a minimum air temperature of 25°C during the growing cycle. In both HS and NS environments, drip irrigation was used to maintain soil water content at field capacity as necessary, with identical irrigation schemes for GH1 and GH2. Fertilizer and pesticide applications did not differ between environments; in general, all standard operating procedures were similar across environments. The main climatic parameters that differed between HS and NS environments were air temperature and photosynthetically active radiation (PAR). During the entire cultivation period (from sowing until final harvest), the minimum, maximum, and average air temperatures were consistently higher in HS environments than in NS (Supplementary Figure 2A). Average day/night air temperatures were 32/25°C and 33/25°C in GH1 and GH2, respectively, and 30/22°C in NS. The minimum and maximum temperatures were consistently similar within HS and followed the climate dynamics observable in NS; the temperature in both greenhouses never dropped below 25°C. Similarly, average relative air humidity was higher in HS environments (with some short exceptions; 81.7% and 80% for GH1 and GH2, respectively) than in the NS environment (75%; Supplementary Figure 2B). PAR accumulated per day was higher in NS (48 mol/m²/day) than in HS environments (25.2 and 25.4 mol/m²/day for GH1 and GH2, respectively) over the entire crop cycle (Supplementary Figure 2C).
Figure 1. Crossing scheme used for IMAWT population development: first, the bridging genotype (VAP1) and a wild tepary accession were intercrossed, using the former as female; the F1 was then crossed twice with a common bean parental line to produce the secondary F1:2 generation.
Our partially replicated experimental design used 30% of the population with a second replicate in each environment (Moehring et al., 2014). Each EU contained four plants, establishing 408 EUs in each environment. Plants were harvested when they reached senescence, at around 95 days after sowing (DAS) in HS environments and 75 DAS in NS. The number of harvested plants per plot was recorded, and plants were dried at 70°C until constant moisture content. The number of pods per plant (PP), number of seeds per plant (SP), weight of 100 seeds (SW), dry weight of leafless stems per plot (StWP), number of empty pods per plant (EPP), yield per plant (YdPl), average number of seeds per pod (NSP = SP/PP), harvest index (HI = seed weight per plant / (pod weight per plant + StWP)), and pod harvest index (PHI) were recorded or calculated following the methodology proposed by Assefa et al. (2017).
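As a compact restatement of these definitions, a minimal Python sketch follows (variable names ours; PHI is omitted because its formula is not spelled out in the text):

```python
def derived_traits(seeds_per_plant, pods_per_plant,
                   seed_wt_per_plant, pod_wt_per_plant, stem_wt):
    """Derived yield-component traits as defined in the text
    (following Assefa et al., 2017). Variable names are ours."""
    nsp = seeds_per_plant / pods_per_plant                  # seeds per pod (NSP)
    hi = seed_wt_per_plant / (pod_wt_per_plant + stem_wt)   # harvest index (HI)
    return nsp, hi

# Hypothetical per-plot values, for illustration only:
print(derived_traits(21.4, 6.81, 5.0, 4.0, 1.76))
```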
We fitted a mixed linear model for each environment, including the rows and columns as random effects. The model was fitted with the statistical software Mr. Bean, which is based on the 'SpATS' R package (Rodríguez et al., 2018; Aparicio et al., 2019). Genotype was included as a random factor to calculate the Best Linear Unbiased Predictors (BLUPs) and broad-sense Cullis heritability (Oakey et al., 2006).
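The published analysis was done in R with SpATS via Mr. Bean. Purely to illustrate the idea of fitting genotype as a random effect and reading off BLUPs, a minimal Python sketch with statsmodels and made-up data (and ignoring the spatial row/column terms) might look like:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Made-up toy data: one row per experimental unit.
df = pd.DataFrame({
    "yield_g":  [6.2, 7.1, 6.8, 5.8, 5.5, 6.0, 8.0, 7.6, 7.9],
    "genotype": ["L1", "L1", "L1", "L2", "L2", "L2", "L3", "L3", "L3"],
})

# Random intercept per genotype; the predicted random effects are the BLUPs.
model = smf.mixedlm("yield_g ~ 1", df, groups=df["genotype"])
result = model.fit()
blups = {g: float(re.iloc[0]) for g, re in result.random_effects.items()}
print(blups)  # genotype deviations from the overall mean
```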
Library construction and sequencing
For DNA extraction, we sampled two 5 cm-long axillary buds per plant per line at 25 DAS under field conditions and extracted DNA with the standard CTAB (cetyltrimethylammonium bromide) method (Doyle & Doyle, 1987). DNA from each sample was quantified by electrophoresis using lambda DNA dilutions as concentration standards and diluted to 100 ng/µl using ultrapure water as solvent. For DNA sequencing, we used genotyping-by-sequencing (GBS) with the ApeKI restriction enzyme (Elshire et al., 2011). Four libraries were constructed and sequenced on the Illumina HiSeq X platform with 150 bp paired-end reads at the Macrogen facility in Seoul, South Korea (https://dna.macrogen.com/).
Variant calling and population structure
GBS reads were de-multiplexed with Stacks (v 2.52) using the process_radtags module for paired reads (Rochette et al., 2019). Trimmomatic (v 0.36) was used to remove adapters and low-quality bases (Bolger et al., 2014). Processed reads were mapped to the P. vulgaris G19833 v 2.1 reference genome (Schmutz et al., 2014) using Bowtie2 with default parameters (Langmead & Salzberg, 2012). The first variant calling, per individual, was performed with NGSEP v 4.4.1 (Tello et al., 2019), considering a minimum average quality per read > 40 and a minimum depth of 3 reads. Subsequently, Bcftools (v 1.9) was used to perform the second variant calling for the population (Li, 2011). The filters removed variants in repetitive zones of the reference genome (Lobaton et al., 2018), keeping only bi-allelic SNPs with heterozygosity per marker ≤ 10% and minor allele frequency (MAF) ≥ 2.5%. The genotype matrix was limited to 20% missing data by removing variants with fewer than 208 genotyped individuals. For the population structure analysis, we included the WGS samples VAP1 and P. parvifolius G40264 from Barrera et al. (2022).
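A minimal sketch of these marker filters, applied to a numpy genotype matrix, is given below. The 0/1/2/-1 genotype coding and the function name are ours, for illustration; this is not the actual Bcftools invocation used in the pipeline.

```python
import numpy as np

def filter_snps(gt, max_het=0.10, min_maf=0.025, min_called=208):
    """Keep variants from a (variants x samples) genotype matrix coded as
    0/1/2 = ref-hom/het/alt-hom and -1 = missing, mirroring the thresholds
    described in the text: het <= 10%, MAF >= 2.5%, >= 208 genotyped."""
    called = gt >= 0
    n_called = called.sum(axis=1)
    n_het = (gt == 1).sum(axis=1)
    n_alt_hom = (gt == 2).sum(axis=1)
    safe = np.maximum(n_called, 1)                 # guard against /0
    het = np.where(n_called > 0, n_het / safe, 1.0)
    alt_freq = np.where(n_called > 0, (n_het + 2 * n_alt_hom) / (2 * safe), 0.0)
    maf = np.minimum(alt_freq, 1 - alt_freq)
    return (n_called >= min_called) & (het <= max_het) & (maf >= min_maf)
```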
Introgression analysis
For the introgression analysis, we selected the SNPs where the common bean parental lines (ICTA Ligero, SEN118, SEF10, SMR155, and SMC214) presented no missing data and were monomorphic. From those, we selected the variants where the wild tepary parental lines (G40056 and G40287) and G40264 (jointly named Acutifolii samples hereafter) presented no missing data, were homozygous, and did not share any allele with the initially monomorphic common bean parental variants (Supplementary Figure 3). This set of contrasting SNPs was recoded into three states: A (common bean origin), B (Acutifolii origin), or H (heterozygous), and the crossing-over points between the two backgrounds were detected with SNPBinner using an emission probability of 0.99 (representing the predicted region's genotypic homogeneity) and a minimum introgression size of 0.1% of the chromosome size (Gonda et al., 2019).
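The recoding step that precedes SNPBinner can be sketched as follows. This is a simplified illustration with our own function and coding; here anything that is neither parental homozygote (including missing calls) is lumped into 'H', a simplification of the real pipeline.

```python
def recode_states(sample_gt, vulgaris_allele, acutifolii_allele):
    """Recode one sample's genotypes at contrasting SNPs into the three
    states used for introgression detection: 'A' (common bean),
    'B' (Acutifolii), 'H' (heterozygous/other). Genotypes are allele pairs."""
    states = []
    for (a1, a2), va, ta in zip(sample_gt, vulgaris_allele, acutifolii_allele):
        alleles = {a1, a2}
        if alleles == {va}:
            states.append("A")      # homozygous common bean background
        elif alleles == {ta}:
            states.append("B")      # homozygous Acutifolii introgression
        else:
            states.append("H")
    return states

# recode_states([("C","C"), ("C","T"), ("T","T")],
#               ["C","C","C"], ["T","T","T"])  -> ['A', 'H', 'B']
```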
Genome-wide association analysis
Genome-wide association studies (GWAS) were performed with the method named Fixed and random model Circulating Probability Unification, or FarmCPU (X. Liu et al., 2016), implemented in the R statistical package GAPIT v 3.0 (Lipka et al., 2012). We removed the wild tepary parental lines from the analysis because their contrasting genotypic and phenotypic differences relative to the population could produce spurious associations. To determine the significance threshold, we used the Bonferroni threshold with an α level of 5%. Manhattan plots and QQ-plots were plotted with a custom Python script.
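The threshold itself is a one-line computation; with the 24,205 SNPs reported below, the Bonferroni cutoff works out to the -log10(P) > 5.7 value used in the Results (a minimal Python sketch):

```python
import math

def bonferroni_neglog10(n_tests, alpha=0.05):
    """-log10 of the Bonferroni-corrected per-test p-value threshold."""
    return -math.log10(alpha / n_tests)

print(round(bonferroni_neglog10(24205), 2))  # -> 5.68, i.e. the ~5.7 cutoff
```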
Population development
We developed the IMAWT population by intercrossing two wild accessions of tepary bean (G40056 and G40287) with five Mesoamerican elite breeding lines of common bean (SEN118, ICTA Ligero, SMR155, SMC214, and SEF10), obtaining 302 F5:6 introgression lines from 14 parental combinations. The IMAWT population exhibited morphological characteristics normally associated with wild tepary beans, such as lanceolate trifoliate leaves and angular seeds (Supplementary Figure 4). A total of 24,205 bi-allelic SNPs were called for 244 samples. An introgression analysis was performed to identify the segments introgressed from the sister species. Additionally, we performed a GWAS analysis to identify heat-tolerance-related quantitative trait nucleotides (QTNs).
Heat stress trial
Traits evaluated at harvest differed significantly between NS and HS environments, except for PHI and SP in GH2 (Figures 2C, E). EPP and StWP increased in HS environments relative to NS. Average StWP in NS was 1.76 g/plant, whereas in GH1 and GH2 we observed 4.63 and 3.44 g/plant, respectively. Similarly, overall EPP in NS was 0.64 empty pods/plant, but for GH1 and GH2 we observed 1.84 and 1.55 empty pods/plant, respectively (Figures 2A, G).
For the YdPl, NSP, and HI traits, the average response was significantly higher in NS than in HS environments. Average YdPl in NS was 7.16 g/plant but 6.66 and 6.79 g/plant in GH1 and GH2, reductions of 7% and 5% relative to NS, respectively (Figure 2H). For NSP, the average in NS was 3.27 seeds/pod, whereas we observed 2.82 and 2.98 seeds/pod in GH1 and GH2, reductions of 14% and 9%, respectively (Figure 2B). Regarding HI, the average in NS was 85.5%, whereas GH1 and GH2 reached 78.6% and 81.4%, reductions of 7% and 5%, respectively (Figure 2I).
For PP, SP, and SW, the averages in HS environments were significantly higher than in NS. Average PP in NS was 6.81 pods/plant, whereas GH1 and GH2 showed 9.15 and 7.47 pods/plant, significant increases of 34% and 9.7%, respectively (Figure 2D). For SP, the average in NS was 21.4 seeds/plant, whereas GH1 and GH2 showed 26.6 and 22.2 seeds/plant, significant increases of 24% and 3.7% relative to NS (Figure 2E). For SW, the average in NS was 20.8 g/100 seeds, whereas GH1 and GH2 showed 21.8 and 22.2 g/100 seeds, significant increases of 4.6% and 6.8%, respectively (Figure 2F). We observed positive and significant trait correlations between the HS (GH1 and GH2) environments. All GH1/GH2 correlations were higher than those between HS and NS, with the exception of SP (Supplementary Figure 5). Interestingly, EPP in NS did not correlate with either HS treatment. GH2 showed lower, though still significant, correlations to NS for traits such as PP, SP, and YdPl. The broad-sense heritabilities (H²) were greater than 50% for all traits except EPP and PP in NS and StWP in GH1. Importantly, except for StWP and SW, HS environments presented consistently higher H² values than NS, and GH2 (except for NSP, SP, and HI) presented higher H² values than GH1 (Supplementary Table 2).
Pearson correlation coefficients were calculated for each trait in each environment. Traits correlated in a similar fashion independently of the environment; however, in HS environments the magnitude of the correlations was often lower than in NS (Figure 3). Correlations between YdPl and its components (PP, SP, SW, and NSP) were positive and significant in NS and HS, although in HS environments the correlation between SW and YdPl was lower than in NS while still significant (0.37*** and 0.35*** for GH1 and GH2, respectively, versus 0.59*** for NS). Similarly, SW correlated differentially with SP and PP (Figure 3). The correlation between StWP and YdPl was 0.42*** in NS but 0.25*** and 0.14* in GH1 and GH2, respectively (Figure 3).
Wild tepary parental lines generally performed much better than common bean parental lines in the HS environments. Except for G40056 in GH2, where YdPl did not differ from SEN118 (Supplementary Table 3), we observed significant differences between tepary and common bean parental lines in HS environments, supporting the idea of heat resistance traits in the tepary accessions. The converse was visible when comparing yields in NS, where both tepary accessions (as expected) had very low yields. Importantly, some interspecific lines outperformed the common bean parental lines in YdPl in HS environments. In GH1 we observed 107 interspecific lines with a higher YdPl than SMC214 (7.04 g/plant), the common bean parental line with the highest YdPl in GH1. In GH2, SEN118 was the best common bean parent, with a YdPl of 9.46 g/plant; 19 interspecific lines outperformed SEN118, and 125 interspecific lines were superior to SEF10, the second-best common bean parent. Among the 107 lines from GH1 and 125 lines from GH2, 66 lines were found in both outperforming groups.
Genotyping and population structure
The IMAWT population was genotyped by GBS; after filtering low-quality, highly heterozygous, and rare variants, we obtained a total of 24,205 bi-allelic SNPs for 236 interspecific lines and the eight parental lines mentioned previously. The number of SNPs per chromosome ranged between 877 and 6,720 (chromosomes 09 and 03, respectively). The SNP distribution within each chromosome was heterogeneous, with higher marker density in the telomeric regions (Supplementary Figure 6).
Population structure was assessed through principal component analysis (PCA). The first three principal components (PCs) explained 29.7% of the total variance (Supplementary Figure 7). The first two PCs differentiated the wild relatives' samples (G40056, G40287, and G40264) from the common bean primary genepool; between these two groups lay a group of interspecific lines resembling both (Figures 4A, B). PC3 allowed differentiation within the P. vulgaris primary genepool, revealing contrasting genetic differences between ICTA Ligero and SMR155 (Figure 4C).
Introgression analysis
The IMAWT population presented interspecific introgressions from the wild tepary lines (G40056 and G40287) and from P. parvifolius (G40264) via the bridging genotype VAP1. A subset of 7,915 contrasting bi-allelic SNPs was selected to detect the introgressions (Supplementary Figure 3). These contrasting variants provide evidence of the location and extent of the introgressions present in the population. For each variant, we counted the number of samples carrying at least one Acutifolii allele (Figure 5B). The selected SNPs were unevenly distributed along the genome: chromosomes 02, 09, and 11 showed low densities (17, 123, and 78 SNPs/chromosome, respectively), whereas chromosomes 01, 03, and 08 presented higher densities (1,219, 2,807, and 1,106 SNPs/chromosome, respectively; Supplementary Table 4). The crossing-over points between Acutifolii and P. vulgaris backgrounds were detected in each sample, resulting in 465 homozygous introgression events in 203 interspecific lines. Additionally, a total of 309 regions, resulting from high heterozygosity or background changes below the minimum introgression size, were labeled as undefined (Supplementary Figure 8). For the families carrying at least one introgression event, the introgression percentage per sample (IP = total introgression length / genome size) varied between 0.03% and 17.13%. Half of those families presented an IP below 1.8%, and GCDT 237 showed the highest IP (Supplementary Figure 9). Chromosomes 01, 03, 05, 06, 08, and 10 were covered almost completely by introgressions (86.7%-100% coverage). Chromosomes 04, 07, and 09 presented low introgression coverage (25%-26.5%), and no introgression events were detected on chromosome 02 (Figure 5). With the IMAWT population it was possible to cover 59.8% of the common bean genome with wild introgressions (Supplementary Table 4).
For the IMAWT population we detected introgressions in 54 and 46 samples on chromosomes 06 and 07 (20-29 Mb and 36-39 Mb, respectively); importantly, VAP1 presented introgression events in the same regions, indicating their putative origin (Figures 5D, E). To confirm whether the population carries wild tepary-specific alleles, we selected a subset of 1,860 variants at which the wild tepary accessions did not share any allele with the P. vulgaris parental lines, G40264, or VAP1 (Supplementary Figure 3). We counted the number of introgression lines carrying at least one wild tepary allele and found such alleles scattered throughout the genome, except on chromosome 02 (Figure 5C).
We selected the 203 interspecific lines in which the introgression analysis detected at least one introgression event and correlated the introgression percentage (IP) with the measured quantitative traits in each environment. In the NS environment the correlations were stronger (more negative and significant) for NSP, PHI, and HI than in the HS environments, where they were not significant for HI. In contrast, the correlations of IP with PP, SP, SW, and StWP were weaker in NS than in both HS environments. For YdPl, the correlations were similar across all environments (Figure 6).
Genome-wide association analysis
Genetic association analyses were performed using the FarmCPU method, and significant associations, or quantitative trait nucleotides (QTNs), were selected according to the Bonferroni threshold (-log10(P-value) > 5.7). In total, 27 QTNs were detected for five traits (EPP, YdPl, StWP, NSP, and SW): 24 QTNs in HS environments (14 in GH1 and 10 in GH2) and 3 in NS (Supplementary Figure 10).
EPP showed the highest number of significant associations, with 9 QTNs (5 and 4 for GH1 and GH2, respectively; Supplementary Figures 10, 11). EPP1.1 and EPP1.2 were detected independently in GH1 and GH2, respectively, separated by 29 bp. The minor allele of both QTNs was inherited from the wild tepary parental lines, presenting positive allelic effects of 0.67 and 1.02 empty pods/plant and explaining 10.4% and 14.5% of the phenotypic variance, respectively (Table 1). All the QTNs presented a positive allelic effect except EPP6.4 (-1.76 empty pods/plant), whose minor allele was inherited from ICTA Ligero. We detected 8 QTNs for YdPl (2 in NS, 4 in GH1, and 2 in GH2), with the most significant association observed for QTN YdPl8.6 (-log10(P-value) = 9.94). For this QTN, together with YdPl3.2 and YdPl8.5 (which was detected independently in both HS environments), the minor allele was inherited from the wild tepary parental lines, and the estimated allelic effects ranged between 2.07 and 2.25 g/plant. YdPl6.3 and YdPl10.7 presented negative allelic effects, reducing YdPl by -0.65 and -1.07 g/plant (Table 1). Notably, QTNs near YdPl8.6 appeared below the significance threshold but were still highly associated with StWP (at 628 kb distance in GH1 and at the same position in GH2; Supplementary Figures 10, 12).
For StWP, 7 QTNs were detected (3 and 4 in GH1 and GH2, respectively). The most significant association was observed for QTN StWP3.4; together with StWP1.1 and StWP8.6, these are the QTNs whose minor allele was inherited from the tepary parental lines, with positive estimated allelic effects ranging from 0.57 to 0.86 g/plant (Table 1; Supplementary Figures 10, 13). StWP3.2 and StWP3.3 were monomorphic within the parental lines. The origin of these putative alleles could be traced back to a duplicate of VAP1, indicating heterozygosity of the bridging genotype when the crosses were performed (Supplementary Figures 14, 15). Two QTNs, NSP3.1 and NSP4.2, were detected in GH1, with estimated allelic effects of 0.32 and 2.1 seeds/pod, respectively (Supplementary Figure 17). The minor allele of NSP4.2 was present in both tepary parental lines and in ICTA Ligero (Table 1). SW2.1 and SW7.2 were detected in the NS environment, with negative estimated allelic effects ranging from -0.79 to -2.20 g/100 seeds (Table 1; Supplementary Figures 10, 17). For SW2.1, the minor allele could be inherited from VAP1 or possibly from the wild tepary parental lines; for SW7.2, the minor allele was inherited from G40287 (Table 1).
Discussion
Climate change threatens current and future food security, including in regions where common bean is a staple crop. Predictions for the common bean growing areas of southeastern Africa state that scenarios of elevated temperature (heat) and reduced rainfall (drought) will become more frequent, rendering these areas unsuitable for bean cultivation by the year 2050 (Hummel et al., 2018). Crop wild relatives and landraces historically adapted to arid or semi-arid conditions are promising sources of useful variation (Cortés & López, 2021). Tepary bean (P. acutifolius) is a wild relative of common bean adapted to desert and semi-arid environments (Freeman, 1912). The use of tepary bean for common bean breeding was first reported by Honma (1956), who was looking for resistance to common bacterial blight (CBB; caused by Xanthomonas campestris) and developed an interspecific population using embryo rescue and recurrent backcrossing with common bean (Honma, 1956). More recently, interspecific populations between tepary and common bean with useful variation for cold, drought, and bruchid resistance have been reported (Martinez, 2010; Souter et al., 2017).
The population structure of IMAWT locates the wild tepary bean accessions close to G40264 (P. parvifolius) and more distant from the common bean parental lines. This is in accordance with the current taxonomy of the genus Phaseolus, which places these two crop wild relatives in the section Acutifolii, distinct from the section Phaseoli to which P. vulgaris belongs (Freytag & Debouck, 2002). We located a group of ILs resembling both genetic backgrounds between these two clusters, indicating the presence of introgression fragments from the Acutifolii wild relatives. To detect the introgressions, we selected a subset of 7,915 SNPs that contrasted between the common bean parental lines and the Acutifolii. This approach has previously been implemented for interspecific biparental populations in maize (Zea mays), melon (Cucumis melo), and tomato (Solanum lycopersicum L.) (Gonda et al., 2019; Kolkman et al., 2020; Oren et al., 2020).
We detected 465 homozygous introgression events carried by 203 ILs, jointly covering 59.8% of the common bean reference genome with at least one introgression. We observed a lower frequency of introgressions in centromeric regions, probably due to their naturally low recombination rate (Schmutz et al., 2014). The lack of introgressions on chromosomes 02 and 09 is in line with the reproductive isolation QTLs reported by Soltani et al. (2020). In a BCF1 biparental population derived from ICA Pijao x Frijol Bayo (domesticated tepary bean), they observed an absence of recombination in the first 33 Mb of chromosome 02 and the first 22 Mb of chromosome 09, concluding that tepary bean carries chromosomal rearrangements in those regions that presumably cause hybrid sterility (Soltani et al., 2020). Notably, ICA Pijao is a genotype widely recognized for its reproductive compatibility with tepary bean, producing vigorous hybrid plants after embryo rescue (Parker & Michaels, 1986). This genotype is present in the interspecific line INB834, used twice in the VAP1 pedigree, suggesting that not only G40264 (P. parvifolius) but also ICA Pijao contributes to reproductive compatibility.
We observed that the IMAWT population presented a higher introgression detection rate in the regions predicted as introgressions in the VAP1 samples, indicating that this genotype is also a source of introgressions. The distal arms of chromosomes 06 and 07 presented a high frequency of introgressions in the population, but no wild tepary exclusive variants were located there, suggesting that those introgressions were provided by G40264 through VAP1. In contrast, at the distal end of chromosome 08, VAP1 presented an introgression in which multiple exclusive wild tepary variants were located, suggesting that VAP1 also carries introgressions from the tepary bean. The origin of this introgression could be traced back to INB841, a VAP1 parent in which a tepary introgression has been reported at the same location (Lobaton et al., 2018). There were also other regions where a VAP1 introgression was detected and multiple ILs were observed carrying alleles of wild tepary bean exclusive variants, indicating that recombination had occurred between the wild tepary accessions and VAP1, as on chromosomes 01, 05, and 10 and the distal end of chromosome 03.
Physiological response of IMAWT population
Our results indicate that under heat stress conditions the population displayed reductions in YdPl and NSP of 5-6% and 9-14%, respectively. Similarly, in HS versus NS environments we observed an increase in StWP and EPP, reflected in prolific branching and the production of empty and malformed pods. Similar observations of intense vegetative growth in common bean under high night temperatures of 27°C have been reported, indicating that yield was constrained by increased abscission of flower buds, flowers, and young pods rather than by a reduction in flower production itself (Konsens et al., 1991).
The yield reductions estimated in this study are lower than previous reports. Under field conditions, Vargas et al. (2021) evaluated an Andean population in multiple locations in Colombia, comparing hot (34/23°C; average day/night temperature) versus control (30/19°C) environments and observing average yield reductions between 26% and 37% (Vargas et al., 2021). A study under greenhouse conditions reported even more severe heat stress effects, with yield reductions between 77% and 98% for NS (27/21.1-22.9°C) versus HS (29/22.9°C) (Porch, 2006). The differences in yield reductions could be attributed to multiple environmental factors, including the reduction of incident sunlight, which in this case was 47% lower in the HS than in the NS environment, and of course the temperature itself.
We observed a negative correlation between IP and yield components in all environments, indicating a negative effect of the introgressions on plant performance. In NS, the correlations were higher in magnitude than in HS environments, particularly for PHI, HI, NSP, and YdPl. The negative effects observed in NS could arise from conflicts between the two distant genomes, but also from tepary's natural adaptation to hot, dry environments (Bitocchi et al., 2017): specialized genes for heat or drought resistance could be inherited, yet the same genes could carry a trade-off in non-heat or humid contexts. In each greenhouse we observed interspecific lines that outperformed the best common bean parental lines in yield, indicating the existence of useful variance for heat resistance.
GWAS analysis
GWAS analyses in common bean using Andean and Mesoamerican germplasm collections have offered a broad perspective on common bean diversity and the genetic basis of multiple traits such as growth habit, seed size, and drought tolerance (Hoyos-Villegas et al., 2017; Diaz et al., 2020; Valdisser et al., 2020). We used the FarmCPU method to perform the GWAS analyses. This method uses a fixed effect model (FEM) to test associations one marker at a time, using a set of multiple associated markers (also named pseudo-QTNs) as covariates to control false positives. To avoid overfitting, the pseudo-QTNs are optimized by a random effect model (REM) based on the test statistics (P-values) and positions, using the SUPER algorithm, a bin-based algorithm for selecting multiple associated QTNs across the whole genome (X. Wang et al., 2014). The wild tepary parental lines presented wide genetic and phenotypic differences relative to the remaining parental lines and the population itself; despite FarmCPU's ability to control false positives, we removed the tepary parental lines from the GWAS analyses because of an inflated trend in the P-value QQ-plots when they were included (data not shown).
Testing associations one marker at a time creates a multiple-testing problem that causes spurious associations. To overcome this, we used the Bonferroni threshold to differentiate true positives from false positives; this threshold is considered the most conservative method and depends on genotype density and the desired significance level (Kaler & Purcell, 2019). Other studies using GBS with the restriction enzyme ApeKI reported genotype densities between 20,000 and 30,000 SNPs (Diaz et al., 2020; García-Fernández et al., 2021). Other GBS protocols using different restriction enzymes, or a double digest with MseI and TaqαI, could yield 3.8- to 12.5-fold more SNPs than ApeKI (Schröder et al., 2016).
High rates of missing data are a major threat to GWAS statistical power, i.e., the ability to identify true positives (Korte & Ashley, 2013). Missing SNP calls are common in GBS datasets due to presence/absence variation of cut sites, differential methylation, low library quality, and low sequence coverage (Poland & Rife, 2012; Mohammadi et al., 2019). Commonly this is resolved by imputing the missing data, but imputation error rates increase as the MAF decreases (Marchini & Howie, 2010). We selected a low MAF threshold (2.5%) to avoid removing introgression sites, because we expected low allele frequencies given the crossing scheme, in which two crossing cycles with common bean diluted the wild tepary contribution in the offspring.
With the GWAS analyses we reported 27 QTNs. For nine of these QTNs, the less frequent allele could be traced back to the tepary parental lines, indicating that they were located inside tepary introgression segments. Three presented detrimental effects, increasing the number of empty pods (EPP1.1 and EPP1.2) or decreasing seed weight (SW7.2), while the others had favorable effects, increasing yield (YdPl3.2, YdPl8.5, and YdPl8.6) and stem production (StWP1.1, StWP3.4, and StWP8.6).
Two QTNs for EPP inside tepary introgressions were detected in the proximal arm of chromosome 01 (EPP1.1 and EPP1.2). A QTL named PdShr1.1, located between positions 1.4 and 1.65 Mb and covering these QTNs, has been reported for shriveled seeds in common bean under high temperatures (Vargas et al., 2021). EPP1.1 and EPP1.2 are located in the gene model Phvul.001G020000, annotated as a GPI inositol-deacylase or PGAP1-like protein, which is related to endoplasmic reticulum export of proteins to the cell surface (Mañuel & Howard, 2016). For QTN EPP7.7, all the parental lines except VAP1 were genotyped, and the alternative allele was observed in the homozygous state in the tepary lines. This QTN is located in the gene model Phvul.007G254000, which contains an S-locus glycoprotein domain (PF00954) involved in the pollen recognition system that prevents self-fertilization in the Brassicaceae family (Hinata et al., 1995).
The seed weight of the wild P. acutifolius parental lines is significantly lower than that of the domesticated common bean parental lines. Here we report SW7.2, a tepary introgression QTN that significantly reduces seed size in the NS environment. SW7.2 lies in the gene model Phvul.007G031800, annotated as an alpha-trehalose-6-phosphate synthase (TPS), which catalyzes the synthesis of alpha-trehalose-6-phosphate (T6P), a signaling molecule that significantly affects the regulation of carbon allocation and utilization in plants (Paul et al., 2018). Valdisser et al. (2020) reported a common bean QTN for seed weight, detected under both drought and irrigated conditions, 1 Mb away from SW7.2.
Three QTNs with positive effects on YdPl inside tepary introgressions were reported in this study. YdPl3.2 was detected in the NS environment, inside the gene model Phvul.003G112400, annotated as ATP synthase gamma-related. In a QTL analysis of seed fatty acids in soybean (Glycine max), two QTLs for palmitic and oleic acid content were reported, and within a 1 Mb window centered on those QTLs lies the gene Glyma.17G032400, an ortholog of Phvul.003G112400 (Smallwood, 2015). A GWAS of a Chinese common bean diversity panel reported QTNs for days to flowering, growth habit, and plant height at 460 kb downstream and 717 kb upstream of YdPl3.2 (Wu et al., 2020). The QTNs YdPl8.5 and YdPl8.6 were detected in HS environments inside the gene models Phvul.008G042800 and Phvul.008G177800, annotated as polyvinyl-alcohol oxidase and E3 ubiquitin-protein ligase, respectively. Protein ubiquitination is the major eukaryotic proteolytic pathway responsible for the degradation of misfolded proteins (Zhang et al., 2021). Multiple reports state that E3 ubiquitin ligases enhance thermal resistance in plants by regulating calcium-channel activity, transcription of heat shock proteins, and stomatal closure (Z. Bin Liu et al., 2014; J. Liu et al., 2016). It is important to mention that 7 kb upstream of YdPl8.5 lies the gene model Phvul.008G043000, which presents a WRKY DNA-binding domain found in transcription factors involved in various developmental and physiological processes and particularly prominent in modulating responses to biotic and abiotic stress (Jiang et al., 2017; Cheng et al., 2021). The QTN YdPl7.4 was detected with the phenotype data of the GH2 environment at the distal end of chromosome 07. Both tepary parental lines, and also SEN118, were homozygous for the alternative allele. This QTN lies in the gene model Phvul.007G251800, a NADPH-dependent thioredoxin reductase 3 (NTRC), which helps maintain a reduced cellular environment, protects against oxidative stress during stress events, and has been reported to be overexpressed under heat stress in common bean (Soltani et al., 2019).
Three QTNs with positive effects inside tepary introgressions were detected for StWP. StWP3.4 is located inside the gene Phvul.003G282900, which belongs to the RmlC-like cupin superfamily. It is important to mention that this gene was differentially expressed between terminal drought stress and non-stress environments in common bean (Subramani et al., 2023). StWP8.6 is located in the gene model Phvul.008G003600 and is flanked downstream by the gene Phvul.008G003500, a nitrate transporter/protein-peptide transporter (NRT1-PTR), and upstream by the genes Phvul.008G003700 and Phvul.008G003800, two FLOWERING LOCUS T-like (FT-like) proteins. The FT-like genes are identified as major regulatory factors in a wide range of developmental processes, including fruit set, vegetative growth, stomatal control, and tuberization (Pin & Nilsson, 2012). On the other hand, Chiba et al. (2015) report that NRT1 (PTR) transporters play an important role in transporting phytohormones such as auxins, abscisic acid, and gibberellins, even under stress conditions. A QTL named DPM8.1, which covers the latter gene model, is related to days to physiological maturity in common bean (Diaz et al., 2020).
Future perspectives
The detected QTNs have the potential to broaden the genetic base in domesticated common bean genepools and support the strategy of incorporating functional genetic variation from wild crop relatives to increase heat tolerance (Tanksley & McCouch, 1997). Thus, it is necessary to further confirm whether the introgression segments that include those QTNs do in fact host genes involved in adaptation to heat stress. It is important to validate the contribution of single introgression fragments by revealing their effect in homogeneous genetic backgrounds, avoiding possible interactions with other introgression events that diminish yield. An alternative approach would be testing contrasting lines with and without the QTNs under natural conditions. Therefore, the genome regions exclusively related to heat stress should be further filtered and characterized. Applying cutting-edge techniques such as long-read sequencing can improve our understanding of interspecific introgression and improve breeding effectiveness. Long-read sequencing will facilitate whole-genome assemblies of parental and bridge lines, revealing hidden genomic regions masked by the reference genome used. Discovering the exact crossing-over points, with higher resolution to identify smaller introgressed regions that cannot be found with SNP matrices, will help validate wild introgression inputs. Regardless, this genome-wide association study on interspecific common bean populations derived from wild Phaseolus relatives has revealed improved heat resistance as a result of successful genetic introgression between two Phaseolus sister species, demonstrating better performance under heat stress conditions. The allele diversity from wild materials increases the adaptability of domesticated plants via enhanced biotic and abiotic stress resistance, especially in the context of climate change and potential pathogen outbreaks. The results from this study broaden our understanding of genetic crossing using bridging interspecific lines applied to heat-tolerant populations, by recognizing the wild introgression segments using GBS sequencing of introgression lines.
These results represent an important contribution to common bean genetic improvement. In the short term, QTLs for heat tolerance may advance ongoing work on improved bean crop adaptation in lowland environments in Central America, the Caribbean, and East, southern, and West Africa. In the medium to long term, characterizing introgression from the sister species P. acutifolius and P. parvifolius can open new perspectives for managing a range of abiotic and biotic factors that limit bean productivity and production. These sister species evolved in hot, dry environments, which may become more prevalent in future bean production regions and to which adaptation is limited by the genetic diversity available in P. vulgaris. This study's findings will facilitate broadening such introgression.
Funding
Research was funded by Norwegian Agency for Development Cooperation (NORAD) through the Global Crop Diversity Trust under the grant GS19002 entitled "Using Bean populations derived from P. acutifolius to advance toward generation of new bean varieties and discerning the traits and genetic base associated to heat tolerance".
Radar for projectile impact on granular media
From the prevention of natural disasters such as landslides and avalanches to the enhancement of energy efficiency in the chemical and civil engineering industries, understanding the collective dynamics of granular materials is a fundamental question that is closely related to our daily lives. Using a recently developed multi-static radar system operating at 10 GHz (X-band), we explore the possibility of tracking a projectile moving inside a granular medium, focusing on possible sources of uncertainty in the detection and reconstruction processes. On the one hand, particle tracking with continuous-wave radar provides an extremely high temporal resolution. On the other hand, there are still challenges in obtaining tracer trajectories accurately. We show that some of these challenges can be resolved through a correction of the IQ mismatch in the raw signals obtained. Consequently, the tracer trajectories can be obtained with sub-millimeter spatial resolution. Such an advance can shed light not only on radar particle tracking but also on a wide range of scenarios where issues related to IQ mismatch arise.
I. INTRODUCTION
As large agglomerations of macroscopic particles, granular materials are ubiquitous in nature, industry, and our daily lives [1,2]. Despite their importance, a fundamental understanding of granular dynamics from the perspective of transitions from a solid- to a liquid-like state (e.g., when and where does an avalanche start) is still far from complete. One of the main challenges in deciphering the dynamics of granular materials arises from the fact that most granular particles are opaque. In the past decades, there has been substantial progress in imaging granular particles [3]. Optical means for imaging particles in three dimensions (3D), such as refractive index matching [4], are limited to certain combinations of particles and interstitial liquids. X-ray tomography [5] and magnetic resonance imaging [6] are also frequently used to identify the internal structures of granular materials. However, the limited time resolution of scanning techniques, as well as the huge amount of data to be processed, hinders the investigation of granular dynamics, which is better resolved with sufficiently high temporal resolution. Note that the mobility of single particles can dramatically influence the collective behavior, owing to its discrete nature as well as the heterogeneous distribution of force chains inside [7]. Therefore, it is desirable to have a technique capable of tracking granular particles with high temporal resolution.
Since the beginning of the last century, radar technology has been continuously developed, benefiting us in many different ways: from large-scale surveillance radar systems that are crucial for aircraft safety and space exploration [8] to small-scale systems for monitoring insects [9]. Considering the capabilities of radar tracking, it is natural to ask: how small an object can be tracked accurately by a radar system? Can it be as small as a tracer particle with a size comparable to a grain of sand? Recently, we introduced a small-scale continuous-wave (CW) radar system working in the X-band to track a spherical object with a size down to 5 mm [10]. In comparison to other techniques, the continuous trajectory of a tracer particle obtained by the radar system greatly helps in deciphering granular dynamics, owing to the high temporal resolution. Here, the advantages and disadvantages of this technique, particularly how to handle the possible sources of uncertainty, are discussed in detail. Figure 1 shows the experimental set-up that utilizes the radar system to track a projectile falling freely into a granular bed. The multi-static radar system operates at 10 GHz (X-band), with one transmission (Tx.) antenna pointing in the direction of gravity (defined as the −z direction) and three receiving (Rx.) antennae mounted symmetrically around the z axis. Polarized electromagnetic (EM) waves, after being scattered by the tracer particle, are captured by the Rx. antennae.
II. PARTICLE TRACKING SET-UP AND RECONSTRUCTION ALGORITHM
A metallic sphere with a diameter d = 10 mm is used as the tracer. The cylindrical container has an inner diameter of 15 cm and a height of 20 cm. To avoid unnecessary multiple scattering, the sample holder is constructed from foamy materials as much as possible. The granular sample (see Fig. 9 and the relevant text for more details) is filled into the container to a few centimeters below the rim and gently tapped until an initial packing density of 60.2% is achieved. The tracer is initially held by a thin thread wrapped around it and released by gently pulling the thread, such that the initial velocity of the falling sphere is close to 0. This design provides a defined and reproducible initial condition for comparison among various experimental runs. The raw IQ signals from the AD converter (NI DAQPAD-6015) are recorded and further processed with a Matlab program to obtain the reconstructed trajectories.
IQ mixers play an essential role in accurately ranging a target, as they measure the phase shift (i.e., time delay) of the received signal with respect to the emitted one. Suppose the latter can be described as $a\cos(2\pi f_0 t)$ and the former as $b\cos(2\pi f t + \theta)$, where $a$ and $b$ are the magnitudes of the corresponding signals and $f_0$ and $f$ are the transmitted and received signal frequencies. After low-pass filtering, the output signals of an IQ mixer are

$$I = \frac{ab}{2}\cos\big(2\pi(f - f_0)t + \theta\big), \qquad Q = \frac{ab}{2}\sin\big(2\pi(f - f_0)t + \theta\big).$$

Subsequently, the relative movement of the tracer is obtained from the phase shift of $I + Q\mathrm{i}$ in the complex plane, where $I$ (in-phase) and $Q$ (quadrature) correspond to the two outputs of an IQ mixer. With the help of an IQ mixer, the change of the absolute traveling distance for the $i$th antenna, $L_i = l_0 + l_i$, can be obtained, where $l_0$ and $l_i$ are the distance between the Tx. antenna and the target and that from the target to the $i$th Rx. antenna, respectively. If $L_i$ varies by a distance of one wavelength, the vector $I + Q\mathrm{i}$ rotates by $2\pi$. As IQ mixers provide analogue signals representing the mobility of the tracer, the time resolution of the radar system is limited only by the analogue-digital (AD) converter. This is an advantage of using continuous-wave radar systems for particle tracking.
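To make this ranging principle concrete, the following minimal sketch (in Python, rather than the Matlab pipeline used for the actual data) simulates the low-pass-filtered mixer outputs for a given travel path and inverts them back to a distance change; the amplitudes and function names are illustrative assumptions, not values from the hardware described here.

```python
import numpy as np

F0 = 10e9            # X-band carrier frequency (Hz)
LAM = 3e8 / F0       # free-space wavelength, about 3 cm

def iq_output(L, a=1.0, b=1.0):
    """Ideal low-pass-filtered IQ mixer outputs for a total path L(t):
    with theta = 2*pi*L/lambda, I ~ cos(theta) and Q ~ sin(theta)."""
    theta = 2.0 * np.pi * np.asarray(L) / LAM
    return 0.5 * a * b * np.cos(theta), 0.5 * a * b * np.sin(theta)

def path_change(I, Q):
    """Invert the model: one full rotation of I + iQ in the complex
    plane corresponds to a change of L by one wavelength."""
    phi = np.unwrap(np.arctan2(Q, I))
    return (phi - phi[0]) / (2.0 * np.pi) * LAM
```

Only phase differences enter the reconstruction, so any constant offset in $L_i$ drops out.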
Although the distance measurement relies only on the phase information, the sensitivity and accuracy of the system depend on the IQ signal strength. In order to achieve a sufficient signal-to-noise ratio, the directions of the horn antennae (Dorado GH-90-20) are adjusted with the help of a laser alignment and range meter (Umarex GmbH, Laserliner) to face the target area. The three Rx. antennae are arranged symmetrically about the axis starting from the Tx. antenna and pointing along the direction of gravity, so that the received signals have similar signal-to-noise ratios. According to the antenna specification, the main lobe of its radiation pattern has an opening angle of ∼15 degrees. Thus, taking into account the average working distance of the antennae, we estimate that the field of 'view' of the radar system has a volume of about 30 cm × 30 cm × 30 cm. Nevertheless, as will be shown below, the field of view can be extended after a proper correction of the IQ mismatch. The distance between each antenna and the center of the coordinate system is also measured by the laser meter during the adjustment process. The polarization of the antennae is adjusted to maximize the raw I and Q signals. The whole system is covered with microwave absorbers (Eccosorb AN-73) to reduce clutter and unwanted noise from the surroundings.
From the measured distances $\mathbf{L} \equiv (L_1, L_2, L_3)$, the reconstructed trajectory can be obtained with a coordinate transformation

$$\mathbf{r} = \mathsf{M}^{-1}\,(\mathbf{L} - \mathbf{L}_0) + \mathbf{r}_0,$$

where the offset vector $\mathbf{r}_0$ is chosen to be $\mathbf{0}$, as it contributes only a constant offset to the reconstructed trajectory. To first order, the transformation matrix reads

$$\mathsf{M} = \begin{pmatrix} \sin\theta_1\cos\varphi_1 & \sin\theta_1\sin\varphi_1 & 1+\cos\theta_1 \\ \sin\theta_2\cos\varphi_2 & \sin\theta_2\sin\varphi_2 & 1+\cos\theta_2 \\ \sin\theta_3\cos\varphi_3 & \sin\theta_3\sin\varphi_3 & 1+\cos\theta_3 \end{pmatrix},$$

with $\theta_i$ and $\varphi_i$ the tilt and azimuth angles of the $i$th antenna, respectively. The transformation matrix is determined from a calibration process using the same tracer particle moving along a given circular trajectory in the horizontal plane. More detailed descriptions of the calibration and coordinate-transformation processes can be found in [10].
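Under this first-order model, the reconstruction reduces to solving a small linear system at each time step. The sketch below assumes the matrix form given above (itself a reconstruction of the garbled original) and uses illustrative variable names:

```python
import numpy as np

def transformation_matrix(theta, phi):
    """Rows give the first-order sensitivity of L_i to a tracer
    displacement; the 1 + cos(theta_i) term accounts for the Tx leg
    along -z plus the projection onto the i-th Rx direction."""
    theta, phi = np.asarray(theta), np.asarray(phi)
    return np.column_stack([np.sin(theta) * np.cos(phi),
                            np.sin(theta) * np.sin(phi),
                            1.0 + np.cos(theta)])

def reconstruct(dL, theta, phi):
    """Map path-length changes dL (N x 3, one column per Rx antenna)
    to tracer positions (N x 3) by inverting the linear model."""
    M = transformation_matrix(theta, phi)
    return np.linalg.solve(M, np.asarray(dL).T).T
```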
III. HOW TO ENHANCE SPATIAL RESOLUTION?
Although extremely high temporal resolution can be achieved with the CW radar system, the spatial resolution relies strongly on the adjustment of the system, as well as on the calibration and reconstruction algorithms. There are three main sources of uncertainty in the calibration and reconstruction process: (i) fitting errors arising from the calibration process; (ii) reflection and multiple scattering from the surroundings; (iii) mismatch between the signals obtained by the I and Q channels. In the remainder of this section, details on how to handle the various sources of uncertainty, particularly the IQ mismatch, will be discussed. First, the antenna parameters determined from the calibration process are essential for the accurate reconstruction of the tracer trajectories. Following the algorithm described above, circular motion of the tracer particle in the calibration process leads to harmonic oscillations of L_i. Based on a first-order approximation [10], the tilt and azimuth angles θ_i and φ_i of the Rx. antennae can be determined from fitting. Note that although the antenna parameters can be directly measured by laser-assisted alignment tools, the outcome can only serve as a rough initial guess, since an accurate determination of the exact location where the electromagnetic waves are emitted is nontrivial. Thus, we need to apply the aforementioned fitting algorithm to a reference trajectory. We choose a circular trajectory in the horizontal plane (defined as the x–y plane) for the following reasons. (a) The center of the three-dimensional Cartesian system can be defined as the center of rotation, with the rotation axis pointing in the +z direction, along which the Tx. antenna is aligned. In other words, the coordinate system is defined by the calibration circle. (b) A circular trajectory with different radii R and rotation frequencies f is implemented with a Styrofoam tracer holder attached to a stepper motor. Styrofoam, which shares similar material properties with the granular sample, is chosen because it is transparent to EM waves yet rigid enough to support the tracers. Here, the tracer is directly embedded into the rotating arm. Concerning the accuracy of determining the antenna parameters, the circular trajectory has to be well constructed. Due to the low rigidity of Styrofoam, sources of error such as eccentricity of the trajectory due to the coupling with the motor shaft, or relative motion of the tracer with respect to the Styrofoam arm, may occur and lead to a higher relative error in the radius of the reference circle, which we consider to be one source of error. As shown in Fig. 2, the fitted tilt angle of Rx. antenna 1 deviates systematically from the expected value as R decreases. Note that the calibration outcome should not depend on the radius or angular frequency of the circular motion. The above comparison suggests that the radius has to be larger than ≈ 40 mm for a reliable determination of the antenna parameters in the current configuration.
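For reference, the harmonic fit used in the calibration can be sketched as follows. The model L_i(t) = C + R sin(θ_i) cos(2πft − φ_i) is what the first-order approximation predicts for a tracer on a horizontal circle of radius R; the function below is an illustrative transcription, not the authors' Matlab code.

```python
import numpy as np
from scipy.optimize import curve_fit

def calibrate_antenna(t, L_i, R, f):
    """Fit tilt and azimuth angles of one Rx antenna from the harmonic
    oscillation of L_i recorded during the circular calibration run."""
    def model(t, C, theta, phi):
        return C + R * np.sin(theta) * np.cos(2.0 * np.pi * f * t - phi)
    # the laser-based measurement provides the rough initial guess
    p0 = [np.mean(L_i), np.deg2rad(45.0), 0.0]
    (C, theta, phi), _ = curve_fit(model, t, L_i, p0=p0)
    return theta, phi
```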
Second, there always exist scattered signals from the surrounding environment; thus, it is necessary to use EM-absorbing materials to enhance the signal-to-noise ratio. In addition, microwave absorbers play an important role in isolating the system from the surrounding environment, as the Rx. antennae may respond to people walking around. In practice, the most efficient way of enhancing the signal is to record a background signal in the same configuration of the experimental setup and subtract it from the I and Q signals of all three channels. As shown in [10], this step is essential when mechanical components other than the tracer move periodically with time, as such movements may lead to strong distortions of the signals from the tracer. Note that the smallest size of tracer that can be tracked is, to a certain extent, associated with the signal-to-noise ratio; therefore, special care has to be taken to remove the background noise for a better performance of the tracking system. The third, and perhaps most serious, source of error arises from the strong IQ mismatch. This is partly owing to the fact that the range of tracer movement (up to 0.5 m) is of the same order of magnitude as the working distance (about 1.5 m); consequently, the fluctuations of both the I and Q signals are relatively strong, leading to unrealistic fluctuations in the reconstructed trajectory. As an example, Figure 3(b) shows the reconstructed trajectory of a tracer falling freely under gravity. The oscillations in the horizontal direction can sometimes approach the diameter of the tracer. One possible reason, in connection with the above description, is amplitude fluctuations in the IQ plane. The other possible reason is the offset of the IQ signals: the tracer movement corresponds to a phase shift in the IQ plane, which is determined by the corresponding tracer path (ideally, an arc). In either case, there will be a systematic deviation of the measured phase angle with respect to the ideal one.
In order to demonstrate the above analysis, we artificially introduce the two aforementioned sources of error into the raw signal. As shown in Fig. 4(a), the raw Q signal from Rx. antenna 1 oscillates as a function of time, reflecting the fact that the tracer moves away from the antennae as it falls freely along the z direction. The oscillation amplitude may fluctuate as time evolves, which constitutes one source of error. In the ideal case, one expects a constant amplitude (i.e., a constant radius in the IQ plane), as indicated by the red curve. Using the ideal signal for reconstruction, one obtains a constant x = 0, i.e., no unrealistic oscillations in the x direction. We then artificially add a small offset to the I signal, or multiply the Q signal by a factor slightly larger than 1 (i.e., introduce a slight distortion of the circular trajectory in the IQ plane). In either case, one observes clear oscillations in the x direction, demonstrating how IQ mismatch leads to unrealistic fluctuations in the reconstructed trajectory.
As Fig. 5 shows, the raw IQ signals are typically not ideal, in the sense that the I and Q signals are not always orthogonal to each other. This mismatch may arise from DC offsets of either the I or Q signal, or from gain and phase imbalance. How to correct such errors has been extensively discussed in, for instance, [11] and [12], particularly along with the development of telecommunication and non-invasive motion-detection techniques [13]. The distortions are typically attributed to device imperfections as well as clutter. However, for the system used here, there are additional errors arising from the mobility of the tracer itself, which cannot be readily corrected with an additional calibration of the hardware. Moreover, distortion may also arise from the interaction of the signal scattered from the tracer with that from objects not completely transparent to EM waves. In that case, the existence of 'mirrored' particles may lead to additional uncertainty.
Here, we use the following approach to correct the IQ mismatch arising from multiple sources of error. It works best when the object moves over a distance covering multiple wavelengths. As illustrated in Fig. 6, the correction process is composed of the following steps. First, we identify the time segment of the raw data V_raw that contains the movement of the tracer particle by finding the start and end of the fluctuations. Second, the peaks (red circles) and valleys (blue circles) of the individual fluctuations are determined by finding the local extreme values in the selected data. Third, the bias error V_bias [green line in (a)] is estimated as the mean value of the spline fits through the peaks and valleys (dashed lines). In order to avoid unrealistic extrapolations, the bias error starts to vary only from the first peak. Fourth, the bias error is removed and the corrected signal V_raw−bias is segmented at the zero crossings. Finally, the data in the individual segments are rescaled by the local maxima and minima to correct the gain mismatch.
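A compact transcription of these five steps might look as follows; the authors' implementation is in Matlab, so the peak-detection and spline choices here are plausible stand-ins rather than the original code.

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def correct_iq_channel(t, v):
    """Remove the time-dependent bias and gain errors of one raw
    I or Q channel, following the five steps described above."""
    v = np.asarray(v, dtype=float)
    pk = argrelextrema(v, np.greater)[0]           # step 2: peaks
    vl = argrelextrema(v, np.less)[0]              # step 2: valleys
    upper = CubicSpline(t[pk], v[pk])(t)           # step 3: envelopes
    lower = CubicSpline(t[vl], v[vl])(t)
    bias = 0.5 * (upper + lower)                   # step 3: bias estimate
    # (special handling before the first peak is omitted in this sketch)
    v0 = v - bias                                  # step 4: remove bias
    zc = np.where(np.diff(np.signbit(v0)))[0] + 1  # step 4: zero crossings
    out = v0.copy()
    for a, b in zip(np.r_[0, zc], np.r_[zc, len(v0)]):
        peak = np.max(np.abs(v0[a:b]))             # step 5: gain rescaling
        if peak > 0:
            out[a:b] /= peak
    return out
```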
As shown in Fig. 5, this approach can effectively find time-dependent correction factors due to the tracer movement. For the corrected data of channel 3 (dark blue circles), there exists a slight deviation from a perfect circle, indicating the existence of a small phase error. This presumably arises from the fact that perfect polarization cannot be achieved for all three Rx. antennae. Further investigations are needed to check whether this error can be avoided by using circularly polarized EM waves or by correcting the phase error between the I and Q signals in the Matlab program. After the correction of the IQ mismatch, the corresponding phase angles are obtained from φ = arctan(Q/I). Because φ is defined modulo 2π, a further correction of the phase jumps is needed to obtain the continuous phase shift Φ. In this step, a threshold is introduced to determine whether a phase jump occurs and in which direction it takes place. As the phase shift of the ith channel obeys Φ_i ∝ L_i, the variation of Φ with time (see the blue curve in Fig. 7) indicates that the target object initially moves slowly and accelerates while moving away from the antennae. As demonstrated by a comparison between the corrected phase shift Φ and the uncorrected Φ_uncorr, the aforementioned correction method can effectively reduce the unrealistic fluctuations [see also Fig. 3(b)] in the reconstructed curves. More quantitatively, the magnitude of the oscillations grows with the speed of the projectile and can reach ±1.5 cm; after correction, it is reduced to less than 1.0 mm. As shown in the inset of Fig. 7, the distance L_1 obtained from Rx. antenna 1 behaves exactly as expected: the object falls freely with growing velocity and bounces back when reaching the container bottom, suggesting that the coefficient of restitution, which measures the ratio of rebound to impact velocity, can be determined with the radar system. In comparison to the standard high-speed imaging technique [14], the radar tracking technique requires less data collection and processing effort.
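The threshold-based phase-jump correction can be written in a few lines; the sketch below is equivalent in spirit to numpy's built-in unwrap and is illustrative rather than the authors' code.

```python
import numpy as np

def unwrap_phase(phi, threshold=np.pi):
    """Restore the continuous phase shift Phi from the wrapped phase phi:
    a jump larger than the threshold between successive samples is taken
    as a 2*pi branch-cut crossing and compensated."""
    phi = np.asarray(phi, dtype=float)
    d = np.diff(phi)
    steps = (d < -threshold).astype(int) - (d > threshold).astype(int)
    offset = np.concatenate(([0.0], 2.0 * np.pi * np.cumsum(steps)))
    return phi + offset
```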
IV. VALIDATION OF CORRECTION ALGORITHM
The goal of this investigation is to explore the possibility of tracking an object moving inside a granular medium using microwave radar. We proceed in two steps. First, we focus on a spherical projectile falling in free space, without the presence of granular materials. We compare the reconstructed trajectories of the object released from different initial heights with the expected parabolic curve. As shown in Fig. 8, the falling curves agree well with the expected curve, demonstrating that, after a proper correction of the IQ mismatch, the radar system can be used for particle tracking. In particular, the tracer falls under gravity over a distance of up to 50 cm, which is about 1/3 of the working distance. After a proper correction of the signal fluctuations in both IQ channels, the correct information on the phase (i.e., distance) can be extracted to a satisfactory level. This outcome also suggests that the configuration built for releasing the tracer with minimal initial velocity, and for achieving a good signal-to-noise ratio, works well.
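The free-space check amounts to comparing the reconstructed vertical coordinate with the kinematics of free fall; a minimal sketch of that comparison (with illustrative names) is:

```python
import numpy as np

def freefall_deviation(t, z, g=9.81):
    """Deviation of a reconstructed drop z(t) from the expected parabola
    z(t) = z0 - 0.5*g*t**2, assuming release from rest at t = 0."""
    t = np.asarray(t, dtype=float)
    z = np.asarray(z, dtype=float)
    return z - (z[0] - 0.5 * g * t**2)
```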
Second, we replace the lower part of the free space with a granular medium composed of EPP particles (see Fig. 1), which are expected to be transparent to EM waves. Expanded polypropylene (EPP) particles (Neopolen, P9255) are used as the granular sample. As the particles are porous, with 95% air trapped inside, their dielectric constant (relative to vacuum) is 1.03, very close to that of air. Therefore, they are practically transparent to electromagnetic (EM) waves. As the snapshot in Fig. 9 shows, the EPP particles have an ellipsoidal shape with a length-to-width ratio of ≈ 1.4 and an average volume of 16.5 ± 2.7 mm³, where the uncertainty corresponds to the standard deviation of the volume distribution. Here, the volume is estimated assuming that the particles are prolate spheroids (i.e., a = b < c, with a, b, and c the semi-axes of an ellipsoid). The density of the particles is 92 kg·m⁻³, which can typically be tuned in the expansion process. Other material properties include: average particle weight 1.2 mg, bulk density 55 kg/m³, tensile strength 880 kPa, and thermal conductivity 0.035 W/(m·K).
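For reference, the quoted mean volume follows from the prolate-spheroid formula; back-solving with the quoted aspect ratio gives illustrative (not measured) semi-axes:

$$V = \frac{4}{3}\pi a^2 c, \qquad \frac{c}{a} \approx 1.4 \;\Rightarrow\; V \approx \frac{4}{3}\pi\,(1.4)\,a^3,$$

so $V \approx 16.5\ \mathrm{mm^3}$ corresponds to $a \approx 1.4\ \mathrm{mm}$ and $c \approx 2.0\ \mathrm{mm}$, i.e., particles roughly 2.8 mm wide and 4 mm long.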
Two intruder shapes are used for the test experiment: a sphere with a diameter of 1.45 cm and a cylinder with a diameter of 1.55 cm and a height of 3.10 cm, both made of Styrofoam with the tracer embedded. For each type of projectile, three free-falling experiments are conducted with the same initial falling height. As shown in Fig. 10, both projectiles follow the same trajectory before touching the surface of the granular medium, as expected. While penetrating, the cylindrical projectile experiences a slightly larger granular drag and consequently lands at a shallower depth than the spherical one. Note that the smoothness of the tracer trajectory while impacting on the granular layer demonstrates that the EPP particles chosen here are indeed transparent to EM waves and that there is no influence from possible multiple scattering of EM waves by granular particles. For the spherical projectile, the reconstructed trajectories of ten repetitions give rise to the same parabola in the free-falling region. This outcome validates the correction protocol introduced here. From the projectile trajectory inside a granular medium, one can obtain the acceleration as well as the total force acting on it.
Fig. 10: Reconstructed trajectories of projectile impact into a granular bed composed of EPP particles. The initial falling height is fixed at 50 cm with respect to the floor. The green, red, and blue data points represent the three scenarios: free fall into the empty container without granular filling, impact into a granular bed by a spherical tracer, and by a cylindrical tracer particle. The typical error bar (∼1 mm) of the position data is smaller than the symbol size. The vertical dashed line corresponds to the time when a projectile touches the surface of the granular medium.
Note that the mechanical properties of granular particles can be determined experimentally or taken from the specifications provided by the producers. Subsequently, this information can be used in numerical simulations with discrete element methods for a direct comparison with experimental results. Thus, the radar tracking technique can be used, in combination with numerical simulations, to explore the 'microscopic' origin of the drag force induced by granular particles. Interested readers may refer to [15] for a recent example.
V. CONCLUSION
To summarize, this investigation suggests that advances in radar tracking technology can be helpful in the investigation of granular dynamics. Using an X-band continuous-wave radar system, we are able to track a centimeter-sized metallic object in 3D, which enables, for instance, a measurement of the coefficient of restitution of the particle. In comparison to other particle-imaging techniques already used for granular particles [3], continuous-wave radar tracking has the advantages of high time resolution and low data collection and processing requirements. With the rapid development of radar technology, this approach is also expected to become more cost effective and accurate.
Moreover, we show that the accuracy of the radar tracking technique depends strongly on a proper correction of the IQ mismatch, which arises predominantly from the mobility of the tracer itself. A practical approach has been proposed to correct the instantaneously changing bias as well as the gain errors in the raw IQ signals. Finally, we validated this approach through an analysis of the reconstructed trajectories of projectiles falling under gravity in free space as well as impacting into a light granular medium. In comparison to our previous investigation [10], this approach enables more quantitative studies of an object moving in a three-dimensional granular medium.
Further investigations will focus on particle tracking with various types of granular materials, particularly on how to deal with distorted signals arising from multiple scattering by surrounding granular particles that are not completely transparent to EM waves. As the algorithm does not rely on the frequency band chosen, it would also be interesting to employ radar systems working in a higher frequency band to achieve better spatial resolution.
Development of starter cultures carrier for the production of high quality soumbala, a food condiment based on Parkia biglobosa seeds
1 Département Technologie Alimentaire, Institut de Recherche en Sciences Appliquées et Technologies, Centre National de la Recherche Scientifique et Technologique (DTA/IRSAT/CNRST), 03 BP 7047 Ouagadougou, Burkina Faso. 2 Laboratoire de Biochimie et d'Immunologie Appliquée (LaBIA), Département de Biochimie-Microbiologie, Université Joseph Ki-Zerbo, 03 BP 7021 Ouagadougou, Burkina Faso. 3 Department of Food Science, Faculty of Sciences, University of Copenhagen, Rolighedsvej 26, 1958 Frederiksberg C, Denmark.
INTRODUCTION
Fermented food condiments, obtained by the fermentation of protein-rich seeds, are well appreciated in Africa for their high nutritional value and organoleptic properties. In Burkina Faso, the best known of these fermented food condiments is soumbala, obtained by spontaneous alkaline fermentation of African locust bean (Parkia biglobosa) seeds (Parkouda et al., 2009). Soumbala is also well known and used in Côte d'Ivoire, Guinea, and Mali. It is known under different names, such as dawadawa/iru in Nigeria and Ghana (Onzo et al., 2014; Ajavi et al., 2015), afitin/sonru/iru in Benin (Azokpota et al., 2006), and nététu in Senegal (N'Dir et al., 2000).
The production of soumbala includes successive cleaning of the seeds, a first cooking that often lasts more than 24 h, dehulling of the cooked seeds, a second cooking lasting between 1 and 2 h, and then a spontaneous fermentation of 48–72 h (Sawadogo-Lingani et al., 2003). Species of the Bacillus subtilis group were identified as the dominant Bacillus involved in the spontaneous fermentation of soumbala (Ouoba et al., 2004). Despite increasing consumption in Burkina Faso today, soumbala still faces competition from imported seasonings. This is partly due to the use of unsuitable fermentation methods in traditional soumbala production, leading to a product of poor organoleptic and sanitary quality, sometimes resulting in the presence of pathogenic bacteria and biogenic amines (Parkouda et al., 2010).
Several studies carried out on soumbala and other fermented condiments of Burkina Faso have provided evidence on how to isolate and characterize Bacillus species with potential uses as starter cultures in controlled fermentation to improve its hygienic, nutritional, and organoleptic quality (Ouoba, 2003; Kaboré, 2012; Compaoré, 2013). However, the form in which these potential starter cultures can be easily transferred to soumbala production units has not yet been proposed. As a consequence, soumbala processing units still produce soumbala in the traditional way, with uncontrolled fermentation. The objectives of this study were therefore, on the one hand, to assess the possibility of using dehulled African locust bean seeds as a local carrier material for the transfer of Bacillus spp. starter cultures to soumbala production units and, on the other hand, to compare the biochemical and microbiological characteristics of soumbala prepared with starter cultures used singly or in combination.
African locust bean seeds
African locust bean seeds were purchased from a soumbala producer in Ouagadougou, Burkina Faso, stored in polypropylene bags, and kept in the pilot plant of the Département Technologie Alimentaire (CNRST/IRSAT/DTA) at room temperature.
Microorganisms
The starter cultures used in this study included two strains of B. subtilis (B7 and B9) isolated from soumbala, one strain of Bacillus amyloliquefaciens (I8) isolated from bikalga (fermented seeds of Hibiscus sabdariffa), and one strain of B. subtilis (B3) originating from maari (fermented seeds of Adansonia digitata). These strains were identified by molecular methods (Rep-PCR, ITS-PCR, M13-PCR, 16S rRNA and gyrB gene sequencing) and selected as starter cultures in previous studies based on their technological properties, among others proteolytic, saccharolytic, lipolytic, and antimicrobial activities (Ouoba et al., 2003a, 2003b, 2007; Kaboré et al., 2012; Compaoré et al., 2013b). All the strains were kindly provided by the microbiology laboratory of CNRST/IRSAT/DTA, where they were stored in a −80°C freezer.
Preparation of the carrier material
African locust bean seeds were first dried and cleaned by winnowing to remove light impurities. They were then dehulled using a mechanical dehuller (prototype CNRST/IRSAT, Ouagadougou, Burkina Faso, 1997). Following dehulling, the cotyledons were separated from the hulls by winnowing and manual sorting. The cotyledons were then collected for use as carrier for the production of the ferments.
Preparation of the inocula
The stock cultures were sub-cultured on Brain Heart Infusion (BHI) agar (Liofilchem, 610007, Italy) and incubated for 24 h at 37°C. From the BHI agar plates, the Bacillus strains were sub-cultured for 18 h at 37°C in 10 ml of BHI broth (Liofilchem, 610008, Italy). After incubation, the cultures were centrifuged at 5,000 × g for 10 min and the pellet resuspended in 5 ml of sterile diluent containing 8.5 g/l NaCl and 1.5 g/l peptone (Difco 218971, Becton Dickinson & Co, Sparks, MD, USA). The number of cells was then estimated by microscopy using a counting chamber (Neubauer, Wertheim, Germany), and dilutions were made in sterile diluent to obtain an inoculation rate of 10⁵–10⁶ cells/ml. Four different inocula were prepared, one each of B. subtilis B7, B. subtilis B9, B. subtilis B3, and B. amyloliquefaciens I8.
Cultivation of the starter cultures on the carrier material
The African locust bean cotyledons were weighed and washed before being boiled for 6 h. After cooking and draining, the cotyledons were distributed (500 g) into baskets and autoclaved at 121°C for 20 min. After cooling to 45 to 50°C, each basket was inoculated with a single inoculum (2% v/w) and left to ferment for 48 h at room temperature (35–38°C). One non-inoculated basket served as a control.
The fermented cotyledons from each basket were dried in an oven at 60 to 65°C for 24 h before being aseptically ground using an XPREP blender (Model MX1200XT11CE, USA). The resulting powder was aseptically packaged (5 g) in sterile plastic bags and stored at room temperature. Four ferments in powder form, FB7, FB3, FI8, and FB9, were then prepared from the inocula of B. subtilis B7, B. subtilis B3, B. amyloliquefaciens I8, and B. subtilis B9, respectively. Samples were collected after autoclaving, after inoculation (0 h of fermentation), at the end of the fermentation (48 h), and after grinding of the dried product, in order to determine the pH and the growth of Bacillus for each single-starter-culture fermentation batch. The experiment was performed on three separate occasions, and 16 samples were taken at each trial. In total, 48 samples were collected for microbiological analyses.
Production of soumbala with the ferments
The production of soumbala was carried out with non-dehulled African locust bean seeds following the traditional process described by Sawadogo-Lingani et al. (2003), with slight modifications as follows: the seeds were cleaned, cooked for 18 h, and dehulled manually with a mortar and pestle; the dehulled seeds were cooked again for 1 h, drained, distributed into baskets, autoclaved at 121°C for 20 min, and cooled to 45 to 50°C. Seven parallel fermentation batches were prepared as follows: batches 1, 2, 3, and 4, each containing 1 kg of sterilized cotyledons, were inoculated with 5 g of a single ferment (FB7, FB9, FB3, and FI8, respectively); batches 5, 6, and 7, each containing 2 kg of sterilized cotyledons, were inoculated with 10 g of mixed ferments (FB7+FB3, FB7+FI8, and FB7+FB9). The batches were then left to ferment for 48 h at room temperature (35–38°C). Traditional, spontaneously fermented soumbala was prepared according to Sawadogo-Lingani et al. (2003) to serve as a control (batch 8). After the fermentation, the fermented cotyledons were sun-dried and kept in a dry place. Samples were collected at 0 h, at the end of the fermentation (48 h), and after drying. The different types of soumbala produced were: (1) soumbala with a single ferment: SB7, SB3, SI8, and SB9; (2) soumbala with mixed ferments: SB7+B3, SB7+I8, and SB7+B9; (3) spontaneously fermented soumbala: SN.
The experiment was conducted in triplicate and 24 samples were taken at each assay. In total 72 samples were collected for microbiological analyses. Physicochemical analyses were performed only on the final dried products (24 samples).
Microbiological analyses
For all samples, 10 g were aseptically homogenized with 90 mL of sterile diluent using a stomacher (Stomacher 400 lab blender, England) at normal speed for 2 min to obtain the 10⁻¹ dilution (ISO 6887-1, 2017). Serial dilutions were made from the homogenate using 9 mL of sterile diluent. From appropriate ten-fold dilutions, Bacillus strains were enumerated by the pour-plate technique using BHI agar incubated aerobically at 37°C for 72 h. After incubation, plates with 15 to 300 colony-forming units (CFU) were counted (ISO 4833, 2003) and the results expressed as Log CFU/g.
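For clarity, the plate-count arithmetic behind the "Log CFU/g" figures can be illustrated as follows; the numbers here are made up for the example and are not from the study.

```python
import math

colonies = 150        # plate count within the accepted 15-300 CFU window
dilution = 1e-6       # grams of original sample per ml of plated dilution
plated_ml = 1.0       # volume poured per plate (ml)

cfu_per_g = colonies / (plated_ml * dilution)   # 1.5e8 CFU/g
log_cfu_per_g = math.log10(cfu_per_g)           # about 8.18 Log CFU/g
```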
Biochemical analyses
Ten grams of sample were homogenized with 20 mL of distilled water in a stomacher bag for 1 min at normal speed. The pH of the homogenate was determined using an electronic pH meter (Hanna, Romania) calibrated with standard buffer solutions pH 4.0 and 7.0.
Moisture content was determined by drying the sample at 105 ± 2°C for 12 h according to ISO 712 (2009); total ash content was determined by incineration in a muffle furnace (Nabertherm, Germany) at 550°C for 4 h, according to ISO 2171 (2007); crude protein content (N × 6.25) was determined by the Kjeldahl method after acid digestion (AFNOR NF V03-050, 1970); crude fat content was determined with a Soxhlet apparatus using n-hexane according to ISO 659 (1998). Total carbohydrate content was determined by a spectrophotometric method at 510 nm using sulfuric orcinol as the reagent (Montreuil and Spik, 1969). For amino acid profile determination, samples were first defatted using the Soxhlet method (ISO 659, 1998). The amino acid profile was obtained by high-performance liquid chromatography (HPLC) using the Waters PICO-TAG method (Kristofferson, 2011), which consists of three steps: hydrolysis of the samples, pre-column derivatization, and reversed-phase HPLC analysis. The identification and quantification of the different amino acids were performed with the Empower software by comparing the retention times obtained with those of the standards.
Statistical analysis
All data were subjected to analysis of variance (ANOVA) with the statistical software XLSTAT-Pro 7.5.2, and the means were compared using Fisher's test for post-hoc comparisons at a probability level of p < 0.05. The curves and standard deviations were obtained using Microsoft Excel 2007.
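As an aside, a comparable analysis can be reproduced with open-source tools; the sketch below uses a one-way ANOVA followed by unadjusted pairwise t-tests as an LSD-style stand-in for Fisher's procedure (the study itself used XLSTAT-Pro, and this is illustrative only).

```python
from scipy import stats

def compare_means(groups, alpha=0.05):
    """One-way ANOVA across sample groups; if significant, flag the
    pairwise differences (LSD-style, unadjusted t-tests)."""
    f_stat, p = stats.f_oneway(*groups)
    pairs = []
    if p < alpha:
        for i in range(len(groups)):
            for j in range(i + 1, len(groups)):
                _, p_ij = stats.ttest_ind(groups[i], groups[j])
                pairs.append((i, j, p_ij < alpha))
    return f_stat, p, pairs
```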
Microbial growth during the production of ferments
No microbial growth was observed after autoclaving of the African locust bean cotyledons for any sample (results not shown). Figure 1 shows the growth ability of the different starter cultures inoculated singly into the cooked dehulled African locust bean seeds. At the onset of the fermentation (t = 0 h), Bacillus counts ranged between 4.33 and 4.87 Log CFU/g. At the end of the fermentation (t = 48 h), a significant increase in Bacillus load was observed in all samples, with values ranging between 7.11 and 9.70 Log CFU/g. However, the starter cultures showed variable ability to grow in the cooked dehulled seeds of P. biglobosa. Bacillus amyloliquefaciens I8 (originating from bikalga) counts were the lowest, followed by B. subtilis B3 (originating from maari). In contrast, B. subtilis B7 and B. subtilis B9 (originating from soumbala) yielded the highest counts. In the final dried and ground products (ferments), the Bacillus counts had increased to between 8.21 and 10.37 Log CFU/g, and the highest microbial counts were observed for the ferments prepared with B. subtilis B9 and B7.
Microbial growth during controlled fermentation of soumbala using single and mixed ferments
The growth of the Bacillus ferments inoculated singly during the production of soumbala is shown in Figure 2: at the end of the fermentation, counts were between 7.92 and 9.54 Log CFU/g. A further increase in Bacillus counts, to between 8.38 and 10.11 Log CFU/g, was observed in the final dried products. The highest Bacillus counts were obtained with the ferments of B. subtilis B9 (10.11 Log CFU/g) and B. subtilis B7 (9.84 Log CFU/g). Regarding the soumbala obtained by spontaneous fermentation (SN), its Bacillus load after drying was 8.93 Log CFU/g. Figure 3 shows the growth capacity of the Bacillus ferments used in mixture during the controlled fermentation of soumbala. Used in mixture, the starter cultures' loads increased from 7 Log CFU/g at the onset of the fermentation to 8 Log CFU/g at its end, while for the spontaneous fermentation, the microbial population increased from 2 to 8 Log CFU/g. After drying, the Bacillus counts were between 9.61 and 9.78 Log CFU/g for soumbala produced with ferments and 8.93 Log CFU/g for spontaneous soumbala.
Proximate composition of soumbala produced with ferments of starter cultures
Results in Table 1 show the proximate composition of the different samples of soumbala. The pH of soumbala produced with ferments of starter cultures varied from 7.17 ± 0.01 to 7.25 ± 0.11 for single-culture soumbala and from 7.31 ± 0.04 to 7.37 ± 0.02 for mixed-culture soumbala, while soumbala from spontaneous fermentation had a pH of 7.21 ± 0.01. The lowest pH value (7.15 ± 0.01) was obtained with the single starter culture B. amyloliquefaciens I8 (originating from bikalga), while the highest pH (7.37 ± 0.02) was recorded with the combination of B. subtilis B7 and B9 (originating from soumbala). There was no significant difference (p > 0.05) between the pH values of the different soumbala produced using combined starter cultures. The moisture content of the controlled-fermentation dried soumbala ranged between 5.67 ± 0.51 and 8.46 ± 0.67%, whereas the spontaneously fermented dried soumbala showed a moisture content of 6.19 ± 0.39%. Dried soumbala obtained with the mixed starter culture SB7+I8 showed the highest moisture content (8.46 ± 0.67%). The ash contents of single-culture and mixed-culture soumbala ranged from 1.95 ± 0.16 to 2.11 ± 0.07%/DM and from 1.77 ± 0.05 to 1.90 ± 0.04%/DM, respectively, while the ash content of spontaneous soumbala was 2.11 ± 0.02%/DM. Soumbala obtained with the mixed culture of B. subtilis B7 and B. amyloliquefaciens I8 showed the lowest ash content (1.77 ± 0.05%/DM). Analyses showed no significant difference (p > 0.05) between the ash content of spontaneous soumbala and those of soumbala from starters B7 (SB7), I8 (SI8), and B9 (SB9). Protein content varied between 41 and 43%/DM, with spontaneous soumbala giving the lowest value (41.19 ± 0.89%/DM) and mixed-culture soumbala SB7+I8 the highest (43.78 ± 0.13%/DM). There were significant differences (p < 0.05) between the protein contents of all soumbala samples. The crude fat content ranged from 37.46 ± 0.30%/DM (obtained with SB7+B3) to 40.67 ± 0.17%/DM (obtained with SB3) for soumbala produced with starter cultures (Table 1: Proximate composition of spontaneous soumbala and soumbala produced using ferments in single or mixed culture).
Amino acid profiles of soumbala produced with ferments of starter cultures
Amino acid profiles (in g/100 g DM) of soumbala produced with ferments of Bacillus starters, as well as of soumbala from spontaneous fermentation, are presented in Table 2. The different soumbala presented variable contents of essential amino acids. The highest contents of valine (1.038), leucine (1.138), isoleucine (0.772), phenylalanine (0.722), tyrosine (1.064), and proline (0.641) were obtained with soumbala fermented using the ferment produced with starter B7. However, these amino acids were found at low concentrations in soumbala produced with the combination of starter cultures B7 and B9. The soumbala SI8 presented the lowest histidine content (0.102), while the highest content was observed for soumbala SB9 (0.208).
Threonine, methionine, and alanine were found at the highest concentrations in soumbala SB7+I8, with respective values of 0.109, 0.077, and 0.506. Regarding lysine, the highest content was obtained with SB7 (0.791), while the lowest was found in SI8 (0.520).
DISCUSSION
The increase in Bacillus loads during the production of the ferments indicates that the starter cultures used in this study are able to use cooked African locust bean cotyledons as a substrate for their growth. However, the fermentation capacity varied among the strains. The highest loads, observed with starters B7 and B9, are probably due to the fact that these strains were previously isolated from the fermentation of the same substrate and are therefore better able to use it for their growth. Indeed, the autochthonous character of these starters favors their implantation during the fermentation process (Fessard, 2017). The low concentrations of Bacillus in the ferments prepared with starter cultures B3 and I8 may be explained by their non-autochthonous character; African locust bean cotyledons may therefore not be a favorable substrate for their growth. The Bacillus loads (9.63–9.70 Log CFU/g) found in the ferments prepared with starter cultures B7 and B9 were close to those of Agbobatinkpo et al. (2012), who also found Bacillus loads of 9.7 Log CFU/g in yanyanku and ikpiru, two food additives (obtained by the fermentation of H. sabdariffa seeds) used in Benin for the fermentation of African locust bean seeds into sonru and iru. In addition, during the controlled fermentation of afitin with B. subtilis starter cultures, the maximum load of Bacillus after fermentation was 9.5 Log CFU/g (Ahonoukoun, 2014). The increase in Bacillus load during the controlled fermentation of soumbala with the ferments demonstrates the fermentation capacity of these ferments. However, the ferments produced with starter cultures B7 and B9 demonstrated the strongest fermentation capacity, with the highest loads when used in monoculture (9.84–10.11 Log CFU/g) and in mixed culture (9.79 Log CFU/g). Similar ranges of bacterial counts in soumbala or similar products have been reported previously (Sawadogo-Lingani et al., 2003; Parkouda et al., 2009; Amoa-Awua et al., 2014; Ajavi et al., 2015; Guissou et al., 2020).
In this study, the pH of soumbala produced with ferments of starter cultures was alkaline, like that of traditional spontaneous soumbala. This result is in agreement with those recorded for similar African fermented condiments by other authors (Azokpota et al., 2006; Akabanda et al., 2018; Mohammadou et al., 2018; Ibrahim et al., 2018). This alkaline pH is due to the proteolytic activity of the fermenting microorganisms, which degrade proteins and release ammonia into the medium (Mohammadou et al., 2018). The results reported here corroborate those of Agbobatinkpo et al. (2012) in Benin who, while studying the fermentation ability of yanyanku and ikpiru, found average pH values ranging between 7.1 and 7.3 for African locust bean cotyledons fermented with or without additives. However, Sawadogo-Lingani et al. (2003) and Guissou et al. (2020), during spontaneous fermentation of P. biglobosa seeds to produce soumbala, found lower pH values in the dried products. The low water content observed in the various soumbala favors their preservation (Ajavi et al., 2015).
The ash content obtained for the different soumbala (1.77–2.11%) was lower than the values of 2.6 to 3.2% found by Agbobatinkpo et al. (2012). This difference could be explained by the addition of an ash solution during the preparation of the additives yanyanku and ikpiru used for the fermentation of P. biglobosa seed-based condiments in Benin, or by a difference in the ash content of the seeds used in each country. The spontaneous soumbala as well as the soumbala produced with ferments of starter cultures also presented lower ash concentrations than those recently reported by Guissou et al. (2020).
The protein levels obtained in this study were higher than those obtained for sonru and iru fermented with yanyanku and ikpiru additives, whose average was 35% (Agbobatinkpo et al., 2012). The variation in protein content may be due to the proteolytic activity of the fermenting strains (Mohammadou et al., 2018) and also to differences in the physicochemical composition of African locust bean seeds among localities. The results demonstrated that both the controlled-fermentation and the spontaneously fermented soumbala were rich in proteins (>40%). Therefore, soumbala could be a protein source that helps poor populations meet their requirement for this nutrient, particularly in developing countries. High protein contents were also noted for other alkaline-fermented products and were related to Bacillus counts (Terlabie et al., 2006; Mohammadou et al., 2018).
The different soumbala prepared with the ferments of Bacillus spp. presented interesting fat contents (37–40%). The fat content of soumbala prepared using a single ferment is comparable to the 40.47% reported by Guissou et al. (2020). The carbohydrate contents found are also in agreement with those reported by Guissou et al. (2020).
The results showed that soumbala produced with ferments of Bacillus spp. contained more essential amino acids than traditional spontaneous soumbala. Soumbala produced using starter culture B7 had the highest levels of valine, leucine, isoleucine, and phenylalanine. Similar results were previously obtained by Ouoba et al. (2003b) for soumbala produced by controlled fermentation using the same B. subtilis strain as starter culture. As reported by the same authors (Ouoba et al., 2003b), it was also found that soumbala produced with starter culture B9 contained a higher histidine content than soumbala produced with starter culture B7.
The presence of high amounts of lysine is particularly interesting because lysine is a limiting amino acid in the cereals and seeds that constitute the staple diet of the majority of African populations. The soumbala produced with this starter culture could therefore be used to fortify foods. The presence of non-essential amino acids such as tyrosine, proline, and glycine at significant levels in certain samples is also of interest, since these amino acids can become essential under some human physiological circumstances (Ouoba et al., 2003b). Variable concentrations of amino acids in African fermented condiments have been reported in other studies (Parkouda et al., 2015; Akabanda et al., 2018; Ibrahim et al., 2018).
Conclusion
In the present study, four Bacillus strains (B. subtilis B7, B. subtilis B9, B. subtilis B3, and B. amyloliquefaciens I8), previously isolated from the spontaneous fermentation of three different condiments and selected as starter cultures, were successfully grown on dehulled African locust bean seeds used as a carrier material to produce ferments. These ferments were used separately or in combination to control the fermentation of African locust bean seeds into soumbala, which presented interesting microbiological and nutritional characteristics. The results obtained indicate that dehulled African locust bean seeds are a promising carrier material for the transfer of Bacillus starter cultures to soumbala production units. These results may help to standardize the soumbala production process as well as its quality. However, further investigations are needed to evaluate the performance of these ferments in a real production environment and to assess their stability during storage.
Applied Machine Learning Algorithms for Courtyards Thermal Patterns Accurate Prediction
Currently, there is a lack of accurate simulation tools for the thermal performance modeling of courtyards due to their intricate thermodynamics. Machine Learning (ML) models have previously been used to predict and evaluate the structural performance of buildings as a means of solving complex mathematical problems. Nevertheless, the microclimatic conditions of building surroundings have not been as thoroughly addressed by these methodologies. To this end, this paper proposes the adaptation of ML techniques as a more comprehensive methodology to fill this research gap, covering not only the prediction of the courtyard microclimate but also the interpretation of experimental data and pattern recognition. Accordingly, based on the climate zoning and aspect ratios of 32 monitored case studies located in the south of Spain, the Support Vector Regression (SVR) method was applied to predict the measured temperature inside the courtyard. The results provided by this strategy showed good accuracy when compared to monitored data. In particular, for two representative case studies, if the daytime slot with the highest urban overheating is considered, the relative error is mostly below 0.05%. Additionally, the values of the statistical parameters are in good agreement with other studies in the literature that use more computationally expensive CFD models, and show more accuracy than existing commercial tools.
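As a rough illustration of the kind of model described above, an SVR regressor mapping climate zone and aspect ratio to courtyard air temperature could be set up as below; the feature values, hyperparameters, and data are placeholders, not the study's actual pipeline.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# placeholder training data: [climate zone code, aspect ratio] -> temp (degC)
X = np.array([[1, 0.5], [1, 1.2], [2, 0.8], [2, 2.0], [3, 1.5]])
y = np.array([31.2, 27.8, 29.5, 25.9, 26.4])

model = make_pipeline(StandardScaler(),
                      SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, y)
print(model.predict([[2, 1.0]]))   # predicted courtyard temperature
```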
Introduction
According to the latest forecasts, two trends will become progressively reinforced over the present century. The first is the gradual increase in average surface temperatures, mainly due to global greenhouse gas emissions [1]. The second is the concentration of the population in cities [2]. This combination of rising temperatures and high population concentration will accentuate other environmental problems related to human thermal comfort, such as the so-called Urban Heat Island (UHI) effect. Urban Heat Islands (UHIs) are defined as urban areas with higher air temperatures than their surrounding rural areas [3]. The causes of the UHI effect are classified by Givoni [4] as due either to meteorological factors or to urban parameters [5].
Several urban dynamics converge to generate this overheating. Apart from domestic and industrial anthropogenic impacts, some of these factors are related to built-up topography and urban features: firstly, the constructed zones offer more surface area for heat absorption, radiating it slowly during the night; secondly, the canyon effect [6], which causes the thermal energy to remain near the ground through multiple horizontal reflections and the absorption of incoming radiation provoked by tall buildings. UHI is also linked to a capsule of city gases that absorbs heat from the sun; in the city, buildings obstruct the wind and the capsule remains in place [7]. Finally, there is the urban albedo, which could be defined as the aptitude of construction materials to reflect solar radiation [8].
Considering the need to achieve the medium-term goal of nearly zero-energy buildings and cities [9], different passive strategies have been evaluated to counteract this urban overheating without resorting to energy-dependent cooling systems [10]. Like other animal colonies, cities are usually adapted to the climate as kinds of human termite mounds, perforating the urban fabric to regulate direct solar radiation. On a different scale than other public spaces, such as urban canyons and squares, courtyards have traditionally acted as passive cooling resources in cities around the world, and not exclusively in hot and warm climates. One study on low-rise housing in the Netherlands shows how courtyards improve the energy efficiency of the building [11]. Previous research performed on courtyards in Spain has quantified the courtyard tempering effect, which enables improving thermal comfort and helps to reduce cooling energy consumption in buildings [12]. Due to the growing interest in strategies capable of achieving more climate-resilient cities, many studies have examined the microclimatic performance of the courtyard. Furthermore, several literature reviews compiling research on this topic have been published [13][14][15]. The courtyard microclimate can be explained in terms of the thermodynamic effects that occur within it, i.e., convection, radiation, stratification, and flow patterns. Among the different parameters affecting these microclimatic conditions, most of the studies emphasize the importance of courtyard geometry [13][14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29], in many cases expressed through the Aspect Ratio (AR), which is the ratio between the height and the width of the courtyard.
The perforation of the urban block with courtyards responds to light, ventilation, and thermal needs. Different field monitoring campaigns in the existing literature have proved the thermal tempering potential of courtyards to lower the outdoor temperature, in some cases by up to 15 °C [23]. Many simulation methods and tools are currently available for the thermal performance modeling of indoor spaces [22]. Notwithstanding, the alternatives for simulating outdoor ones are more limited. This is mainly due to the complexity of these outdoor spaces' thermodynamics, which involves multiple variables and entails enormous challenges to be modeled with enough accuracy. However, new software tools have emerged in recent years that are capable, to some extent, of simulating their microclimatic conditions [31]. One of the most widely used tools is ENVI-met, based on CFD simulation [32]. Other outdoor modeling software alternatives are Urban Weather Generator (UWG), based on energy conservation principles [33]; SOLWEIG, which can simulate spatial variations of 3D radiation fluxes and mean radiant temperatures [34]; OpenFOAM, which has been used in previous research to simulate urban wind flows [35]; FreeFem++, employed to perform courtyard microclimate modelling [17,36]; and ANSYS Fluent, which has been applied for the simulation of wind flows in outdoor spaces [17,36]. Most of these tools present adequate accuracy for predicting urban outdoor microclimates, but they tend to show a larger error range when they are used to model the microclimate of smaller-scale spaces, such as courtyards, with greater dependence on the built environment [12,36].
Consequently, in this work, a new tool is proposed to predict this specific microclimate inside courtyards based on Machine Learning (ML) techniques [37,38]. In the era of computers and information, a large amount of data are being generated in many different fields, such as science, finance, engineering, and industry. Thus, statistical problems have grown in size and complexity, and statistical analysis tries to understand these data. This is what is called learning from data, or ML. Some examples of ML problems are the following: predict the price of a stock 6 months from now, based on company performance measures and economic data; identify the numbers in a handwritten ZIP code from a digitized image; or estimate the amount of glucose in the blood of a diabetic person from the infrared absorption spectrum of that person's blood [38]. ML models have been shown previously to be useful for predicting and assessing structural performance [39].
ML problems are categorized as supervised or unsupervised. In supervised learning, the aim is to predict the value of an outcome measure based on a certain number of input measures (also known as features, attributes, or covariates). It is called supervised because of the presence of the outcome variable to guide the learning process. In unsupervised learning, there is no outcome measure, and the goal is to describe the associations and patterns among a set of input measures. Mathematical optimization has played a crucial role in supervised learning [37][38][39][40]. Support Vector Machine (SVM) and Support Vector Regression (SVR) are some of the main applications of mathematical optimization for supervised learning [41][42][43][44][45][46]. These are geometrical optimization problems that can be written as convex quadratic optimization problems with linear constraints, solvable by some nonlinear optimization procedure.
The present paper's main goal is to implement the ML methodology as a suitable and accurate system for predicting courtyard thermal patterns. To achieve this, the most relevant features regarding courtyards' thermoregulatory performance according to the literature, i.e., geometry and outdoor temperature, have been considered. The advantages of using ML techniques over conventional modeling tools are twofold: on the one hand, they allow the identification of the fundamental variables, simplifying the calculation processes; on the other hand, they are perfectible methodologies that make it possible to increase the accuracy of predictions by feeding back monitored data and increasing the size of the training dataset. In fact, despite presenting work based on an extensive set of field-monitoring campaigns, the number of case studies monitored could be considered an initial limitation of the study. Nevertheless, the proposed methodology achieves an accuracy level comparable to, and in some cases superior to, other outdoor thermal modeling methods. The overall structure of the paper can be framed in a three-phase procedure. Firstly, the case studies used to validate the thermal predictions are selected and monitored. Secondly, simulations based on SVR are performed and correlated by employing MATLAB interpolations. Finally, different error ranges are verified and compared with the simulation errors of other tools in predicting the thermal patterns of the courtyard microclimate. Note that the interpolation technique is applied when the characteristic parameters are within an appropriate range, defined by the training data. Outside of this range, other prediction techniques are needed.
Materials and Methods
Regarding the application of the ML methodology, it was sequenced into four steps. First, the reference case studies are defined and characterized. Second, the field monitoring campaigns are characterized. Third, the problem setup is detailed. Fourth, the SVR method is described. In particular, the following variables are considered: time (hours), outside courtyard measured temperature (CMT), and wind speed and direction, with the aim of searching for a function that provides the temperature inside the courtyard throughout the week. This problem was solved using the statistical software R. Finally, using the library of predicted data obtained from the ML method, the measured temperature inside a given courtyard is predicted, based on its climate zone, the season of the year, and its ARs. This is done in two phases by an interpolation technique implemented in the scientific software MATLAB.
Location, Climate and Cases Study
In this research, the thermal performance of 22 selected courtyards, in a total of 12 different locations and three different Thermal Ranges (TR), is analyzed as a set of case studies.
The study was carried out in Mérida (Badajoz, Spain), Córdoba (Córdoba, Spain) and Seville (Seville, Spain), located in south-western Spain. All three cities are characterized by a hot climate in summer and a mild climate in winter. The specific Spanish regulations CTE-DB-HE [47] characterize them as C4, B4, and B4, respectively. The letter (A-D) represents the winter climate severity, ranging from A for mild temperatures to D for very cold climates, and the number (1-4) represents the summer climate severity, ranging from 1 for mild climates to 4 for very hot climates.
The selected case studies are intended to be analyzed in the warm season, so they all belong to the same climatic zone in summer. According to the Köppen classification, the selected cities are defined as Csa, with very hot, dry summers with low rainfall. Many case studies were analyzed over an extended period, always exceeding the minimum two-week monitoring period established by previous research [48].
Previous studies have shown the influence of outdoor temperature and geometry on the thermal tempering potential of the courtyard [49] and its thermal sensation [50], so for this research, a selection of case studies with different outdoor temperatures and different ARs (defined in Section 2.4) is analyzed. Accordingly, two values, ARI and ARII, are defined. In Table 1, the main characteristics of the case studies selected for this research are shown, including the longitude, latitude and meters above sea level (MASL) of each case study.
Field Monitoring Campaign
As previously mentioned, in this research, numerous monitoring campaigns have been carried out in courtyards with diverse geometries (AR) and with different outdoor temperatures (TR). For both boundary conditions, AR and TR, the selected ranges are based on previous studies [23].
Some campaigns were carried out over several months to select similar outdoor temperature ranges in all case studies. One week was selected as a representative sample for each courtyard. During the monitoring campaigns, outdoor climatological parameters were analyzed and, simultaneously, the temperature inside the courtyards was recorded. According to the U.S. National Weather Service [51], the dry-bulb temperature (DBT) can be measured using a normal thermometer freely exposed to the air but shielded from solar radiation and moisture. The thermometer will be affected by thermal radiation from the courtyard walls, so we refer to the DBT as the Courtyard Measured Temperature (CMT) rather than the air temperature. For the outdoor environment analysis, portable weather stations (model PCE-FWS 20) were used, the technical data of which are shown in Table 2. The weather station was located on the roof of the building, fully exposed, with no nearby high-rise buildings that could affect data collection. Data such as the outside measured temperature and the wind speed and direction were recorded at a measurement interval of 15 min.
Simultaneously, the temperatures in the courtyards of the selected case studies were recorded with sensors (models TESTO 174 H and TESTO 174 T), whose technical data are shown in Table 2. The sensors were placed vertically, suspended from the roof of the building on the north-facing façade of the courtyard, so that solar radiation would not influence the results. In addition, they were protected with a reflective shield to prevent overheating and to allow ventilation of the measuring equipment (Figure 1). As the measured temperature varies throughout the courtyard due to several factors, including stratification and infrared radiation, all the sensors were placed at +1.00 m and +2.00 m, the heights of the courtyard occupied by users.
Problem Setup
In this article, the variables selected to predict the value of the temperature inside a courtyard are the two most relevant according to the literature review, namely, the TR (considering the climate zone of the location and the season of the year) and the AR (a numerical parameter synthesizing the courtyard geometry). To perform the modeling, two stages were considered.
In the first stage, work was carried out with hourly data covering one week, from different periods of the year, for 22 monitored courtyards located in the Spanish cities of Badajoz, Córdoba and Seville.
The SVR method was used to create the library with some of these training data along one week in different courtyards. After that, courtyards with characteristic parameters, such as ARI and ARII, that are not included in the training data are considered, and interpolation techniques are used to obtain the prediction for a week.
Support Vector Regression Method
Support Vector Machines (SVMs) were introduced in the 90s by Vapnik and his collaborators [45] in the framework of statistical learning theory. Although SVMs were originally conceived to solve binary classification problems, they are currently used to solve various types of problems, including the regression problems [44] on which this research has focused.
In this first stage, the predicted value of the measured temperature inside a courtyard was obtained using information related to it. In particular, the time (hour of the day), the outside CMT, and the wind speed and direction were considered. More specifically, samples $(x_i, y_i)$, $i = 1, \dots, n$, were considered, where $x_i \in \mathbb{R}^4$ collects the time, the outside CMT, and the wind speed and direction of the $i$-th observation, and $y_i$ is the measured temperature inside the courtyard. The goal in this step was to search for a function $f : \mathbb{R}^4 \to \mathbb{R}$ such that $f(x)$ provides the temperature inside the courtyard.
To find this function for each courtyard, the collection of experimental data $\{(x_i, y_i)\}_{i=1}^{n}$ associated with it was used. The idea of the SVR method [6] is to obtain a function $f$ such that for every sample $(x_i, y_i)$, $i = 1, \dots, n$, it is satisfied that $|y_i - f(x_i)| \le \varepsilon$, for some small $\varepsilon > 0$. Concretely, given $C > 0$ and $\varepsilon > 0$, the following optimization problem is considered:

$$\min_{w, b, \xi, \xi^*} \; \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} (\xi_i + \xi_i^*) \quad \text{s.t.} \quad y_i - \langle w, \phi(x_i) \rangle - b \le \varepsilon + \xi_i, \quad \langle w, \phi(x_i) \rangle + b - y_i \le \varepsilon + \xi_i^*, \quad \xi_i, \xi_i^* \ge 0.$$

This problem was solved using the statistical software R. In particular, we used the e1071 library [52], a software package designed to solve classification and regression problems using Support Vector Machines, which can be easily installed in R. The solution provides a possible candidate function of the form

$$f(x) = \sum_{i=1}^{n} (\alpha_i - \alpha_i^*) K(x_i, x) + b,$$

where the constant $b \in \mathbb{R}$ can be computed by forcing the Karush-Kuhn-Tucker (KKT) conditions [53]. The function $K(x, x') = \exp(-\gamma \|x - x'\|^2)$ is called the radial basis kernel. It holds that $|y_i - f(x_i)| \le \varepsilon$ for all $i = 1, \dots, n$. The quality of the function $f$ depends on the choice of the parameters $\gamma$, $C$ and $\varepsilon$. In order to select the best parameters, the Cross-Validation (CV) technique was used, which yielded the parameter values γ = 0.1, C = 10 and ε = 0.1, with a CV error around 1% for all test cases.
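A minimal R sketch of this first-stage fit, using the e1071 library as described above, is shown below. The data frame and column names (courtyard_df, inside_cmt, etc.) are illustrative placeholders rather than the study's actual variable names.

```r
library(e1071)

# courtyard_df is assumed to hold one week of records for one courtyard:
# hour of day, outside CMT, wind speed, wind direction, and the inside CMT.
fit <- svm(inside_cmt ~ hour + outside_cmt + wind_speed + wind_dir,
           data = courtyard_df,
           type = "eps-regression",
           kernel = "radial",            # K(x, x') = exp(-gamma * ||x - x'||^2)
           gamma = 0.1, cost = 10, epsilon = 0.1)

# Cross-validation over a small grid, as used to select gamma, C and epsilon
tuned <- tune.svm(inside_cmt ~ hour + outside_cmt + wind_speed + wind_dir,
                  data = courtyard_df,
                  gamma = c(0.01, 0.1, 1),
                  cost = c(1, 10, 100),
                  epsilon = c(0.01, 0.1, 0.5))

# Predicted temperature inside the courtyard for the training week
pred <- predict(fit, courtyard_df)
```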
Predicted Temperature of a Courtyard
In the first stage, through the monitoring data and the SVR method, a library of predicted temperatures inside various courtyards located in different cities of the south of Spain was obtained. In this second stage, by using this library, the predicted temperature inside a given courtyard was obtained.
In this section, given that the definition of the AR is two-dimensional, two ARs were measured: the first one, ARI, defined as the ratio between the height and the width, and the second one, ARII, defined as the ratio between the height and the length, as follows:

$$AR_I = \frac{h}{w}, \qquad AR_{II} = \frac{h}{l},$$

where $h$ is the maximum height, $w$ represents the width and $l$ the length of the courtyard.
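As a hypothetical worked example (dimensions invented for illustration), a courtyard 9 m high, 6 m wide and 12 m long would have ARI = 9/6 = 1.5 and ARII = 9/12 = 0.75, which would place it in the ARI.2 and ARII.1 classes defined below.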
Once both ARs were fixed, the temperature inside a given courtyard was predicted in two different ways. First, the courtyards library was classified considering three different TRs, depending on the range of temperatures of the courtyards, and an interpolation technique was used to predict the temperature inside a courtyard of the same class by using the ARs data, as explained in Section 2.5.1.
Second, the courtyards library was classified into different groups, depending on the courtyards' AR range, and an interpolation technique was used to predict the temperature inside a courtyard of the same class by using the maximum and minimum temperature data. Two cases (AR.1 and AR.2) were considered: first, the classification was performed by considering ARI, and second, by considering ARII.
Fixed Temperature Range, Interpolation Using the ARs
In this case, the courtyards library was classified into three different TRs, depending on the range of temperatures inside the courtyard. These TRs correspond to statistical climatic records in the locations where the case studies are placed. The first group corresponds to the hottest days of spring or autumn; the second, to a typical summer season; and the third, to a summer heatwave:

TR1: [15 °C, 35 °C]. TR2: [20 °C, 40 °C]. TR3: [25 °C, 45 °C].

In Table 3, the courtyards are classified within these different TRs. Note that some courtyards are in more than one TR because the temperature range in the courtyard changed from one week to another. This is because the courtyard, as a thermal tempering device, performs differently depending on the outdoor temperature.

Table 3. Classification of the case studies by Thermal Range.

Thermal Range | Courtyards
TR1 | CS1, CS2, CS3, CS4
TR2 | CS5, CS6, CS7, CS8, CS9, CS10, CS11, CS12, CS13, CS14
TR3 | CS16, CS17, CS18, CS19, CS20, CS21, CS22
For a given courtyard, its range of temperature is first estimated and classified as TR1, TR2 or TR3, and its ARs, ARI and ARII, are measured.
Once courtyards are classified, the temperature predictions obtained through the SVR method are used: by an interpolation technique, a prediction of the temperature inside a courtyard of the same class can be obtained. To achieve this, the MATLAB function scatteredInterpolant was used, which performs interpolation on a 2D dataset of scattered data. In particular, it returns an interpolant F for the given dataset, which can be evaluated at a set of query points in 2D to produce the interpolated values Tq = F(ARIq, ARIIq), obtaining the temperature Tq inside the courtyard.
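The sketch below is an illustrative R stand-in for this step (all data values are invented). It estimates the temperature of a query courtyard by inverse-distance weighting over the (ARI, ARII) coordinates of the library courtyards; note that MATLAB's scatteredInterpolant instead performs linear interpolation on a triangulation of the data, so this is only an approximation of that behavior.

```r
# Interpolate the SVR-predicted temperature of a query courtyard from the
# library courtyards of the same class, using inverse-distance weighting.
interp_courtyard <- function(x, y, temp, xq, yq, p = 2) {
  d <- sqrt((x - xq)^2 + (y - yq)^2)
  if (any(d == 0)) return(temp[which.min(d)])  # query coincides with a sample
  w <- 1 / d^p
  sum(w * temp) / sum(w)                       # weighted average of library values
}

# Hypothetical example: three library courtyards of the same TR class
ari  <- c(0.8, 1.2, 1.6)                       # ARI of the library courtyards
arii <- c(0.9, 1.4, 2.0)                       # ARII of the library courtyards
temp <- c(31.2, 29.8, 28.5)                    # SVR-predicted inside CMT (deg C) at one hour
interp_courtyard(ari, arii, temp, xq = 1.0, yq = 1.2)
```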
Fixed the ARs, Interpolation Using Minimum and Maximum Temperatures
In this section, two different cases, depending on whether we fix ARI or ARII, are considered.
First, the courtyards library was classified into two different classes, depending on ARI: ARI.1: ARI ∈ [0, 1]. ARI.2: ARI ∈ [1, 2]. In Table 4, the courtyards are classified within these different classes. Note that CS4 has not been taken into account, as its ARI (3.41) is out of the considered ranges. Thus, for a given courtyard, we measure the ARI and classify it into ARI.1 or ARI.2. Then, given the minimum and maximum temperatures, Tmin and Tmax, respectively, of some courtyards in the same class, and their corresponding predicted temperatures through the SVR method, a prediction of the temperature inside a courtyard of the same class can be obtained by an interpolation technique implemented in the scientific software MATLAB. To do the interpolation, we again used the MATLAB function scatteredInterpolant, which performs interpolation on a 2D dataset of scattered data. In this case, we obtained the interpolated values Tq = F(Tmin,q, Tmax,q), giving the temperature Tq inside the courtyard.
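Under the same illustrative assumptions as the sketch in Section 2.5.1, that helper can be reused here with (Tmin, Tmax) as the interpolation coordinates instead of the ARs (all values invented):

```r
# Interpolating on (Tmin, Tmax) within one ARI class
tmin <- c(18.5, 20.1, 19.3)                    # weekly minimum temperatures (deg C)
tmax <- c(33.0, 36.2, 34.8)                    # weekly maximum temperatures (deg C)
temp <- c(27.4, 29.1, 28.0)                    # SVR-predicted inside CMT at one hour
interp_courtyard(tmin, tmax, temp, xq = 19.0, yq = 34.0)
```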
Second, we classified the courtyards library into two different classes, depending on ARII: ARII.1: ARII ∈ [0, 1]. ARII.2: ARII ∈ [1, 2.5]. In Table 5, we classify the courtyards within these different classes. Thus, for a given courtyard, the ARII was first measured, and the courtyard was classified as ARII.1 or ARII.2. To do the interpolation, the same procedure as in the case of ARI was followed, now using ARII instead.
Fixed Temperature Range, Interpolation Using AR
In this section, the predicted temperature obtained by the method proposed in Section 2.5.1 is shown for one courtyard of each temperature range class. The predicted temperature is represented in comparison to the monitored temperature inside the courtyard, as well as the outdoor temperature. In addition, a quantitative analysis was carried out. On the one hand, the relative error of the predicted temperature with respect to the monitored temperature was evaluated in different discrete norms:

$$E_{\ell^2} = \frac{\left( \sum_{i=1}^{n} (T^{mon}_i - T^{pred}_i)^2 \right)^{1/2}}{\left( \sum_{i=1}^{n} (T^{mon}_i)^2 \right)^{1/2}}, \qquad E_{\ell^\infty} = \frac{\max_{1 \le i \le n} |T^{mon}_i - T^{pred}_i|}{\max_{1 \le i \le n} |T^{mon}_i|},$$

where $T^{mon}_i$ (resp., $T^{pred}_i$) denotes the monitored temperature (resp., the predicted temperature) at time $t_i$, $i = 1, \dots, n$ (hours, (h)). Moreover, the percentage of time for which the obtained absolute error between the predicted and the monitored temperature is less than or equal to a fixed tolerance of 2 °C was evaluated. On the other hand, the following statistical parameters were computed: the correlation coefficient $R$, the Root Mean Square Error ($RMSE$) and the Mean Absolute Percentage Error ($MAPE$). The formulas for these parameters are as follows:

$$R = \frac{\sum_{i=1}^{n} (T^{mon}_i - \overline{T^{mon}})(T^{pred}_i - \overline{T^{pred}})}{\sqrt{\sum_{i=1}^{n} (T^{mon}_i - \overline{T^{mon}})^2} \sqrt{\sum_{i=1}^{n} (T^{pred}_i - \overline{T^{pred}})^2}}, \qquad RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (T^{pred}_i - T^{mon}_i)^2}, \qquad MAPE = \frac{100}{n} \sum_{i=1}^{n} \left| \frac{T^{mon}_i - T^{pred}_i}{T^{mon}_i} \right|,$$

where $\overline{T^{mon}}$ (resp., $\overline{T^{pred}}$) denotes the mean monitored (resp., mean predicted) temperature. The values of the relative and absolute errors and the statistical parameters are shown in Tables 6 and 7 for the CMT in each selected courtyard of each temperature range class.

For the class TR1, the courtyard CS1, located in Badajoz, was considered. The prediction is performed for the dates 20 to 26 May. In the graph (Figure 2), simulation versus monitoring results of this courtyard, with mild and very irregular temperatures, are shown. There is hardly any thermal gap, and the prediction shows good accuracy. The obtained results are specified in Tables 6 and 7 (first row).

For the class TR2, the courtyard CS5, located in Seville, was considered. The prediction is performed for the dates 7 to 13 September. The obtained results are represented in Figure 3 and Tables 6 and 7 (second row). Note that the prediction for the second half of the last day is not represented in this plot. This is due to the fact that some of the training data used for this prediction had fewer points than the 168 needed for the whole week. However, to be consistent with the other cases, we decided to keep the whole week in this plot.

For the class TR3, the courtyard CS17, located in Córdoba, was considered. The prediction is performed for the dates 26 July to 1 August. Unlike the previous case shown in Figure 2, in this one (Figure 4), the outside temperature is higher and there is a large thermal gap. The predicted results show similarly good accuracy, particularly on days of maximum outdoor temperature. The obtained results are detailed in Tables 6 and 7 (third row).
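For reference, these metrics can be computed from the hourly series in a few lines of R. A minimal sketch, assuming Tm and Tp are the monitored and predicted weekly series (n = 168 hourly values):

```r
# Error metrics for one week of predictions: relative errors in the discrete
# l2 and l-infinity norms, percentage of hours within a tolerance, and the
# statistical parameters R, RMSE and MAPE.
eval_prediction <- function(Tm, Tp, tol = 2) {
  rel_l2   <- sqrt(sum((Tm - Tp)^2)) / sqrt(sum(Tm^2))
  rel_linf <- max(abs(Tm - Tp)) / max(abs(Tm))
  pct_tol  <- 100 * mean(abs(Tm - Tp) <= tol)   # % of hours with |error| <= tol (deg C)
  c(rel_l2 = rel_l2,
    rel_linf = rel_linf,
    pct_within_tol = pct_tol,
    R = cor(Tm, Tp),                            # correlation coefficient
    RMSE = sqrt(mean((Tp - Tm)^2)),
    MAPE = 100 * mean(abs((Tm - Tp) / Tm)))
}
```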
Fixed AR, Interpolation Using Minimum and Maximum Temperature
In this section, the predicted temperature obtained by the method proposed in Section 2.5.2 is shown for one courtyard of each AR range class. First, the ARI range class is considered, representing the predicted temperature in comparison to the monitored temperature inside the courtyard, as well as the outdoor temperature. Additionally, a quantitative analysis was carried out. On the one hand, the relative error of the predicted temperature with respect to the monitored temperature was evaluated in the discrete $\ell^2$ and $\ell^\infty$ norms, as done in Section 3.1. Moreover, the percentage of time for which the obtained absolute error between the predicted and the monitored temperature is less than or equal to a fixed tolerance of 2 °C was also evaluated. On the other hand, the statistical parameters $R$, $RMSE$ and $MAPE$ were computed. The values of the relative and absolute errors and the statistical parameters are shown in Tables 8 and 9 for the courtyard measured temperature in each selected courtyard of each ARI range class.

For the class ARI.1, the courtyard CS16, located in Córdoba, was considered. The prediction is performed for the dates 26 July to 1 August. The obtained results are represented in Figure 5 and Tables 8 and 9 (first row). For the class ARI.2, the courtyard CS1, located in Badajoz, was considered. The prediction is performed for the dates 20 to 26 May. The obtained results are represented in Figure 6 and Tables 8 and 9 (second row).

Finally, the ARII range class was considered, and the predicted temperature was represented in comparison to the monitored temperature inside the courtyard as well as the outdoor temperature. As before, a quantitative analysis was carried out with the same relative errors, tolerance percentage, and statistical parameters. The values of the relative and absolute errors and the statistical parameters are shown in Tables 10 and 11 for the courtyard measured temperature in each selected courtyard of each ARII range class.

For the class ARII.1, the courtyard CS4, located in Badajoz, was considered. The prediction is performed for the dates 20 to 26 May. In this case, as can be seen in Figure 7, the courtyard has a different thermal performance than in the previously described case studies. This is mainly due to the overheating that occurs in the early morning hours due to the low AR. The predicted results do not show such tight accuracy under these conditions. The obtained results are detailed in Tables 10 and 11 (first row). For the class ARII.2, the courtyard CS9, located in Seville, was considered. The prediction is performed for the dates 4 to 10 September. The obtained results are represented in Figure 8 and Tables 10 and 11 (second row).
Relative Errors Calculation
The main goal of this work is the accurate thermal modeling of the courtyard for its optimization as a resilient strategy against climate change and urban overheating. Therefore, the specific performance of courtyard thermodynamics was considered for the evaluation of the model errors. The courtyard's thermal tempering performance increases as a function of the Thermal Gap (from now on, TG), that is, the difference between the exterior monitored temperature and the monitored temperature inside the courtyard. The TG usually increases as the outside temperature rises. Accordingly, in this section, the relative errors of two different and representative case studies with different TRs are analyzed, comparing the statistical parameters in more detail.
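As a hypothetical numerical illustration (values invented), with an exterior monitored temperature of 40 °C and a monitored temperature of 32 °C inside the courtyard, TG = 40 - 32 = 8 °C.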
The first selected case study corresponds to the predicted temperature inside the TR1 courtyard CS1 (Figure 2), and the second case corresponds to the predicted temperature inside the TR3 courtyard CS17 (Figure 4). These cases were selected since, in the first case, monitored and predicted temperatures inside the courtyard are rather close to the exterior monitored one, while in the second case, monitored and predicted temperatures inside the courtyard are quite far from the exterior monitored one.
Concretely, the relative and absolute errors, as well as the statistical parameters considered in the previous sections, were computed. In this case, the computations were performed daily, all along the week. The obtained results are given in Tables 12 and 13.
On the other hand, bearing in mind the obtained results, the best-predicted day in each week was selected. In the first case, the day that gives the best performance is the 6th one, while in the second case, it is the 5th one. Then, the relative error of the predicted temperature was computed hourly and represented in two ways: in the first way, with respect to the TG, and in the second way, with respect to the monitored temperature. The graphics corresponding to CS1 and CS17 are included in Figure 9a,b, respectively. The graphic represented in Figure 9a corresponds to the relative error of the predicted temperature with respect to the TG, and the graphic in Figure 9b corresponds to the relative error of the predicted temperature with respect to the monitored temperature inside the courtyard. In addition, the segment of the day in which critical urban overheating is concentrated is indicated in each graph. These hours, according to climate records [54], are between 13:00 and 19:00. On the left, considering CS1 plotted in Figure 9a, it can be observed that the relative error with respect to the TG is always below 3%, except for two peaks corresponding to time slots where the exterior temperature and the monitored temperature inside the courtyard almost coincide. Considering CS17 in Figure 9a, a very low relative error with respect to the TG was obtained in the central time slot of the day, that is, between 13:00 and 19:00, where the TG is large. On the right, regarding the relative error with respect to the courtyard measured temperature (CMT) in Figure 9b, it can be observed that the plotted relative error for CS1 is below 0.1%, while for CS17, this relative error is always below 0.05%. For both case studies, if the daytime slot with the highest urban overheating is considered, the relative error is always below 0.05%.
Discussion
In this section, the results obtained in Section 3 are discussed. Regarding the results obtained in Sections 3.1 and 3.2, on the one hand, it can be appreciated in Tables 6, 8 and 10 that the values for the relative errors in the different discrete norms are around 5% and in almost all cases are below 10%, and the percentage of time for which the obtained absolute error with respect to the CMT is less than or equal to 2 °C is above 80%, except for the case of Example 3.0.2, the ARI.1 range class and the ARII.1 range class. For the first critical case, reasonable values for the relative errors in the different discrete norms (between 5% and 8%) were obtained, and the percentage of time for which the obtained absolute error with respect to the CMT is less than or equal to 2 °C is 61.45%. However, that case is rather special, since it presents relatively high temperatures with respect to the other experiments. In any case, if the tolerance parameter is increased to 3 °C for that case, a higher percentage of up to 80.72% can be obtained. For the second critical case, the relative errors in the different discrete norms are between 10% and 13%, and the percentage of time for which the obtained absolute error with respect to the CMT is less than or equal to 2 °C is 58.33%. In any case, if the tolerance parameter is increased to 3 °C for that case, a higher percentage of 74.40% is obtained.

On the other hand, the values of the statistical parameters that indicate that the simulation is accurate are $R \to 1$, $RMSE \to 0$ and $MAPE \to 0$ [36,42-44,50-52]. The values of these parameters for the courtyard measured temperature in the present courtyards for each simulation confirm that the used strategy is rather accurate. In particular, in Tables 7, 9 and 11, it can be observed that the correlation coefficient $R$ is quite close to 1 for all range classes (above 0.85, except for the cases of Example 3.0.2 and the ARII.1 and ARII.2 range classes, for which it is between 0.6 and 0.8). The $RMSE$ values are around 1.5 °C and the $MAPE$ values are around 5%, except for the critical cases identified above, for which the $RMSE$ values are around 2.5 °C and the $MAPE$ values are between 5% and 10%.

Finally, in Section 3.3, the relative and absolute errors, as well as the statistical parameters, were computed daily in two selected cases: the case where the predicted CMT inside the courtyard is rather close to the exterior one, and the case where the predicted CMT inside the courtyard is quite far from the exterior one. The obtained results are given in Tables 12 and 13, respectively. It can be observed that the values for the relative errors in the different discrete norms are around 6% in the first case and around 3% in the second case, and in almost all cases are below 7%. Moreover, the percentage of time for which the obtained absolute error with respect to the CMT is less than or equal to 2 °C is above 80% on all days, except for the 7th day of the second case, reaching 100% on the 6th day of the first case and on the 3rd and 5th days of the second case. With respect to the statistical parameters, the correlation coefficient $R$ is quite close to 1, being larger than 0.89 in all cases. The $RMSE$ values are around 1.25 °C in the first case and 1 °C in the second one, and the $MAPE$ values are around 5% in the first case and 3% in the second case. Thus, mostly, the daily results obtained in Section 3.3 for these selected cases improve the global results computed for the whole week in Sections 3.1 and 3.2.
In brief, apart from the critical cases identified above, the values of the statistical parameters considered are in a similar range to those obtained in [36] for a similar problem. In that work, the authors performed a very accurate courtyard thermal simulation based upon a Computational Fluid Dynamics (CFD) FreeFEM 3D model, which is much more computationally expensive than the ML technique SVR used in this work. In particular, the computation of one week of temperatures through the SVR method takes around one minute, while the CFD method takes around four minutes per day of simulation.
Conclusions
In the present work, the applicability of a supervised ML model as a suitable tool for predicting the microclimatic performance inside courtyards has been evaluated. For this purpose, among the supervised ML models available, Support Vector Machines (SVM) were selected. The model was fed and validated with empirical data from 22 case studies in southern Spain.
The results provided by this strategy showed good accuracy when compared to monitored data. In particular, we selected two representative and highly meaningful case studies with different TGs. The final results for both cases showed that, when the daytime slot with the highest urban overheating is considered, the relative error is almost always below 0.05%. Additionally, values for the statistical parameters are in good agreement with other studies in the literature that use more computationally expensive CFD models, and show more accuracy than existing commercial tools. Indeed, the present strategy shows a Root Mean Square Error (RMSE) around 1 °C for the two representative case studies selected, which is in a similar range to the values obtained in [36] for a similar problem by a more computationally expensive CFD model, while the corresponding values for existing commercial software are typically around 3 °C.
Based on the results obtained, it can be stated that the new application proposed for the ML method is useful for the development of design and measurement tools capable of modeling the complex microclimate of courtyards. Furthermore, the accuracy of the predictions for the analyzed case studies increases as a function of the courtyard thermal tempering potential linked to the intensification of the outdoor temperature.
The enhancement of the proposed methodology with the inclusion of other complementary microclimatic strategies, such as shading devices or vegetation, as new ML features, as well as establishing a balance between an overfitted and an underfitted ML model by considering the optimal number of training data, can be considered as future ways to develop this research.
Causal associations between prostate diseases, renal diseases, renal function, and erectile dysfunction risk: a 2-sample Mendelian randomization study
Abstract Background Previous observational studies have found a potential link between prostate disease, particularly prostate cancer (PCa), and kidney disease, specifically chronic kidney disease (CKD), in relation to erectile dysfunction (ED), yet the causal relationship between these factors remains uncertain. Aim The study sought to explore the potential causal association between prostate diseases, renal diseases, renal function, and risk of ED. Methods In this study, 5 analytical approaches were employed to explore the causal relationships between various prostate diseases (PCa and benign prostatic hyperplasia), renal diseases (CKD, immunoglobulin A nephropathy, membranous nephropathy, nephrotic syndrome, and kidney and ureter calculi), as well as 8 renal function parameters, with regard to ED. All data pertaining to exposure and outcome factors were acquired from publicly accessible genome-wide association studies. The methods used encompassed inverse variance weighting, MR-Egger, weighted median, simple mode, and weighted mode techniques, complemented by the MR pleiotropy residual sum and outlier (MR-PRESSO) analysis. The MR-Egger intercept test was utilized to assess pleiotropy, while Cochran's Q statistic was employed to measure heterogeneity. Outcomes We employed inverse variance weighting MR as the primary statistical method to assess the causal relationship between exposure factors and ED. Results Genetically predicted PCa demonstrated a causal association with an elevated risk of ED (odds ratio, 1.125; 95% confidence interval, 1.066-1.186; P < .0001). However, no compelling evidence was found to support associations between genetically determined benign prostatic hyperplasia, CKD, immunoglobulin A nephropathy, membranous nephropathy, nephrotic syndrome, kidney and ureter calculi, or the renal function parameters investigated, and the risk of ED. Clinical Implications The risk of ED is considerably amplified in patients diagnosed with PCa, thereby highlighting the importance of addressing ED as a significant concern for clinicians treating individuals with PCa. Strengths and Limitations This study's strength lies in validating the PCa-ED association using genetic analysis, while its limitation is the heterogeneity in study results. Conclusion The results of this study suggest a potential link between PCa and a higher risk of ED.
Introduction
Erectile dysfunction (ED) is a condition characterized by the inability to achieve or sustain a satisfactory penile erection during sexual intercourse. 1 ED has been observed to be more prevalent in middle-aged and elderly populations, with the likelihood of experiencing it increasing as individuals age. 2 However, there is a growing concern about the rising prevalence of ED among young men, with estimates suggesting that it may affect up to 30% of this population. 3 The etiology and pathogenesis of ED are highly complex and are generally believed to be associated with unhealthy lifestyles, chronic diseases, and psychological factors. 4 Among the various men's health topics on the Internet, prostate disease and ED are among the most popular search topics. This clearly indicates the public's significant concern regarding these particular conditions. 5 In the field of urology and andrology medicine, several previous studies have indicated a correlation between prostate disease and ED, 6,7 suggesting the presence of shared underlying factors. 8 In patients with prostate cancer (PCa), the presence of inflammation and increased levels of reactive oxygen species are linked to reduced blood flow necessary for achieving a penile erection. 9 In addition, various treatment options currently available, such as radical prostatectomy and radiotherapy, carry the risk of causing ED. 10 This makes it difficult to directly study the relationship between prostate disease and ED. At the same time, it has been discovered that kidney disease, which is also a urinary condition, is linked to ED. Meta-analysis has revealed a high prevalence of ED in individuals with end-stage renal disease, 11 and it has been found that treatments such as kidney transplantation can significantly improve erectile function. 12 Due to the potential shared microvascular pathophysiological pathways between renal disease and ED, the interaction between these 2 conditions is complex and bidirectional. 13 However, it is crucial to acknowledge that a considerable number of these studies were cross-sectional analyses, which possess certain limitations in terms of methodological consistency, heterogeneity, and potential underrepresentation. Additionally, it is important to highlight that no definitive causal association between these diseases has been established.
Mendelian randomization (MR) has emerged as a widely utilized method for causal inference. By leveraging germline genetic variation as an instrumental variable, this approach significantly mitigates the potential for bias stemming from reverse causation and residual confounding factors. 14 Consequently, MR enables the identification of robust and impactful causal associations. The application of MR tools enables the validation of existing cross-sectional study associations, thereby establishing causal relationships and discerning the directional causality between diseases. Furthermore, our study signifies the pioneering investigation into the potential correlation between partial measures of renal function and ED. Our research has contributed to the ongoing validation of key risk factors for ED and has aimed to modestly raise awareness among clinicians and preventive medicine practitioners regarding the prevalence of this condition. Through our findings, we hope to modestly support the implementation of targeted etiological prevention measures, potentially contributing to a reduction in the overall risk of developing ED.
Eliminating the influence of confounding factors is a significant challenge for current observational studies, thereby hindering the ability to infer a cause-and-effect relationship between prostate disease, kidney disease, and ED. By using MR analysis, we can overcome this limitation and obtain more reliable results based on available data. 15 Considering the risk factors of ED, such as advancing age, type 2 diabetes, smoking, and other relevant factors, it becomes evident that these diseases and disease indicators are significantly influenced by potential confounders. Does the potential association between prostate diseases, renal diseases, renal function, and ED remain independent and causal?
Study design
This study employs a 2-sample MR design utilizing summary statistics from genome-wide association studies (GWASs) and extensive biobanks. The utilization of MR analysis facilitates the assessment of causality between exposure and outcome. 16 The association between PCa and ED serves as a prime illustration. Within cross-sectional studies of PCa patients, treatment options such as radical prostatectomy and radiation therapy are frequently administered, potentially leading to the rapid onset of ED. The diverse treatment modalities available, such as radical prostatectomy and radiotherapy, 17 carry a potential risk of inducing ED themselves. Furthermore, the procedure for confirming the diagnosis, such as prostate puncture, may also contribute to the development of ED. These factors pose challenges to the examination of causal links between PCa and ED within current cross-sectional studies, thereby emphasizing the suitability of MR methods for such explorations.
We obtained data from the Integrative Epidemiology Unit (IEU) OpenGWAS database, which was curated by the Medical Research Council IEU based at the University of Bristol (https://gwas.mrcieu.ac.uk/). 18 Participants provided written informed consent in previous studies approved by ethical review boards. As the data used were from public databases, no additional ethical approval was required. Details pertaining to study characteristics, participant information, and ethical statements for each dataset were extracted meticulously from the respective original publications or websites. Additionally, this study adhered to the STROBE-MR (Strengthening the Reporting of Observational Studies in Epidemiology using Mendelian Randomization) guideline, which enhances the reporting of MR studies in epidemiology. 19 The prostate diseases assessed in this study encompassed PCa and benign prostatic hyperplasia (BPH), while the renal diseases encompassed CKD, immunoglobulin A (IgA) nephropathy, membranous nephropathy, nephrotic syndrome, and kidney and ureter calculi. Additionally, the renal function parameters assessed in this study encompassed enzymatic creatinine in urine, urinary albumin excretion, microalbumin in urine, potassium in urine, sodium in urine, serum creatinine (eGFRcrea), serum cystatin C (eGFRcys), and levels of kidney injury molecule 1. Considering the primary focus of our study on prostate disease and kidney disease, we incorporated and analyzed all associated conditions. However, it is important to note that certain diseases, such as prostatitis, renal cell carcinoma, congenital kidney malformations, and others, lacked available single nucleotide polymorphisms (SNPs) in the existing published public data. Consequently, we were unable to include these particular diseases in our study due to insufficient genetic information. Furthermore, all study participants were of European ancestry, ensuring that there was no sample overlap between the exposure and outcome traits. The overview of our MR analyses is shown in Figure 1.
Data source
The PCa GWAS data utilized in this study were derived from a meta-analysis that integrated findings from 7 preceding PCa GWAS datasets. 20 In total, the meta-analysis encompassed a sample size of 140 254 individuals, comprising 79 148 cases and 61 106 controls. The pooled GWAS data for BPH were obtained from round 7 of the FinnGen consortium. The study utilized samples from the Finnish Biobank, comprising a total of 13 118 cases and 72 799 controls. 21 The GWAS data for the renal diseases analyzed here were drawn from previously published studies. 22-24 The extensive dataset curated by the UK Biobank served as the source for the GWAS data pertaining to microalbumin in urine, enzymatic creatinine in urine, and sodium in urine. This valuable resource comprises an impressive cohort of approximately 500 000 participants from the United Kingdom, which provides a robust foundation for our analyses. 25 The GWAS data for the remaining urinary parameters were likewise drawn from previously published sources. 26-28 Moreover, the GWAS data for serum cystatin C, kidney injury molecule 1 levels, and serum creatinine are encompassed as part of the CKDGen consortium. 29 The ED GWAS dataset, the largest of its kind for studying ED in individuals of European ancestry, combined 3 cohorts and recruited a total of 223 805 subjects (6175 cases and 217 630 controls). 30
Genetic instrument selection
We employed stringent criteria based on the European 1000 Genomes panel to extract SNPs associated with the exposure factors. First, we considered SNPs that demonstrated genome-wide significance (P < 5 × 10−8) and exhibited independence (r2 < 0.001) from other SNPs, with a clumping distance of 10 000 kb. Second, as part of the harmonization process, palindromic SNPs were excluded from the instrumental variables (IVs). Third, to ensure that the IVs influence the risk of ED exclusively through the exposure factors, we conducted a comprehensive analysis using the PhenoScanner database (http://www.phenoscanner.medschl.cam.ac.uk/). We carefully examined and removed any SNPs related to potential confounding factors, namely increasing age, type 2 diabetes, metabolic syndrome, smoking, anxiety, depression, bipolar disorder, sleep disorder, and insomnia. These confounding risk factors were identified based on current guidelines for ED 31 and by referencing published articles on MR. 32 Finally, variables with F values <10, which serve as indicators of IV strength, were excluded in order to minimize potential bias resulting from weak instruments.
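A hedged R sketch of this selection step with the TwoSampleMR package is given below. The GWAS identifier "ieu-b-XXXX" is a placeholder rather than the actual dataset ID used in the study.

```r
library(TwoSampleMR)

# Genome-wide significant, independent SNPs for the exposure:
# p < 5e-8, clumped at r2 < 0.001 within a 10 000 kb window
exposure_dat <- extract_instruments(outcomes = "ieu-b-XXXX",
                                    p1 = 5e-8,
                                    clump = TRUE,
                                    r2 = 0.001,
                                    kb = 10000)

# Weak-instrument check: approximate per-SNP F statistic, keeping F >= 10
f_stat <- (exposure_dat$beta.exposure / exposure_dat$se.exposure)^2
exposure_dat <- exposure_dat[f_stat >= 10, ]
```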
Statistical analyses
We conducted 2-sample MR analyses to evaluate the causal associations between prostate disease, renal disease, renal function, and ED. This MR study was undertaken based on 3 fundamental premises: (1) the genetic variation is strongly associated with the exposure; (2) the genetic variation is not associated with potential confounders; and (3) the genetic variation is independent of the outcome, except by means of the exposure. We utilized inverse variance-weighted (IVW) MR as the main statistical method, while also employing weighted median, mode-based, MR-Egger, and MR-PRESSO analyses to investigate the relationship between exposure factors and ED. Cochran's Q test assessed SNP heterogeneity, with P < .05 indicating high heterogeneity. Additionally, the intercept term within the MR-Egger regression approach was employed as a means to discern directional pleiotropic effects, with a significance level of P < .05 being indicative of the presence of pleiotropy. Moreover, a leave-one-out sensitivity analysis was performed to investigate the potential impact of individual SNPs in introducing bias and affecting the overarching causal effect. The analysis was conducted using the TwoSampleMR package (version 0.5.6) within the R statistical computing environment (version 4.2.1; R Foundation for Statistical Computing). We utilized GraphPad Prism (version 9; GraphPad Software) as our preferred software for generating visual representations. A 2-tailed P < .05 was considered statistically significant.
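Continuing the sketch above, a minimal version of this analysis pipeline in TwoSampleMR could look as follows; "ebi-a-YYYY" is a placeholder for the ED outcome GWAS ID.

```r
# Outcome associations for the selected instruments, then harmonization
# (which by default drops ambiguous palindromic SNPs)
outcome_dat <- extract_outcome_data(snps = exposure_dat$SNP,
                                    outcomes = "ebi-a-YYYY")
dat <- harmonise_data(exposure_dat, outcome_dat)

# IVW as the primary method, plus the complementary estimators
res <- mr(dat, method_list = c("mr_ivw",
                               "mr_egger_regression",
                               "mr_weighted_median",
                               "mr_simple_mode",
                               "mr_weighted_mode"))
generate_odds_ratios(res)      # ORs with 95% CIs

# Sensitivity analyses
mr_heterogeneity(dat)          # Cochran's Q for IVW and MR-Egger
mr_pleiotropy_test(dat)        # MR-Egger intercept test for pleiotropy
loo <- mr_leaveoneout(dat)     # leave-one-out analysis
```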
Ethics and Informed Consent
In our current study, we solely relied on publicly accessible summary data, and the ethical approval as well as consent from participants were obtained through the original GWASs. Each of the studies contributing to the GWASs had obtained informed consent from its participants.
Selected genetic instruments (IVs)
Detailed information regarding the 5 methods employed can be found in Supplementary Table 1. Subsequent to the exclusion of LD SNPs (r2 > 0.001 within 10 000 kb), palindromic SNPs, and confounder-related SNPs (Supplementary Tables 2 and 3), Table 1 presents the essential characteristics of the dataset utilized in this study. Under this set of screening criteria, we retained 118 index SNPs for the genetic prediction of PCa, 10 for BPH, 6 for IgA nephropathy, 4 for membranous nephropathy, 4 for nephrotic syndrome, 9 for kidney and ureter calculi, 5 for microalbumin in urine, 32 for urinary albumin excretion, 13 for potassium in urine, 20 for creatinine in urine, 29 for sodium in urine, 42 for serum creatinine (eGFRcrea), 4 for serum cystatin C (eGFRcys), and 12 for levels of kidney injury molecule 1.
Causal effects of prostate diseases on ED
Following the completion of our MR analysis, we specifically identified a statistically significant association between PCa and an increased risk of ED. The association was found using the IVW method (odds ratio [OR], 1.125; 95% confidence interval [CI], 1.066-1.186; P < .0001) (Figure 2), and was supported by the weighted median method (OR, 1.117; 95% CI, 1.033-1.208; P = .006) and the weighted mode method (OR, 1.119; 95% CI, 1.025-1.222; P = .013). No evidence of pleiotropy was observed in the MR-Egger regression (intercept = 0.009; P = .063). However, the presence of heterogeneity was detected in both the IVW (Q = 149.995; P = .021) and MR-Egger (Q = 145.575; P = .033) models, as supported by the findings presented in Table 2 and the funnel plot. Furthermore, our analysis conducted using the leave-one-out methodology did not identify any influential SNPs, and the impact of each SNP on ED risk is visualized in the forest plot presented in Supplementary Figure 1. However, the analysis presented in Supplementary Figure 2 indicates that there is no causal relationship between BPH and ED (P > .05).
Causal effects of renal diseases and renal function on ED
The analysis depicted in Figure 2 and Supplementary Figures 3 to 16 revealed no causal relationship between CKD, IgA nephropathy, membranous nephropathy, nephrotic syndrome, kidney and ureter calculi, or the 8 renal function parameters investigated, and ED (all P > .05). The MR-Egger intercept results (Table 2) indicated the absence of pleiotropy (P > .05), and the Cochran's Q test and funnel plot provided no evidence of heterogeneity. Moreover, the robustness of the MR estimation was confirmed through leave-one-out analysis.
Discussion
In our 2-sample MR analysis, we found evidence suggesting a potential causal association between PCa and an increased risk of ED. However, no significant associations with ED were observed for BPH, CKD, IgA nephropathy, membranous nephropathy, nephrotic syndrome, kidney and ureter calculi, or the renal function parameters investigated.
In 1987, Mandel and Schuman 33 conducted a case-control study to investigate the cross-sectional association between PCa and the risk of ED. Although the results did not reach statistical significance, Mandel and Schuman's pioneering work laid the foundation for further research in this field. In 2011, Chung et al 34 conducted a substantial survey, revealing that individuals with ED exhibited a 1.42-fold increase in the risk of developing cancer during a 5-year follow-up period compared with the control group, after adjusting for numerous demographic and sociological factors (95% CI, 1.03-2.09; P = .039). A recent study conducted among elderly veterans in the United States indicated a correlation between enhanced sexual function and a decreased overall risk of PCa. 35 The current body of research primarily focuses on assessing the impact of erectile function as a precursor to PCa development, with a limited number of studies approaching the topic from the perspective of PCa as the point of origin. Our study serves to address critical gaps in existing research, presenting novel findings by establishing causal associations for the first time in this particular domain. Previous studies have found a link between ED and PCa risk, with common risk factors such as age, race, chronic disease, reduced ejaculation leading to the accumulation of harmful substances, and psychological factors being key contributors. 36 The impact of inflammation and oxidative stress on the risk of ED in PCa patients is worth considering. PCa initiation and progression exhibit a notable correlation with chronic inflammation and infection. 37 Additionally, infiltration of inflammatory cells, whether acute or chronic, triggers the augmentation of prostatic proliferative inflammatory atrophy, a state characterized by atrophic lesions within the prostate. 38 In individuals with PCa, the prolonged inflammatory state and elevated levels of reactive oxygen species are associated with increased arterial stiffness, impaired arterial elasticity, and reduced blood flow necessary for penile erection. 9 These conditions contribute to the development of vascular endothelial dysfunction, further diminishing the ability to achieve and maintain erections.
In a comprehensive study encompassing men across various stages of CKD, the prevalence of ED was observed to be 72.3%, 81.5%, and 85.7% in CKD stages 3, 4, and 5, respectively. 39 This aligns with findings from 2 substantial population studies conducted in Brazil and China, which produced similar results. Costa et al 40 demonstrated a prevalence of ED reaching 71.0% among CKD stage 4 and 5 patients 50 years of age and older, while Ye et al 41 reported an ED prevalence of 80.6% among 176 peritoneal dialysis patients. Indeed, our study's lack of positive findings does not dismiss the well-established high prevalence of ED among patients with CKD. Our findings do not support the conclusion that CKD is causally associated with ED. The complications that often accompany CKD may offer a partial explanation for the differing results obtained compared with previous studies. CKD frequently coexists with cardiovascular disease, with the latter commonly characterized by endothelial dysfunction and oxidative stress. The impaired production of nitric oxide in endothelial cells limits its beneficial effects, resulting in restricted smooth muscle dilation within the corpus cavernosum. 42 Additionally, heightened levels of reactive oxygen species promote arterial stiffness, contributing to the progression of atherosclerosis and vascular endothelial dysfunction, ultimately leading to reduced blood flow to the penis. 43 Moreover, it is significant to acknowledge that patients with CKD commonly experience lowered erythropoietin levels and exhibit heightened levels of prolactin, which ultimately leads to an increased incidence of anemia and decreased levels of hemoglobin. Previous studies have demonstrated that these factors can influence erectile function by impacting hormone secretion. 44 Finally, psychological factors play a significant role in the development and progression of ED following CKD.
In our study, we found no significant associations between several key renal function measures and ED. This intriguing finding presents a contradiction to the conclusions drawn from previous cross-sectional observations. Notably, studies conducted in Japan and China have previously reported a higher likelihood of diabetic patients with macroalbuminuria or an elevated urinary albumin-to-creatinine ratio achieving lower International Index of Erectile Function scores in comparison with normoalbuminuric patients. 45,46 The decline in renal function and eGFR has been implicated in the disruption of hormonal balance within the hypothalamic-pituitary-gonadal axis, resulting in reduced libido. Specifically, this hormonal imbalance is characterized by lowered levels of both total and free testosterone. 47 Furthermore, as creatinine levels rise and eGFR declines, ED-related factors such as hyperparathyroidism, autonomic neuropathy, and vascular-related diseases are likely to advance. 48 Hence, we propose that forthcoming cross-sectional or cohort studies should consider incorporating testosterone and free testosterone as potential confounding factors.
Interestingly, a recent meta-analysis conducted by Pyrgidis et al 49 unveiled a noteworthy finding regarding the impact of kidney transplantation on erectile function. The study revealed a statistically significant improvement in erectile function among individuals who had undergone kidney transplantation (relative risk, 2.53; 95% CI, 1.44-4.44). This finding sheds new light on the potential benefits of kidney transplantation beyond its primary therapeutic goals. Drawing from their expertise, it can be deduced that kidney transplantation holds greater potential for restoring erectile function in patients. This effect is mediated through mechanisms such as enhanced hormone secretion, the cessation of dialysis dependence, and an overall improvement in quality of life, rather than by increasing the eGFR. 49 A meta-analysis conducted by Kang et al 50 revealed that renal transplant recipients experience an increase in serum testosterone levels as well as a decrease in prolactin and luteinizing hormone levels.
There are several limitations of this study. First, in our study, we specifically focused on prostate disease and kidney disease, incorporating and analyzing all related conditions. However, it is crucial to highlight that certain diseases, such as prostatitis, renal cell carcinoma, congenital kidney malformations, and others, lacked sufficient available SNPs in the existing published public data. As a result, we were unable to include these specific diseases in our study due to insufficient genetic information. Furthermore, conducting a bidirectional MR analysis is not currently feasible, and performing MR analysis with ED as an exposure factor still encounters challenges in terms of SNP availability. Therefore, it is common for the existing published MR studies to primarily utilize ED as an outcome instead of an exposure. Based on the present findings, our belief is that future research should primarily focus on bridging the gap in sample size and sample quality of SNPs in the ED population. Additionally, it is crucial to further investigate the pathophysiological mechanisms and key loci associated with the established link between PCa and ED. The association SNPs presented in this study (Supplementary Table 2) can serve as a foundation for forthcoming research in this area.
Conclusion
The results of this study indicate a potential association between PCa and an elevated risk of ED. Further research is necessary to gain a better understanding and confirm these findings.
Figure 1. Schematic diagram of the Mendelian randomization design, rationale, and procedures.
Table 1. Details of the GWASs included in the Mendelian randomization.
SNPs, duplicated SNPs, and weak-effect SNPs (P > 5 × 10⁻⁸ or F < 10) were removed; in addition, we manually eliminated SNPs associated with potential confounders such as increasing age, type 2 diabetes, metabolic syndrome, smoking, anxiety, depression, bipolar disorder, sleep disorder, and insomnia. Five SNPs used for genetic prediction of PCa, 1 for IgA nephropathy, 4 for urinary albumin excretion, 1 for potassium in urine, 3 for sodium creatine in urine, 2 for serum creatinine (eGFRcrea), and 1 for serum cystatin C (eGFRcys) were excluded due to the presence of confounding factors (Supplementary Tables).
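The instrument selection just described boils down to a small set of filters over the GWAS summary statistics. Below is a minimal, hypothetical sketch of such a filter in Python/pandas; the column names (rsid, beta, se, pval, confounder_hit) are illustrative placeholders rather than the fields of the actual GWAS files, and F is approximated as (beta/se)².

```python
import pandas as pd

def select_instruments(snps: pd.DataFrame) -> pd.DataFrame:
    """Keep genome-wide significant, strong, non-confounded SNPs.
    Thresholds follow the criteria stated in the text: P < 5e-8 for
    genome-wide significance and F >= 10 for instrument strength
    (F approximated as (beta/se)**2). Column names are illustrative
    placeholders, not the actual GWAS field names."""
    df = snps.drop_duplicates(subset="rsid").copy()
    df["F"] = (df["beta"] / df["se"]) ** 2
    df = df[(df["pval"] < 5e-8) & (df["F"] >= 10)]
    # Manually curated boolean flag marking association with known
    # confounders (age, type 2 diabetes, smoking, psychiatric traits, ...).
    return df[~df["confounder_hit"]]
```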
Table 2. Heterogeneity tests and pleiotropy test for causality between exposure and erectile dysfunction.
Fluctuations of the number of participants and binary collisions in AA interactions at fixed centrality in the Glauber approach
In the framework of the classical Glauber approach, analytical expressions for the variance of the number of wounded nucleons and binary collisions in AA interactions at a given centrality are presented. Along with the optical approximation term, they contain additional contact terms arising only in the case of nucleus-nucleus collisions. The magnitude of the additional contributions, e.g., for PbPb collisions at SPS energies, is larger than the contribution of the optical approximation at some values of the impact parameter. The sum of the additional contributions is in good agreement with the results of independent Monte Carlo simulations of this process. Due to these additional terms, the variance of the total number of participants for peripheral PbPb collisions and the variance of the number of collisions at all values of the impact parameter exceed the Poisson variances several-fold. The correlator between the numbers of participants in the colliding nuclei at fixed centrality is also calculated analytically.
Introduction
At present, considerable attention is devoted to experimental and theoretical investigations of the multiplicity and transverse momentum fluctuations of charged particles in high energy AA collisions (see [1]-[7] and references therein). One expects an increase of the fluctuations in the case of freeze-out close to the QCD critical endpoint on the phase boundary line between the quark-gluon plasma and hadronic matter [8,9].
The aim of the present paper is to draw attention to another factor leading to an increase of the fluctuations in the case of AA interactions: the increase of the fluctuations of the number of participants and binary collisions due to multiple contact nucleon interactions in nucleus-nucleus collisions.
Clearly, these fluctuations lead to fluctuations in the number of particle sources and so directly affect the multiplicity and transverse momentum fluctuations of produced charged particles, as well as the correlations between them (see, for example, [10]-[17]).
In the present paper, analytical expressions for the variance of the number of wounded nucleons and binary collisions in AA interactions at a given centrality are obtained, taking into account multiple contact NN interactions (so-called loop contributions). The calculations are carried out in the framework of the classical Glauber approach [18], which has a simple probabilistic interpretation [19,20]. In contrast with purely Monte-Carlo simulations, the analytical calculations enable one to understand the origin of the increased values of the fluctuations.
As a result, we demonstrate that the multiple contact NN interactions in AA scattering lead, in particular, to the fact that, e.g. for PbPb collisions at SPS energies, the variance of the total number of participants for peripheral collisions and the variance of the number of collisions at all values of the impact parameter exceed the Poisson ones by a factor of a few.
The paper is organized as follows. In section 2, in the framework of the classical Glauber approach, we present the analytical expression for the variance of the number of wounded nucleons in one of the colliding nuclei at a fixed value of the impact parameter. Along with the well-known optical contribution (which depends only on the total inelastic NN cross-section), in the case of nucleus-nucleus collisions there is an additional contact term, depending on the profile of the NN interaction probability in the impact parameter plane.
In section 3 we calculate the correlator between the numbers of participants in the colliding nuclei at fixed centrality and, as a consequence, find the variance of the total (in both nuclei) number of participants.
In section 4, in the framework of the same approach, we present the analytical expression for the variance of the number of NN binary collisions in AA interactions at a given centrality. Along with the optical approximation term it also contains other terms, which turn out to be the dominant ones. These terms also correspond to multinucleon contact interactions and arise only in the case of nucleus-nucleus collisions.
The derivations of all formulae are collected in appendices A, B and C. Throughout the paper, the results of numerical calculations are presented in order to illustrate the obtained analytical results. We also check our analytical calculations by comparing them with the results obtained by purely Monte-Carlo simulations of nucleus-nucleus scattering.
Note that we restrict our consideration to the region of the impact parameter β < R_A + R_B, where the probability σ_AB(β) of inelastic interaction of two nuclei with radii R_A and R_B is close to unity.
Variance of the number of participants in one nucleus
At first we consider the variance V[N_w^A(β)] of the number of participants N_w^A(β) (wounded nucleons) at a fixed value of the impact parameter β in one of the colliding nuclei, A. In the framework of the purely classical, probabilistic approach to nucleus-nucleus collisions formulated in [18], we find for the mean value and for the variance of N_w^A(β) the following expressions (see appendix A):

⟨N_w^A(β)⟩ = A P(β),   (1)

V[N_w^A(β)] = A P(β) Q(β) + A(A−1) [Q^(12)(β) − Q²(β)],   (2)

where P(β) = 1 − Q(β). For Q(β) and Q^(12)(β) we have (all integrations imply integration over two-dimensional vectors in the impact parameter plane):

Q(β) = ∫ da T_A(a) [1 − f_B(a−β)]^B,   (3)

Q^(12)(β) = ∫ da₁ da₂ T_A(a₁) T_A(a₂) [1 − f_B(a₁−β) − f_B(a₂−β) + g_B(a₁−β, a₂−β)]^B,   (4)

with

f_B(a) = ∫ db σ(a−b) T_B(b),   (5)

g_B(a₁, a₂) = ∫ db σ(a₁−b) σ(a₂−b) T_B(b).   (6)

Here T_A and T_B are the profile functions of the colliding nuclei A and B, and σ(a) is the probability of inelastic interaction of two nucleons at the impact parameter a. We imply that σ(a), T_A and T_B depend only on the magnitude of their two-dimensional vector argument; hence f_B(a) = f_B(|a|) and Q(β) = Q(|β|). The formula (1) and the first term in formula (2) correspond to the naive picture (the so-called optical approximation), which implies that in the case of an AA collision at the impact parameter β one can use the binomial distribution for N_w^A(β) (see, for example, [21,22]),

P(N_w^A(β) = n) = C_A^n P(β)^n Q(β)^(A−n),   (7)

with some averaged probability P(β) of inelastic interaction of a nucleon of the nucleus A with nucleons of the nucleus B; this P(β) is considered to be the same for all nucleons of the nucleus A. In the optical approximation one then has

⟨N_w^A(β)⟩_opt = A P(β),   V_opt[N_w^A(β)] = A P(β) Q(β).   (8)

The whole expression (2) for the variance is the result of a more accurate calculation (see appendix A), in which one first calculates the probabilities of all binary NN interactions, taking into account the impact-parameter-plane positions of the nucleons in the nuclei A and B, and only then averages over the nucleon positions:

⟨N_w^A(β)⟩ = ⟨⟨ N_w^A ⟩⟩,   (9)

V[N_w^A(β)] = ⟨⟨ (N_w^A)² ⟩⟩ − ⟨⟨ N_w^A ⟩⟩².   (10)

Here ⟨⟨ X ⟩⟩ denotes the full average of a variate X: first the average X̄ at fixed positions of all nucleons in the nuclei A and B, and then the averaging ⟨...⟩_A and ⟨...⟩_B over the positions of these nucleons with the corresponding nuclear profile functions.
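As a rough numerical illustration of the optical formulae (1), (3) and (8), P(β) = 1 − Q(β) and the optical variance A P(β) Q(β) can be evaluated by direct quadrature once the profile functions and σ_NN are fixed. The sketch below works in the limit r_N ≪ R_A, R_B, where f_B(a) ≈ σ_NN T_B(a); the Gaussian profiles and all parameter values are toy assumptions standing in for the Woods-Saxon profiles used in the paper.

```python
import numpy as np

# Toy setup: Gaussian profile functions standing in for Woods-Saxon,
# with all parameter values purely illustrative.
sigma_nn = 3.2          # inelastic NN cross-section, fm^2 (~32 mb)
A = B = 208             # mass numbers (PbPb)
R = 6.6                 # Gaussian width parameter, fm

def T(x, y):
    """Toy nuclear profile function, normalized so its 2D integral is 1."""
    return np.exp(-(x**2 + y**2) / (2 * R**2)) / (2 * np.pi * R**2)

# 2D grid for quadrature over the impact-parameter plane.
grid = np.linspace(-25.0, 25.0, 401)
ax, ay = np.meshgrid(grid, grid, indexing="ij")
da = (grid[1] - grid[0]) ** 2

def Q_of_beta(beta):
    """Q(beta) = int d^2a T_A(a) [1 - f_B(a - beta)]^B, f_B ~ sigma_NN T_B."""
    fB = sigma_nn * T(ax - beta, ay)
    return np.sum(T(ax, ay) * (1.0 - fB) ** B) * da

for beta in (0.0, 5.0, 10.0):
    Q = Q_of_beta(beta)
    P = 1.0 - Q
    print(f"beta={beta:5.1f} fm   <N_w^A>={A*P:8.2f}   V_opt={A*P*Q:8.2f}")
```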
In the limit r_N ≪ R_A, R_B the formulae (5) and (6) reduce to

f_B(a) ≈ σ_NN T_B(a)   (11)

and

g_B(a₁, a₂) ≈ T_B((a₁+a₂)/2) I(a₁−a₂),   I(a) = ∫ db σ(b) σ(b+a).   (12)

Note that in this limit Q(β), and hence the mean value (1) and the first term of the variance (2), depend only on the integral inelastic NN cross-section σ_NN, while Q^(12)(β), entering the second term of the variance (2), depends also on the shape of the function σ(b) through the integral I(a) (12). Note also that using the simple δ-function approximation σ(b) = σ_NN δ(b) for the NN interaction gives the same result (as taking the limit r_N ≪ R_A, R_B) only for the optical part of the answer, which is expressed through Q(β). If one tries to use the approximation σ(b) = σ_NN δ(b) to calculate Q^(12)(β), one gets I(a) = σ²_NN δ(a), which leads to an infinite Q^(12)(β) at B ≥ 2. Meanwhile, for any correct approximation of σ(b) with σ(b) ≤ 1 (in correspondence with its probabilistic interpretation in the classical Glauber approach) we find a finite answer for Q^(12)(β).

Figure 1: The variance V[N_w^A(β)] calculated by the analytical formulae (2)-(6) and their limiting forms (11) and (12), using respectively the black disk (14) and Gaussian (15) approximations for the NN interaction; points: results of independent MC simulations using for the NN interaction the black disk (14) or Gaussian (15) approximation; *: the optical approximation result (8) (the first term in formula (2)); +: the Poisson variance, equal to ⟨N_w^A(β)⟩. The curves are shown to guide the eye.
In the quantum case, in the Glauber approximation, unitarity gives

σ(b) ≡ σ_in(b) = 2 Re γ(b) − |γ(b)|²,   σ_tot(b) = 2 Re γ(b),   σ_el(b) = |γ(b)|²,   (13)

where γ(b) is the amplitude of NN elastic scattering. This leads to the restrictions 0 ≤ σ_tot(b) ≤ 4, 0 ≤ σ_el(b) ≤ 4 and 0 ≤ σ_in(b) ≤ 1. So in the quantum case σ(b) also admits a probabilistic interpretation [19,20]. In our numerical calculations we have used for σ(b) the "black disk" approximation

σ(b) = θ(r_N − |b|)   (14)

and the Gaussian approximation

σ(b) = exp(−b²/r_N²).   (15)

In both cases σ_NN = π r_N². For the nuclear profile functions T_A and T_B we have used the standard Woods-Saxon approximation for the nucleon density,

ρ(r) ∝ 1 / (1 + exp((r − R_A)/κ)),   (16)

with R_A = R_0 A^(1/3). We also see in Fig.1 that for peripheral AA collisions at large β, when P(β) becomes small (P(β) ≪ 1, Q(β) ≈ 1), the optical approximation (7) reduces to the Poisson distribution and the optical variance (8) approaches the Poisson one. So it is only due to the contact term that the variance of N_w^A(β) is larger than the Poisson one for peripheral PbPb collisions (at β > 7 fm), in correspondence with the indications one has from the experimental data on the dependence of multiplicity fluctuations on centrality at SPS and RHIC energies [1,4].
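The independent Monte-Carlo simulations referred to throughout the paper can be reproduced in outline as follows: sample the nucleon positions from the Woods-Saxon distribution (16), decide each NN pair interaction with the black-disk criterion (14), and accumulate the event-by-event numbers of wounded nucleons and binary collisions. This is a schematic sketch with illustrative parameter values, not the authors' actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
A = B = 208                      # PbPb
R0, kappa = 6.6, 0.545           # Woods-Saxon radius and diffuseness, fm (illustrative)
r_N = 1.0                        # black-disk radius, fm; sigma_NN = pi * r_N**2

def sample_nucleus(n):
    """Sample n nucleon radii from the Woods-Saxon density by rejection;
    only the transverse (x, y) coordinates are kept."""
    pos = []
    while len(pos) < n:
        r = rng.uniform(0.0, R0 + 8 * kappa)
        # target density ~ r^2 / (1 + exp((r - R0)/kappa)); the acceptance
        # ratio below stays under 1 for these parameter values
        if rng.uniform() < r**2 / (1.0 + np.exp((r - R0) / kappa)) / R0**2:
            cost = rng.uniform(-1.0, 1.0)
            phi = rng.uniform(0.0, 2.0 * np.pi)
            s = np.sqrt(1.0 - cost**2)
            pos.append(r * np.array([s * np.cos(phi), s * np.sin(phi)]))
    return np.array(pos)

def one_event(beta):
    """One AB collision: returns (wounded in A, wounded in B, binary collisions)."""
    a = sample_nucleus(A)
    b = sample_nucleus(B) + np.array([beta, 0.0])
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    hit = d2 < r_N**2            # black-disk interaction criterion (14)
    return hit.any(1).sum(), hit.any(0).sum(), hit.sum()

beta, n_ev = 10.0, 2000
data = np.array([one_event(beta) for _ in range(n_ev)])
nA, nB, ncoll = data.T
print("V[N_w^A] =", nA.var(), "  V[N_coll] =", ncoll.var(),
      "  Poisson (= mean):", nA.mean(), ncoll.mean())
```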
A weak dependence of the results on the form of the NN interaction at nucleon distances is also seen. In the case of the black disk (14) approximation for σ(b), the results lie systematically slightly higher than in the case of the Gaussian (15) approximation with the same value of σ_NN.
In Fig.2 we see that the mean value ⟨N_w^A(β)⟩ (1), in contrast to the variance, coincides with the optical approximation result (8) and depends only on σ_NN in the limit r_N ≪ R_A, R_B. The MC simulations also confirm this result.
We would like to emphasize that the nontrivial term in the expression (2) for the variance arises only in the case of nucleus-nucleus collisions: at A = 1 or B = 1 it vanishes, at A = 1 due to the explicit factor A − 1 in (2), and at B = 1 due to the fact that in this case Q^(12)(β) = Q²(β). This corresponds to the well-known fact that for nucleus-nucleus collisions the Glauber approach does not reduce to the optical approximation even in the limit r_N ≪ R_A, R_B (see, for example, [23]).

Figure 3: An example of a loop diagram (see [23,24,25] for details).
The additional term, which arises in the expression for the variance (2) in the case of nucleus-nucleus collisions, depends, as we have mentioned, not only on the integral value of the inelastic NN cross-section σ_NN = ∫ db σ(b), but also on the shape of the function σ(b), i.e. on the details of the NN interaction at nucleon distances, which are much smaller than the typical nuclear distances. In the quantum Glauber approach this corresponds to the fact that in the case of AA collisions, in contrast with pA collisions, loop diagrams of the type shown in Fig.3 appear and one encounters the contact terms problem (see, for example, [23,24,25]).
The second term in formula (2) is the manifestation of this problem at the classical level. In the case of a tree diagram, the "lengths" of the interaction links in the transverse plane are independent. As a consequence, the result is expressed only through P(β), the probability of the interaction of a nucleon of the nucleus A with nucleons of the nucleus B, averaged over its position in the nucleus A; this P(β) is the same for any nucleon of the nucleus A. In the case of the loop diagram in Fig.3 the "lengths" of the interaction links in the transverse plane are not independent, so the result cannot be expressed through the averaged probability P(β) alone and the correlation effects have to be taken into account.
Variance of the total number of participants
Now we pass to the calculation of the variance of the total number of participants, N_w(β) = N_w^A(β) + N_w^B(β), at a fixed value of the impact parameter β. Clearly, for the mean value we have simply

⟨N_w(β)⟩ = ⟨N_w^A(β)⟩ + ⟨N_w^B(β)⟩,   (17)

and by (9) for the variance

V[N_w(β)] = V[N_w^A(β)] + V[N_w^B(β)] + 2 cov[N_w^A(β), N_w^B(β)].   (18)

In the naive optical approach there is no correlation between the numbers of participants in the colliding nuclei at a fixed value of the impact parameter: cov_opt[N_w^A(β), N_w^B(β)] = 0.

Figure 4: The correlator between the numbers of wounded nucleons in colliding nuclei, calculated by analytical formulae (19)-(22) and by independent MC simulations. The notations are the same as in Fig.1.
More accurate calculations, fulfilled in accordance with (9) and (10), give a nonzero correlator (19), expressed by the formulae (20)-(22), in which Q(β) and f_B(a) are the same as in the formulae (3), (5) and (11). The results of the numerical calculations of the correlator (19) by the formulae (20)-(22) for PbPb collisions at SPS energies, together with the results obtained by independent MC simulations of these collisions, are presented in Fig.4.
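In a Monte-Carlo check, the correlator (19) is simply the sample covariance of the two wounded-nucleon counts at fixed β. A short sketch, reusing the per-event arrays nA and nB from the Monte-Carlo example above:

```python
# Sample estimate of the correlator between N_w^A and N_w^B at fixed beta,
# and its contribution to the variance of the total number of participants;
# nA, nB are the per-event counts from the Monte-Carlo sketch above.
corr = ((nA - nA.mean()) * (nB - nB.mean())).mean()
print("correlator =", corr)
print("V[N_w] =", nA.var() + nB.var() + 2 * corr)   # decomposition (18)
```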
Comparing Fig.4 with Fig.1 we see that the contribution of the correlator to the variance of the total number of participants at intermediate values of β is about half of the variance V[N_w^A(β)] for one nucleus and is approximately equal to the contribution of the first, optical term in (2). At large values of the impact parameter (β ≥ 10 fm) the relative contribution of the correlator (19) to the total variance (18) is even greater. The results are again in good agreement with the results obtained by MC simulations. (The small difference in the region 8-10 fm arises from the use of the approximate formulae (11) and (22).)

Figure 5: The same as in Fig.1, but for the variance of the total number of wounded nucleons, calculated taking into account the contribution of the correlator (18)-(22); +: the Poisson variance.

In Figs.5 and 6 we present the final results for the variance of the total number of participants in PbPb collisions at SPS energies, taking into account the contribution of this correlator (Fig.6 shows the same as Fig.5, but for the scaled variance V[N_w(β)]/⟨N_w(β)⟩). We see in particular that the calculated variance of the total number of participants V[N_w(β)] is a few times larger than the Poisson one in the impact parameter region 8-12 fm.
Variance of the number of binary collisions
In this section we present the results of the calculation of the variance of the number of NN collisions at a fixed value of the impact parameter β in the framework of the same classical Glauber approach [18] to nucleus-nucleus collisions. The details of the calculations can be found in appendix C.
As a result we have found that the formula for the mean number of binary collisions again coincides with the well-known expression given by the optical approximation (compare with the formula (29) below):

⟨N_coll(β)⟩ = A B χ(β),   (23)

where

χ(β) = ∫ da₁ da₂ T_A(a₁) T_B(a₂) σ(a₁ − a₂ + β)   (24)

has the meaning of the averaged probability of an NN interaction. Numerically, the mean value of the number of collisions as a function of the impact parameter β is shown in Fig.7.
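The mean (23)-(24) is also easy to evaluate by quadrature in the limit r_N ≪ R_A, R_B, where χ(β) ≈ σ_NN ∫ da T_A(a) T_B(a − β). A sketch reusing the toy Gaussian profiles and the grid of the earlier quadrature example, with the binomial (optical) variance A B χ (1 − χ) of eq. (30) printed alongside:

```python
# chi(beta) ~ sigma_NN * int d^2a T_A(a) T_B(a - beta) in the limit r_N << R,
# reusing sigma_nn, T, ax, ay, da from the earlier quadrature sketch.
def chi_of_beta(beta):
    return sigma_nn * np.sum(T(ax, ay) * T(ax - beta, ay)) * da

for beta in (0.0, 5.0, 10.0):
    chi = chi_of_beta(beta)
    print(f"beta={beta:5.1f} fm   <N_coll>={A*B*chi:9.2f}   "
          f"V_opt={A*B*chi*(1.0-chi):9.2f}")
```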
In contrast to the mean value, the formula (25) obtained for the variance of N_coll(β) differs from the optical approximation result (see eq. (30) below). It depends not only on χ(β) (24), but also on the additional quantities χ₁(β) (26) and χ̃₁(β) (27); the χ̃₁ is obtained from χ₁ by the permutation of A and B. (Recall that we consider T_A and T_B to depend only on the magnitude of their two-dimensional vector argument.) At A = B we have χ₁ = χ̃₁. Note also that in the limit r_N ≪ R_A, R_B the χ, χ₁, χ̃₁, and hence the variance (25), depend only on σ_NN, and not on the form of the function σ(b) (which was not the case for the variance of the number of wounded nucleons, see section 2 after the formula (12)). For comparison we list the optical approximation results, which assume the binomial distribution for N_coll(β) with the averaged probability χ(β) of an NN interaction (see, for example, [21,22]): in this case one has

⟨N_coll(β)⟩_opt = A B χ(β),   (29)

V_opt[N_coll(β)] = A B χ(β) (1 − χ(β)).   (30)

Figure 7: The mean number of collisions ⟨N_coll(β)⟩, calculated by the formulae (23) and (24) and by independent MC simulations, as a function of the impact parameter β (fm). The notations are the same as in Fig.1.
Note also that in the case of pA interactions (A = 1 or B = 1) our result (25) for the variance of the number of collisions coincides with the formula (30) obtained in the optical approximation.
In Figs.8 and 9, as an illustration, we present the results of our numerical calculations of the variance of the number of collisions by the analytical formulae (24)-(27) in the case of PbPb scattering at SPS energies, together with the results obtained from our independent Monte-Carlo simulations of the scattering process (Fig.9 shows the same as Fig.8, but for the scaled variance V[N_coll(β)]/⟨N_coll(β)⟩). We see that the calculated variance of the number of collisions at all values of the impact parameter β is a few times larger than the Poisson one, whereas the variance given by the optical approximation practically coincides with the Poisson one (see the remark after formula (30)). The results obtained by independent Monte-Carlo simulations confirm our analytical result. (The small difference again can be explained by the use of the approximate formulae (24), (26) and (27).)
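The scaled variances plotted in Figs.6 and 9 are obtained from the same event samples; a sketch reusing nA, nB and ncoll from the Monte-Carlo example above (a Poisson distribution corresponds to a scaled variance of 1):

```python
# Scaled variances omega = V[N] / <N>; omega = 1 for a Poisson distribution.
n_w = nA + nB
print("omega(N_w)    =", n_w.var() / n_w.mean())
print("omega(N_coll) =", ncoll.var() / ncoll.mean())
```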
We have also analyzed the dependence of the fluctuations on the diffuseness of the nucleon density distribution in nuclei. To study this dependence, calculations with a smaller (0.3 fm) than standard (0.545 fm) value of the Woods-Saxon parameter κ (16) were performed, which corresponds to a model of a nucleus with a sharper edge (see Figs.10 and 11). The calculations confirm what one would expect from simple physical considerations: a more compact distribution of nucleons in nuclei does not change the mean number of wounded nucleons, but reduces its fluctuations, because in this case the number of wounded nucleons is more strictly determined by the collision geometry. As a result, the scaled variance of the number of wounded nucleons decreases with κ (compare Figs.6 and 10).
As for the number of binary NN collisions, in this case, due to the more compact distribution of nucleons in nuclei, the mean number of collisions increases along with its variance. Therefore the scaled variance of the number of binary collisions depends only weakly on the variation of the parameter κ (compare Figs.9 and 11). It is important that in both cases the contribution of the contact term plays the crucial role.
Discussion and conclusions
It is shown that, although the so-called optical approximation gives the correct results for the average number of wounded nucleons and binary collisions, the corresponding variances cannot be described within this approximation in the case of nucleus-nucleus interactions.
In the framework of the classical Glauber approach, the analytical expression for the variance of the number of participants (wounded nucleons) in AA collisions at a fixed value of the impact parameter is presented. It is shown that, along with the optical approximation contribution, which depends only on the total inelastic NN cross-section, in the case of nucleus-nucleus collisions there is an additional contact term contribution, depending on the shape of the NN interaction probability σ(b) in the impact parameter plane. In the classical Glauber approach this contact contribution arises from taking into account the interactions between two pairs of nucleons in the colliding nuclei (a pair in one nucleus with a pair in another). It is found that interactions of higher order than between two pairs of nucleons do not contribute to the variance, whereas the expression for the mean number of participants proves to be exact already in the optical approximation, which is based on taking into account only the averaged probability of interaction between single nucleons in the projectile and target nuclei.
These results are obtained in the framework of the purely classical (probabilistic) Glauber approach [18]. However, it is possible to suppose that in the quantum case the one-loop expression for the variance and the "tree" expression for the mean number of participants and binary collisions will also be exact.
Using the obtained analytical formulae, a numerical calculation of the variance of the number of participants in PbPb collisions at SPS energies was performed as an example. It is demonstrated that at intermediate and large impact parameter values the optical and contact term contributions are of the same order, and their sum is in good agreement with the results of independent MC simulations of this process.
When calculating the variance of the total (in both nuclei) number of participants, the correlation between the numbers of participants in the colliding nuclei is taken into account. The analytical expression for the correlator at a fixed value of the impact parameter is obtained. The results of the numerical calculations of the correlator for the same process of PbPb collisions show that at intermediate and large values of the impact parameter its contribution to the variance of the total number of participants is about half of the variance in one nucleus, again in good agreement with independent MC simulations.
As a result, for peripheral PbPb collisions the variance of the total number of participants, calculated taking into account the contributions of this correlator and of the contact terms, turns out to be a few times larger than the Poisson one.
In the framework of the same classical Glauber approach, the analytical expression for the variance of the number of NN binary collisions in AA interactions at a given centrality is also found. Along with the optical approximation term it contains other terms, which turn out to be the dominant ones.
Due to these additional terms the variance of the number of collisions at all values of the impact parameter is several times higher than the Poisson one, whereas the variance given by the optical approximation practically coincides with the Poisson one. Again the results obtained by the independent MC simulations confirm our analytical result.
It is important that these additional contact terms in the expressions for the variances arise only in the case of nucleus-nucleus collisions. In the case of proton-nucleus collisions they are absent and the variances are well described by the optical approximation.
Note that we have used the simplest factorized approximation (31) for the nucleon density distribution in nuclei and have not taken into account nucleon-nucleon correlations within one nucleus, which play a fundamental role, for example, in the description of particle production in nuclear collisions outside the domain kinematically available for production from NN scattering (the so-called 'cumulative' phenomena) [26].
The additional contact contribution to the variance of the number of wounded nucleons, as we have found, arises due to interactions between two pairs of nucleons in the colliding nuclei, which need to occur at the same position in the impact parameter plane. Taking into account nucleon-nucleon correlations within one nucleus must increase the probability of such configurations and hence the contribution of the contact term. However, a numerical accounting of these effects is beyond the scope of the present paper.
Interestingly, the nontrivial contact terms in the variances (missing in the optical approximation) arise in our approach already in the framework of the exploited factorized approximation for the nucleon density in nuclei, i.e. without taking into account nucleon-nucleon correlations within one nucleus.

Figure 11: The scaled variance of the number of binary NN-collisions. The same as in Fig.9, but for the nucleon density distribution in nuclei (16) with a smaller value of the Woods-Saxon parameter κ = 0.3 fm.
The authors thank M.A. Braun and G.A. Feofilov for useful discussions. The work was supported by the RFFI grant 09-02-01327-a.
A Calculation of the variance of participants in one nucleus
The geometry of an AB collision is depicted in Fig.12. All a_j and b_k are two-dimensional vectors in the impact parameter plane. In the framework of the classical (probabilistic) approach [18], the dimensionless σ(b) is the probability of inelastic interaction of two nucleons at the impact parameter value b (see also (13)). T_A and T_B are the profile functions of the colliding nuclei A and B. We imply that for heavy nuclei the factorization takes place: the distribution of the transverse nucleon coordinates reduces to the product

T_A(a₁) ... T_A(a_A) T_B(b₁) ... T_B(b_B).   (31)

It is convenient to introduce an abbreviated notation, ⟨...⟩_A and ⟨...⟩_B, for the averaging over the nucleon positions in the nuclei A and B with these profile functions; all integrations imply integration over two-dimensional vectors in the impact parameter plane.

Figure 12: Geometry of an AB collision.

In this notation the definition (10) of the variance is rewritten through the double averaging ⟨⟨...⟩_A⟩_B. Recall that here X̄ means the average of some variate X at fixed positions of all nucleons in A and B, while ⟨...⟩_A and ⟨...⟩_B mean averaging over the positions of these nucleons. We introduce the set of variates X₁, ..., X_A (each equal only to 0 or 1) in the following way: X_j = 1 if the j-th nucleon of the nucleus A interacts with some nucleons of the nucleus B, and X_j = 0 if it does not interact with any nucleons of the nucleus B. The number of participants (wounded nucleons) in the nucleus A in a given collision at the impact parameter β is equal to the sum of these variates:

N_w^A(β) = X₁ + ... + X_A.   (34)

Then we have for the mean value

⟨N_w^A(β)⟩ = ⟨⟨ X̄₁ + ... + X̄_A ⟩_A⟩_B   (35)

and, correspondingly, for the variance of N_w^A(β) the expression (36). At first we calculate the mean value (35). We denote by q_j and p_j the probabilities that the variate X_j is equal to 0 or 1, respectively. It is clear that for given configurations of nucleons {a_j} and {b_k} in the nuclei A and B

q_j = ∏_{k=1}^{B} (1 − σ_jk),   σ_jk = σ(a_j − b_k − β),   p_j = 1 − q_j,   (37), (38)

and

X̄_j = 0 · q_j + 1 · p_j = p_j.   (39)
Note that p_j and q_j are functions of a_j, b₁, ..., b_B and β. Recall that we restrict our consideration to the region of the impact parameter β < R_A + R_B, where the probability of inelastic nucleus-nucleus interaction σ_AB(β) is close to unity. Otherwise one has to introduce in formula (37) for q_j the factor 1/σ_AB(β), where σ_AB = ∫ dβ σ_AB(β) is the so-called production cross-section, which cannot be calculated in a closed form. Substituting (37)-(39) into (35) and averaging first over the positions of the nucleons in the nucleus B, we find

⟨q_j⟩_B = (1 − σ_j)^B,   where   σ_j = ∫ db T_B(b) σ(a_j − b − β) = f_B(a_j − β)   (43)

is a short notation for the B-averaged interaction probability of the j-th nucleon of the nucleus A at fixed a_j. Averaging now over the positions of the nucleons in the nucleus A, we obtain a result which is the same for any j, since a_j is the integration variable:

⟨⟨q_j⟩_B⟩_A = ∫ da T_A(a) [1 − f_B(a − β)]^B = Q(β).

Then by (42) we find

⟨N_w^A(β)⟩ = A [1 − Q(β)] = A P(β),   (46)

which coincides with formula (1) of the text, if one takes into account the connection (see (5) and (43)). We see that the result (46) for the mean number of participants is the same as in the optical approximation (8).
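The probabilistic bookkeeping of this appendix translates directly into code: for each sampled configuration one evaluates q_j as a product over the target nucleons, sets p_j = 1 − q_j, and averages Σ_j p_j over configurations to estimate ⟨N_w^A(β)⟩ = A P(β). A minimal sketch, reusing sample_nucleus and the black-disk σ(b) from the earlier Monte-Carlo example:

```python
def mean_wounded(beta, n_conf=500):
    """Estimate <N_w^A(beta)> by averaging sum_j p_j, with p_j = 1 - q_j
    and q_j = prod_k [1 - sigma_jk] evaluated for each sampled
    configuration; black-disk sigma(b) = theta(r_N - |b|)."""
    total = 0.0
    for _ in range(n_conf):
        a = sample_nucleus(A)
        b = sample_nucleus(B) + np.array([beta, 0.0])
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        sigma_jk = (d2 < r_N**2).astype(float)
        q = np.prod(1.0 - sigma_jk, axis=1)   # q_j, one per projectile nucleon
        total += (1.0 - q).sum()              # sum_j p_j
    return total / n_conf

print("<N_w^A(10 fm)> ~", mean_wounded(10.0))
```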
We now calculate in the same way the variance of N_w^A(β). By (36) one needs the averages of the products X_{j₁} X_{j₂}. Note that the average of the product X_{j₁} X_{j₂} at fixed nucleon positions cannot be reduced to the product of the averages X̄_{j₁} X̄_{j₂}; it is precisely at this point that the optical approximation breaks down for AB collisions. Since by (39) X_j takes only the values 0 and 1, X_j² = X_j, so the average of X_j² equals p_j, and the diagonal terms (j₁ = j₂) of the sum in (48) produce the first, optical term of (2). For the off-diagonal terms of the sum in (48), using (45), one arrives at the quantity Q^(12)(β), which we now calculate. Averaging again first over the positions of the nucleons in the nucleus B, we obtain the expression (51), in which σ_{j₁} and σ_{j₂} are given by (43), together with the analogous pair quantity σ_{(j₁j₂)}. Then, averaging over the positions of the nucleons in the nucleus A, one can rewrite (51) as the representation (53)-(54) of Q^(12)(β), which leads to the formula (2) of the text once (47), (53) and (54) are taken into account.
B Correlation between the numbers of participants in colliding nuclei at fixed centrality

The calculations are similar to those in appendix A (we use the same notations). Along with the set of variates X₁, ..., X_A we introduce, in a symmetric way, the set of variates X̃₁, ..., X̃_B (each again equal only to 0 or 1): X̃_k = 0 (1) if the k-th nucleon of the nucleus B does not interact (interacts) with nucleons of the nucleus A. Then, similarly to (34), for the number of participants (wounded nucleons) in the nucleus B in a given event we have N_w^B(β) = X̃₁ + ... + X̃_B, and, similarly to (39), the correlator is expressed through P_jk(1,1), the probability that both variates X_j and X̃_k are equal to 1. For this probability one finds

P_jk(1,1) = σ_jk + (1 − σ_jk) ρ_jk ρ̃_jk,

where σ_jk is the probability of the interaction of the j-th nucleon of the nucleus A with the k-th nucleon of the nucleus B (see formula (38)), ρ_jk is the probability of the interaction of the j-th nucleon of the nucleus A with at least one nucleon of the nucleus B except the k-th nucleon,

ρ_jk = 1 − ∏_{k′≠k} (1 − σ_jk′),

and, correspondingly, ρ̃_jk is the probability of the interaction of the k-th nucleon of the nucleus B with at least one nucleon of the nucleus A except the j-th nucleon. Combining (56)-(59) and acting as in appendix A, we find the formulae (19)-(22) of the text.
C Fluctuations of the number of collisions
In this appendix we calculate the variance of the number of NN-collisions in AB-interaction at fixed value of centrality in the framework of the approach under consideration.
To calculate the number of collisions we define the set of variates Y₁, ..., Y_A, each of which can take a value from 0 to B: if in a given event the j-th nucleon of the nucleus A interacts with n nucleons of the nucleus B, then Y_j = n. The number of NN collisions in the given event at the impact parameter β is expressed through these variates as N_coll(β) = Y₁ + ... + Y_A, and, clearly, the mean value and the variance are again obtained by the double averaging of appendix A. To calculate P(Y_j = n) for n = 1, ..., B we introduce {k₁, ..., k_n}, a sampling from the set {1, ..., B}, and {k_{n+1}, ..., k_B}, the rest after the sampling. First we again calculate the mean value of the number of collisions. For a given configuration {a_j} and {b_k} we have Ȳ_j = Σ_k σ_jk (62). Using (62) and averaging over the positions of the nucleons in the nucleus B (with the same notations as in appendix A, see (43)), and then over the positions of the nucleons in the nucleus A, we finally find

⟨N_coll(β)⟩ = A B χ(β),   (66)

with χ(β) = ⟨σ_j⟩_A, which coincides with the formulae (23) and (24) of the text. Comparing (66) and (29), we see that the result for the mean number of collisions is the same as in the optical approximation.
In the rest of the appendix we calculate the variance of the number of collisions. By the definition (69) of the variance one has to calculate the sums of the averages of the products Y_{j₁} Y_{j₂} with j₁ ≠ j₂ (70) and of the averages of Y_j² (72). To calculate the first sum we denote by k′₁, ..., k′_n the indices of the nucleons of the nucleus B which interact only with the nucleon j₁ of the nucleus A; by k″₁, ..., k″_m the indices of the nucleons which interact only with the nucleon j₂ of the nucleus A; by k₁, ..., k_r the indices of the nucleons which interact with both nucleons j₁ and j₂; and by k̃₁, ..., k̃_{B−n−m−r} the indices of the nucleons of the nucleus B which interact with neither of the nucleons j₁ and j₂. The probability p_{j₁j₂} of such an event is the product of the corresponding factors over these four groups of indices; using (74) and (75) it can be rewritten so that the factor associated with the non-interacting nucleons takes the form

∏_i (1 − σ_{j₁k̃_i} − σ_{j₂k̃_i} + σ_{j₁k̃_i} σ_{j₂k̃_i}).   (76)

The probability P_{j₁j₂}(n, m, r) that the nucleons j₁ and j₂ of the nucleus A interact separately with n and m nucleons of the nucleus B and simultaneously with r nucleons of the nucleus B is obtained by summing p_{j₁j₂} over all the corresponding samplings; after averaging over the positions of the nucleons in the nucleus B it is expressed through the short notations x = σ_{j₁}, y = σ_{j₂}, z = σ_{(j₁j₂)}.
For the components of the second sum (72) a similar but much simpler calculation applies. Averaging over the positions of the nucleons in the nucleus A, one can rewrite (70) accordingly. Recalling that σ_{j₁}, σ_{j₂} and σ_{(j₁j₂)} are given by the formulae (43) and (54), with χ(β), χ₁(β) and χ̃₁(β) defined by the formulae (24), (26) and (27) of the text, and using the definition (69) together with the formula (66) for ⟨N_coll(β)⟩, we arrive at the expression (25) of the text for the variance of the number of collisions.
What physicians should know about the management of chronic hepatitis B in children: East side story
Understanding the natural course of chronic hepatitis B virus (HBV) infection is very important for the management and treatment of chronic hepatitis B in children. Based on treatment guidelines, the management of HBV carriers and the treatment of active hepatitis have been advancing, resulting in increased survival as well as decreased risks of complications such as liver cirrhosis and hepatocellular carcinoma. Development of a continuing medical education (CME) program for primary physicians has become an important responsibility of pediatric hepatologists. CME could prevent misdiagnosis and unnecessary treatment that could lead to liver complications or antiviral resistance. In addition, education of patients and their parents is necessary to achieve better therapeutic outcomes.
INTRODUCTION
Chronic hepatitis B is defined as the presence of HBsAg and elevated serum alanine aminotransferase (ALT) levels for more than 6 mo, along with distinctive necroinflammation in the liver [1] .
More than 240 million people have chronic hepatitis B in the world, with the highest incidence in sub-Saharan African and East Asian regions [2]. In China, approximately 120 million people are hepatitis B virus (HBV) carriers [3]. Most people in HBV-prevalent countries become infected with HBV during childhood, which imposes a financial burden on governments, especially in East Asia.
The universal vaccination program has reduced the prevalence rate of hepatitis B, especially in children. The seroprevalence of HBsAg in Korean children decreased to 0.2% of preschool children and 0.44% of early teenage students in 2007 owing to the universal vaccination and hepatitis B perinatal transmission prevention program [1]. In Taiwanese children (< 15 years of age), the rate of HBsAg positivity dropped to 0.6% after the successful implementation of universal vaccination for 20 years, compared to 10% in the pre-vaccination era [4].
However, treatment is not always easy because of inattention to the natural course of chronic hepatitis B in children and emerging antiviral drug resistance. It is essential to minimize the severity of active hepatitis, reduce the length of the immune clearance phase, and avoid inappropriate treatment or negligence. Spontaneous HBeAg seroconversion does not indicate histologic remission of chronic hepatitis. Currently, the medications available for the treatment of chronic hepatitis B infection in children are conventional interferon (IFN), lamivudine and adefovir. Global studies on potent antiviral agents such as entecavir and tenofovir in children are ongoing; however, these drugs still need to be approved as primary treatment options in children.
Meanwhile, there are still many misconceptions about the natural course, diagnosis and treatment of chronic hepatitis B, especially in children. "Children are not little adults" holds particularly true for chronic hepatitis B. In the East, where vertical infection is the major transmission mode, it is difficult to determine the facts on the basis of western journal articles alone. The authors have reviewed published articles comparing Asian children to adults as well as to western children, especially regarding the management and treatment of chronic hepatitis B in children.
HEPATITIS B
After acute HBV exposure, 90% of infants of HBeAg-seropositive mothers become chronic HBV carriers, whereas 25%-70% of children aged < 3-5 years and 5% of adults become chronic carriers [5-7]. The immature immune system of young children may account for this.
Chronic hepatitis B in children, defined by HBsAg positivity for at least six months, is transmitted predominantly through vertical infection [7]. In this case, the initial phase is the immune tolerance phase, in which HBeAg is positive and the serum HBV DNA level is over 10⁵ copies/mL (in practice, 10⁷-10¹⁰ copies/mL) due to a high level of HBV replication; however, HBV carriers are generally asymptomatic with normal aspartate aminotransferase (AST)/ALT values. Liver damage is minimal regardless of the active HBV replication rate during this phase, for HBV is a virus that does not attack hepatocytes directly [8].
As the stage progresses into the immune clearance phase (immune active phase), accompanied by necrosis of liver tissue, ALT rises due to the transaminases released from damaged hepatocytes. This results in chronic active hepatitis with HBeAg positivity and fluctuation of the serum HBV DNA level, complicated by necrosis and fibrosis of liver tissue. The majority of vertically infected patients progress into chronic active hepatitis before the age of 35. Though liver damage is usually mild in children, liver cirrhosis or hepatocellular carcinoma may even develop as complications 2-7 years after vertical infection [9,10].
At the end of the immune clearance phase and the transition to the non/low replicative phase, removal of infected hepatocytes results in low HBV DNA and HBeAg seroconversion [11]. In this stage, also known as the inactive HBV carrier state, serum aminotransferase normalizes and serum HBV DNA declines to less than 2000 IU/mL (10⁴ copies/mL). While two thirds of untreated inactive carriers stay in the non (low) replicative phase for a prolonged period, it is important to note that approximately 20% of untreated inactive carriers may go through the reactivation phase, some through repetitive hepatitis flares, and 4% may revert to HBeAg-positive chronic hepatitis B [12,13]. However, such acute exacerbations are unusual in children after HBeAg seroconversion [14].
SPONTANEOUS HBEAG/ANTI-HBE SEROCONVERSION
It is rare for HBeAg to spontaneously seroconvert during the immune tolerance phase. Spontaneous HBeAg seroconversion occurs during the immune clearance phase. It takes place at different periods of life, from childhood to the fifth decade, but most commonly between the ages of 15 and 30 years. Seroconversion in the thirties and forties is most common in Asian countries, where vertical transmission is the major route, due to the slower development of the immune clearance phase [12].
The rate of spontaneous seroconversion is lower in children than in adults. Spontaneous HBeAg seroconversion has been shown to be less than 2% annually in children aged < 3 years, which increases to 4%-5% over the age of 3, and 10%-14% for the 10-14 year age group in Taiwan [15] . In South Korea, HBeAg clearance occurs in around one third of HBV infected children by 19 years of age [16] .
In Asia, the major mode of HBV infection is usually via vertical transmission which leads to a prolonged immune-tolerant phase. That may explain why the spontaneous seroconversion rate is low in Asia compared to western countries. Higher HBeAg seroconversion rates have been reported in children infected horizontally than in those infected perinatally [17,18] .
Although HBeAg seroconversion during treatment is the primary therapeutic target, spontaneous HBeAg seroconversion occurs through the immune clearance phase, after necroinflammation of the liver and subsequent removal of infected hepatocytes. It is during this phase that the likelihood of liver complications may increase, according to the severity and duration of active hepatitis. Even normalization of ALT and disappearance of HBeAg would not guarantee the prevention of liver cirrhosis or hepatocellular carcinoma if the disease has been left untreated or neglected, for example after a series of inappropriate herbal remedies that may have aggravated it for a prolonged period (Figure 1A). In general, spontaneous HBeAg seroconversion usually leads to a good prognosis in adults; however, one third of cases of active hepatitis (HBeAg positive or negative) may recur, and some of these patients may even develop liver cirrhosis or hepatocellular carcinoma [12]. A study of European children with horizontally transmitted chronic hepatitis B, with an average of 5.2 ± 4.0 years of follow-up, showed that most of them lost HBeAg and had a good prognosis, but some made the transition to HBeAg-negative chronic hepatitis or progressed to hepatocellular carcinoma [19].
It is extremely rare for HBsAg to spontaneously disappear in South Korea, unlike in western countries, where the major route of infection is horizontal transmission. In Taiwan, a country where the majority of hepatitis B is transmitted vertically, as in Korea, the spontaneous HBsAg loss rate is also very low, 0.56%, compared with western countries [20,21]. Korean reports also show that HBsAg spontaneously disappears in only 1.5% of patients under the age of nineteen [16].
CHRONIC HEPATITIS B CARRIERS
Most chronic hepatitis B patients are asymptomatic. Childhood chronic hepatitis B generally goes through the immune tolerance phase where ALT is normal and there is little or no inflammation in the liver tissue. This period persists for about 10-30 years but may also progress into the immune clearance phase in children [8] . In two follow up studies in Korea and Taiwan, spontaneous HBeAg seroconversion occurred in 11% and 24% of children under the age of 10, respectively [16,20] . In addition, these studies showed that 32% of those under the age of 19 turned HBeAg negative in South Korea [16] .
Since spontaneous HBeAg seroconversion occurs at the end of the immune clearance phase, a long-lasting immune clearance phase can lead to the development of liver cirrhosis or even hepatocellular carcinoma. Therefore, early management to shorten the duration of active hepatitis (the immune clearance phase) is important (Figure 1B). Monitoring ALT levels at intervals of six months or less is necessary to avoid patients going through unrecognized active hepatitis in their childhood, as well as to detect the earliest transition time to the immune clearance phase for patients in the immune tolerant phase. In a Korean study of children with chronic hepatitis B, the estimated rate of entry into the immune clearance phase was 4.6% for those under the age of 6, 7.1% for those between 6 and 12 years, and 28.0% for patients between 12 and 18 years, the latter being significantly higher than that observed for children under the age of 12 [22].
If left untreated or treated inappropriately during the immune clearance phase, liver damage is unavoidable, having already occurred and progressed even if spontaneous HBeAg seroconversion takes place, because the rate of HBeAg seroconversion is not significantly increased by long-term follow-up, treated or not [23]. The risk is known to increase in proportion to the time elapsed until the point of HBeAg seroconversion [24-27].
Laboratory interpretation
Positive HBeAg does not mean active hepatitis in children. HBeAg is positive in both the immune tolerance phase and the immune clearance phase. Serum HBV DNA, ALT, and the HBeAg titer are the parameters for distinguishing the two phases. Active hepatitis is defined when serum transaminases (AST/ALT) are persistently increased and the serum HBV DNA level is over 10⁷-10¹⁰ copies/mL, regardless of HBeAg status.
If the HBV DNA level is high despite negative HBeAg, it may indicate the possibility of HBeAg-negative chronic hepatitis B caused by a precore mutant virus or a core promoter mutation [28]. These types of hepatitis are also classified as active hepatitis B if they are accompanied by high levels of serum ALT and serum HBV DNA over 10⁴-10⁸ copies/mL, and they also need active treatment [29]. This entity is not prevalent but is not negligible during childhood, and it needs special attention during treatment and monitoring [28]. Quantitative serum HBV DNA tests are especially important for analyzing the virologic response (VR), since it is impossible to assess the level of viral replication with HBeAg alone in HBeAg-negative chronic hepatitis patients.
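The laboratory rules above amount to a simple decision table over HBeAg status, ALT and HBV DNA level. The sketch below encodes that table with the thresholds quoted in the text (using roughly 1 IU/mL ≈ 5 copies/mL, so that 2000 IU/mL ≈ 10⁴ copies/mL); it is a rough illustration only, since real classification requires repeated measurements and clinical judgment.

```python
def hbv_phase(hbeag_positive: bool, alt_elevated: bool, dna_copies_per_ml: float) -> str:
    """Crude phase classification of chronic HBV infection in children,
    using thresholds quoted in the text (illustrative only; roughly
    1 IU/mL ~ 5 copies/mL, so 2000 IU/mL ~ 1e4 copies/mL)."""
    if hbeag_positive:
        if not alt_elevated:
            return "immune tolerance phase: carrier, monitor ALT, do not treat"
        if dna_copies_per_ml > 1e7:   # 10^7-10^10 copies/mL in practice
            return "immune clearance phase: active hepatitis, consider treatment"
    else:
        if alt_elevated and dna_copies_per_ml > 1e4:
            return "HBeAg-negative chronic hepatitis: suspect precore/core promoter mutant"
        if not alt_elevated and dna_copies_per_ml < 1e4:   # < ~2000 IU/mL
            return "inactive (non/low replicative) carrier"
    return "indeterminate: repeat testing and exclude other causes of ALT elevation"

print(hbv_phase(True, False, 5e8))   # -> immune tolerance phase ...
```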
Consider treatment
Treatment should be considered, but not initiated immediately, when HBeAg is positive and HBV DNA and ALT begin to increase. Treatment is considered in patients who have had serological evidence of HBV infection for at least 6 mo: that is, when HBsAg, HBeAg and HBV DNA are positive and ALT is consistently elevated. This would be regarded as the immune clearance phase of chronic hepatitis B, which is appropriate for the initiation of treatment.
Special caution is needed before starting treatment immediately in HBV carriers with elevated liver enzymes. ALT elevation may not be due to chronic hepatitis B, but rather to other systemic infections such as respiratory tract infection and urinary tract infection in infants and toddlers [8]. Moreover, as obesity is becoming a social problem nowadays, there is an increasing number of obese children and adolescents who may also have nonalcoholic fatty liver disease (NAFLD). Weight monitoring should be the priority in this case, and if ALT does not normalize after weight reduction, a liver biopsy should be done to determine whether the liver damage is associated with hepatitis B or NAFLD. HBV carriers with concomitant Wilson's disease or muscular dystrophy are among the other plausible explanations for high ALT levels that need to be included in the differential diagnosis in children.
Even with an apparent increase in ALT, starting treatment at less than twice the upper limit of normal ALT may lead to a higher chance of drug resistance, which could result in a higher rate of treatment failure [37].
Good predictor of treatment
Therapeutic response is better when serum ALT is high and HBV DNA is low; that does not mean that initiation of treatment should be left until this stage. The optimal treatment time should not be delayed until the end of the immune clearance phase, when serum ALT is high and HBV DNA low, overlapping the period of HBeAg seroconversion. If left untreated until this period, the HBeAg seroconversion rate may appear to increase upon initiation of treatment at this moment because of the additional effect of spontaneous HBeAg seroconversion; however, this would mean neglecting the chance to treat at the optimal time during the early immune clearance phase. Our ultimate goal does not lie in HBeAg seroconversion only. It lies in halting the replication of HBV, normalizing ALT and stopping the progression to liver cirrhosis or hepatocellular carcinoma, so as to reduce complications and mortality before the liver damage becomes irreversible [38]. The immune clearance phase is when liver tissue necrosis results in fibrosis through active inflammation; its severity and prolonged duration relate to higher rates of complications such as liver cirrhosis (Figure 1). Children and adolescents are no exception.
Liver biopsy
Liver biopsy is imperative for assessing the necroinflammatory grade and fibrosis stage of liver damage, and serves as a guide for making treatment decisions [30]. However, histological assessment is not essential for the initiation of treatment, especially in children [31].
Biopsy results are usually near normal in HBV carrier children; however, that does not mean that they are always normal. Most liver biopsy samples from children with chronic hepatitis B show minor inflammation and fibrosis; nevertheless, cases exist with severity up to liver cirrhosis or hepatocellular carcinoma [32] . An HBV carrier whose ALT level has been normal since they were young would be in the immune tolerance phase and the liver tissue would be favorable without significant necroinflammation or fibrosis [33] . However, prognosis and liver status would be different from those of HBeAg negative carriers in the low or nonreplicative phase who have gone through active hepatitis. Histology of the liver in the nonreplicative phase may vary from minimal necroinflammation and fibrosis to liver cirrhosis depending on the period and severity of liver damage during the immune clearance phase. The fact that even HBeAg positive healthy carriers show signs of chronic hepatitis in 40% of cases indicates that the majority of Korean adult carriers go through chronic active hepatitis B without realizing it [34] . These findings are similar to those studied in western countries, where extended fibrosis and even liver cirrhosis have been found in more than half of liver biopsy samples of children (ages 1-19, mean age 9.8 years) with HBeAg positivity and increased ALT [35] .
As a non-invasive alternative test to liver biopsy, liver stiffness measurement (Fibroscan®) is frequently used for the evaluation of liver fibrosis, which is valuable in fibrosis staging in Asian patients with chronic hepatitis B [36] .
TREATMENT OF CHRONIC HEPATITIS B
While most children with chronic hepatitis B remain asymptomatic, the disease may progress to liver cirrhosis or hepatocellular carcinoma if left untreated during the immune clearance phase. Keen attention should be paid to the fact that a predominant portion of patients with chronic hepatitis B who progress to liver cirrhosis or hepatocellular carcinoma were infected during childhood. Therefore, persistent follow-up is mandatory in order not to miss the appropriate period for treatment. Active treatment should be considered at the onset of the immune clearance phase.
Wait and see
Treatment is not needed for children with normal ALT, for this is not active hepatitis, even though serum HBV DNA levels are very high and HBeAg is positive. HBeAg positivity does not indicate active hepatitis during the immune tolerance phase. Treatment during this period does not ameliorate already near-normal liver tissue, nor does it bring about elimination of HBeAg. On the other hand, such unnecessary treatment would induce tolerance to the medication and may lead to treatment failure in the future, when active hepatitis may flare. Treatment is therefore not given to HBV carriers with normal AST/ALT values in children [8].
Delayed treatment and liver complication
Complications of chronic hepatitis B (liver cirrhosis, hepatocellular carcinoma) can develop even in teenage carriers. It is known that the longer the immune clearance phase lasts and the more frequently ALT flare-ups occur, the more likely liver cirrhosis or hepatocellular carcinoma is to occur. If active hepatitis is left untreated during the immune clearance phase, there is no guarantee that it will not progress into liver cirrhosis or hepatocellular carcinoma even if spontaneous seroconversion of HBeAg or HBsAg occurs [18,19].
The rate of progression to liver cirrhosis in HBeAg-positive patients is known to be related to the length of the HBeAg-positive period and the reactivation rate of hepatitis [27]. A serum HBV DNA level persistently over 10⁴ copies/mL is a risk factor for hepatocellular carcinoma [39]. Therefore, the longer the immune tolerance and immune clearance phases, the more likely complications of chronic hepatitis are to occur; of the two, the immune clearance phase plays the key role. In the natural course of chronic hepatitis B, it may seem as if the severity of liver complications increases from the point where HBeAg is lost. However, when HBeAg is lost during the initial part of the immune clearance phase, the HBeAg-positive period and the high-HBV-load period are shortened, meaning a reduction in liver complications as well. Meanwhile, it is necessary at all costs to check serum HBV DNA after the loss of HBeAg to confirm whether or not it is HBeAg-negative chronic hepatitis.
NUCLEOS(T)IDE ANALOGUE
The therapeutic efficacy of IFN-alpha in a multinational randomized controlled study in children was 33% at 48 wk after the initiation of 24 wk of treatment [40]. IFN therapy may offer a greater chance of cure in terms of HBsAg clearance, especially in children < 5 years of age [41]. Currently, clinical trials of pegylated interferon-α alone or combined with a nucleos(t)ide analogue are under way in children and may yield promising results, especially in western countries.
The predominant HBV genotypes are B and C in China and South East Asian countries [23]. In adult studies, the therapeutic response to IFN was better for genotype B than for genotype C [42]. However, IFN is not very effective in genotype C-predominant regions, where most children are vertically infected [43,44]. Children treated with lamivudine showed significantly higher HBeAg and HBsAg seroconversion rates compared with IFN-treated children in a long-term follow-up study in Korea, where genotype C is predominant [45].
Antiviral resistance
Drug resistance increases with prolonged use of the nucleos(t)ide analogues; the resistance rate seen in children after two years is 23% [45]. Liver function may deteriorate once antiviral resistance due to the YMDD mutation develops, and substitution of the drug may be required. A successful therapeutic response can be achieved when HBeAg seroconversion occurs before the advent of drug resistance.
At present, entecavir and tenofovir are registered for use in children over 16 years and 12 years, respectively, though they are the first-line antiviral agents according to most HBV treatment guidelines. In children with lamivudine-resistant chronic hepatitis B, more effective virologic responses have been demonstrated with either add-on adefovir or a switch to entecavir monotherapy than with a switch to adefovir monotherapy [46]. However, resistance to lamivudine is a risk factor for entecavir resistance, and adefovir is no longer the first-line option.
How long to use
The nucleos(t)ide analogue is not a drug that should be stopped after a certain period of time, but rather a medication to be continued until HBeAg seroconversion occurs. If it is stopped prior to seroconversion, hepatitis is very likely to reactivate. The therapeutic goals for antiviral treatment in HBeAg positive patients are as follows: undetectable HBV DNA by PCR (< 10³ copies/mL), normalized serum ALT, at least one year of undetected HBeAg before discontinuing the medication, and persistence of anti-HBe [47,48]. HBV DNA suppression correlates closely with recovery of liver tissue and HBeAg seroconversion [49].
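For illustration only, the four discontinuation goals above can be written as a simple rule check. This is a hypothetical sketch, not a validated clinical decision tool, and the function and parameter names are invented here:

def may_discontinue_nuc(hbv_dna_copies_per_ml, alt_normalized,
                        months_hbeag_undetected, anti_hbe_persistent):
    # All four goals listed in the text must hold before stopping the drug
    return (hbv_dna_copies_per_ml < 1e3          # undetectable HBV DNA by PCR
            and alt_normalized                   # normalized serum ALT
            and months_hbeag_undetected >= 12    # >= 1 year without HBeAg
            and anti_hbe_persistent)             # persistence of anti-HBe

# Example: DNA suppressed, ALT normal, HBeAg undetected for 14 months
print(may_discontinue_nuc(5e2, True, 14, True))  # True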
It is true that up to 10% of infected hepatocytes may survive after one year of treatment with lamivudine, since lamivudine does not affect the covalently closed circular DNA (cccDNA) of HBV inside the nucleus of hepatocytes [50]. If replication is treated and suppressed for a sufficient period, however, HBV DNA will decrease to a level where resistance to antiviral agents becomes very unlikely. Complete viral eradication means removal of the remaining cccDNA from the hepatocytes, and this is made possible by the normal turnover of hepatocytes and reactivation of the host immune response [28]. This supports the 2011 KASL and 2012 EASL guidelines for continuing the nucleos(t)ide analogue for one year or more after disappearance of HBeAg [48,51].
Special caution is needed in HBeAg negative chronic hepatitis patients when using the nucleos(t)ide analogue so that the medication is not halted shortly after HBV DNA clearance. HBeAg/anti-HBe seroconversion cannot serve as an endpoint in HBeAg negative patients, since they are anti-HBe positive from the initial time point. In my opinion, at least two or three additional years of antiviral treatment are required for such children after the occurrence of HBV DNA clearance [52].
HBsAg clearance
In adults, treatment of hepatitis rarely results in a loss of HBsAg, which is the ultimate goal of treating chronic hepatitis B. However, in a study conducted by Choe et al [45], loss of HBsAg occurred in 42% of preschool-age patients after two years of lamivudine treatment. HBsAg disappeared in 13 of the 49 (26.5%) patients who experienced lamivudine-induced HBeAg seroconversion [53]. However, HBsAg loss rarely occurred in school-age children, similar to adults.
Pretreatment baseline ALT > 2 × ULN
The resistance rates to lamivudine were reported in children (pretreatment baseline ALT > 2 × ULN) to be 10% at 1 year of treatment and 23%-26% at 2 years after the initiation of treatment [37,45] . Breakthrough was only seen in 5.9% of the 2-year treatment group in preschool children with higher pretreatment baseline ALT [37] . Similar results have been shown by other studies in Korean children [54,55] . The results in children are lower than those in adult studies [56,57] .
On the other hand, in a western multicenter study, antiviral resistance analysis showed that the YMDD mutation rate was as high as 49% after 2 years of treatment and 64% after 3 years [58]. These outcomes were even worse than those of adults.
That study enrolled an inappropriately small proportion of patients with active hepatitis: only 51% and 11% of children with pretreatment ALT > 2 × ULN were enrolled in the 2- and 3-year treatment groups, respectively. Enrolling significant numbers of patients with a pretreatment ALT level < 2 × ULN explains such a high resistance rate in western children. Therefore, an elevated pretreatment ALT level (> 2 × ULN) is key to decreasing antiviral resistance.
Confirm the immune clearance phase
Primary treatment with the nucleos(t)ide analogue should be considered if patients have persistently elevated ALT levels > 2 × ULN for more than 6 mo, nonetheless it should be confirmed that patients are in the immune clearance phase. Prior to starting therapy, a comprehensive assessment of the patient's status is important to exclude other causes of abnormal liver function tests, such as reactive hepatitis in infants and nonalcoholic steatohepatitis in obese children.
Compliance
One of the most important predictive factors for the therapeutic response to the nucleos(t)ide analogue is good compliance [59]. A significant portion of viral breakthrough is due to poor adherence to the medication [60]. Education of children and their parents (guardians) is required to maintain good compliance, leading to the ideal therapeutic outcome based on trust between doctors and patients. In addition, the optimal dosage according to body weight should be adjusted by clinicians as children grow rapidly.
On-treatment monitoring and keeping up to date with the most current guidelines
Treatment response at 24 wk is important in deciding which treatment strategy to use. Patients with a complete virologic response generally have a very low possibility of antiviral resistance [61].
Proper monitoring for virologic breakthrough is needed, because early detection is critical for deciding on the most appropriate intervention. If the nucleos(t)ide analogue is stopped before or immediately after HBeAg seroconversion, the likelihood of recurrence increases. Therefore, the 2011 KASL (Korean Association for the Study of the Liver) guidelines and 2012 EASL guidelines advise continuing the nucleos(t)ide analogue for one year or more after the disappearance of HBeAg to sustain a serological and/or virological response [48,51].
CONCLUSION
In most children with chronic hepatitis B, treatment indications should be very carefully evaluated. Treatment should concentrate on suppressing viral replication in the early period of immune-active hepatitis. Highly potent nucleos(t)ide analogues such as tenofovir and entecavir, which also have a high genetic barrier, are anticipated to become available for use in younger children.
Cup Overhanging in Anatomic Socket Position or High Hip Center of Rotation in Total Hip Arthroplasty for Crowe III and IV Dysplasia: A CT-Based Simulation
Cup overhanging in total hip arthroplasty is a predisposing factor to iliopsoas impingement. In dysplastic hips, cup implantation was simulated in an anatomic hip center of rotation (AHCR) and in a high hip center (HHCR). We sought to: (1) assess the percentage of prominent cups; and (2) quantify the cup protrusion at different sites on frontal, axial and sagittal views. In 40 Crowe III-IV hips, using 3D CT-based planning software, cup planning in AHCR and HHCR (CR height ≥ 20 mm) was performed for every hip. Cup prominence was assessed on every plane. HHCR cups were less anteverted (p < 0.01), less medialized (p < 0.001) and less caudal (p = 0.01) than AHCR sockets. AHCR cups were more frequently prominent on at least one plane (92.5% vs. 77.5%), with minimal agreement between the two configurations (k = 0.31, p = 0.07). AHCR cups protruded more than HHCR sockets in the sagittal (p = 0.02) and axial planes (p < 0.001). Axially, at the center of the cup, prominence of 6–11 mm occurred in nine (22.5%) AHCR and one (2.5%) HHCR socket. In conclusion, while a routine high hip center should not be recommended, cup placement at a center of rotation height < 20 mm is associated with higher rates and magnitudes of anterior cup protrusion in severe dysplasia.
Introduction
Iliopsoas impingement is an uncommon cause of groin pain after total hip arthroplasty (THA), occurring in 2-5% of implants [1,2]. Cup overhanging has been advocated as one of the most important causative factors: Dora et al. and Cyteval et al. reported that an anterior cup prominence between 5.8 and 12 mm is necessary, but not sufficient, to cause iliopsoas impingement on the acetabular component [3,4]. Cup prominence may be due to cup oversizing or version mismatch between the acetabular component and the native acetabulum; THA for developmental dysplasia of the hip (DDH) may be more prone to iliopsoas impingement, due to the anterior and superolateral bony deficiency and the small-sized acetabulum [5]. There is a paucity of literature providing data about the incidence of iliopsoas impingement and possible causative factors in THA for DDH. Only a comparative study by Zhu et al. reported that iliopsoas impingement occurred in 2.6% of the THAs performed in highly dysplastic hips with no prior surgical release, whereas the incidence in non-dysplastic hips dropped to 0.8% [5]. Moreover, there is no study investigating the incidence of cup overhanging in DDH when the cup was positioned to reproduce the anatomic hip center of rotation (AHCR) or when a high hip center of rotation (HHCR) was adopted. A recent systematic review of comparative studies investigating the center of rotation height found that, although HHCR had higher rates of dislocation and lower rates of neurological injury, both configurations achieved similar clinical and radiographic outcomes, even at long term, and no recommendation about center of rotation height was formulated [6]. However, iliopsoas tendinopathy and cup overhanging were not mentioned [6].
Thus, we selected a series of Crowe III and IV dysplastic hips. Using a 3D CT-based pre-operative planning software, a cup implantation was simulated in AHCR and HHCR, in every hip. We sought to: (1) determine the percentage of prominent cups in high-riding dysplasia in AHCR and HHCR, (2) quantify the cup protrusion at different sites on frontal, axial and sagittal views in AHCR and HHCR.
Materials and Methods
The institutional review board approved the study (IRB 29/2021/Oss/IOR, 1 February 2021). From the hospital CT database, which has collected scans since 2000, a random series of 45 pelvis CTs with native dysplastic hips graded III or IV according to Crowe was selected [5]. Other forms of congenital pathology or lower degrees of dysplasia, as well as non-native hips, were excluded.
The demographic features of the selected patients were collected. Every CT scan was evaluated using a 3D CT-based pre-operative planning software, Hip-Op [7]. The software is a CT-based 3D planning environment with a user-friendly graphical user interface, based on a multimodal display visualization. The implants and the patient anatomy are rendered in each view. The planner may select a prosthetic component from a library and may evaluate the appropriate implant type, size and position by interactively moving and rotating the components in the view area. Pre- and post-planning measurements can be performed using pre-determined and customized techniques.
The anatomy of every native hip was evaluated by collecting data about the center of rotation height, neck-shaft angle, femoral offset, acetabular offset and acetabular anteversion. All the measurements were performed by the first author, using techniques previously detailed in other papers [7]. Inter/intra-observer reliability was demonstrated in another paper [7].
In every hip, the first author simulated the implantation of a standard-sized acetabular cup (Continuum; Zimmer, Warsaw, IN, USA) in both the AHCR and HHCR positions. In AHCR, the acetabular component was placed in the true acetabulum with the aim of placing the center of rotation in the anatomic position; the anatomic position was cross-checked by assessing the center of rotation height, which should be < 20 mm [6]. In contrast, in HHCR, the cup center of rotation was positioned outside the true acetabulum (and the anatomic center of rotation of the hip), where the superolateral bony coverage was most adequate: a center of rotation height ≥ 20 mm from the inter-teardrop line (vertical distance) was required [6]. The 20 mm threshold was obtained from the vertical distance ranges provided in the comparative study by Nawabi et al.; it is a low threshold for the center of rotation height, in line with many dedicated comparative studies about AHCR and HHCR (and around 15 mm lower than the classical high hip center proposed by Harris et al.) [6]. For both solutions, the cup abduction and cup anteversion targets were set at 40-45° and 10-20°, respectively [7]. Some adjustments were allowed to maximize the cup coverage; however, a cup inclination higher than 50° and a cup retroversion or a cup anteversion higher than 30° were not allowed. A superior cup undercoverage of more than 33% of the socket surface or pelvic disjunction due to anterior or posterior column violation was likewise not allowed. If the cup could not be placed in both the AHCR and HHCR positions, the case was excluded.
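Purely as an illustration, the placement constraints above can be expressed as a small admissibility check. The function names and the binary column-violation flag are hypothetical and are not part of the Hip-Op software:

def cup_placement_admissible(inclination_deg, anteversion_deg,
                             undercoverage_pct, column_violation):
    # Limits taken from the text; a position failing any limit is rejected
    if inclination_deg > 50:
        return False                  # inclination must not exceed 50 deg
    if anteversion_deg < 0 or anteversion_deg > 30:
        return False                  # no retroversion, anteversion <= 30 deg
    if undercoverage_pct > 33:
        return False                  # superior undercoverage <= 33%
    if column_violation:
        return False                  # no anterior/posterior column violation
    return True

def classify_center(cr_height_mm):
    # 20 mm threshold from the inter-teardrop line separates AHCR from HHCR
    return "HHCR" if cr_height_mm >= 20 else "AHCR"

# Example: 45 deg abduction, 15 deg anteversion, 20% undercoverage
print(cup_placement_admissible(45, 15, 20, False))  # True
print(classify_center(24))                          # HHCR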
The planner detailed the cup size, the cup positioning (anteversion, abduction, medialization, center of rotation height, distance between the anterior margin of the native acetabulum and the most anterior surface of the cup on the axial view at the center of the cup), in AHCR and HHCR [7][8][9].
In AHCR and HHCR, the outcome measurements for all the included cases were taken on the three planes, at 5 different sites, as follows: superior cup overhanging on the frontal view (CT slice passing at the center of the cup); anterior cup overhanging on the sagittal view (CT slice passing at the center of the cup); anterior cup overhanging on the axial view (CT slices passing at the superior edge of the cup, in the middle of the superior half of the cup and at the center of the cup). The correct segmentation was provided by the software (Figure 1).
The percentages of prominent cups on every plane and at every site were determined in AHCR and HHCR. The percentages of cups with a prominence of 6-11 mm and over 12 mm on every plane and at every site were also assessed; the target values were taken from the literature [3,4].
All the measurements were taken as distances, considering the most prominent bony landmarks and the most prominent part of the cup on each view.
The inter/intra-rater reliability of the first author as planner was assessed in a previous paper [7].
Statistical Analysis
The analysis was performed using SPSS 14.0 (SPSS Inc., Chicago, IL, USA). Quantitative data were reported as average values, standard deviations and ranges of minimum and maximum; qualitative data were expressed as frequencies and percentages. The ordinal variables of the two cohorts were compared using the non-parametric Wilcoxon test. Cohen's kappa coefficient (κ) was used to measure the agreement of the two configurations at the 5 sites; the p-value for kappa was calculated to determine whether or not to reject the corresponding null hypothesis. The Wilcoxon signed-rank test was adopted to test the difference between the two configurations. The threshold for significance was set at p = 0.05.
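As a rough illustration of the two paired analyses named above (Cohen's kappa for agreement and the Wilcoxon signed-rank test for paired differences), the following Python sketch uses invented per-hip prominence indicators; it is not the authors' SPSS workflow:

import numpy as np
from scipy.stats import wilcoxon
from sklearn.metrics import cohen_kappa_score

# Hypothetical paired data for the same hips: 1 = cup prominent, 0 = not
ahcr = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1])
hhcr = np.array([1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1])

kappa = cohen_kappa_score(ahcr, hhcr)  # agreement between configurations
stat, p = wilcoxon(ahcr, hhcr)         # paired difference between them
print(f"kappa = {kappa:.2f}, Wilcoxon p = {p:.3f}")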
Percentage of Cup Overhanging in AHCR and HHCR
Prominent cups on at least a single site were 37 (92.5%) in AHCR and 28 (77.5%) in HHCR. A cup protrusion could be detected on the three different planes, at five different sites, in six (15%) AHCR cases and two (5%) HHCR cases. Nine cups (22.5%) overhung in AHCR, but not in HHCR, and three cups (7.5%) were not prominent regardless of the configuration. Considering every site, there was minimal agreement in terms of cup overhanging between the two configurations (k = 0.31, p = 0.07).
For superior cup overhanging on the frontal view (at the center of the cup), no agreement between the two configurations could be detected (k = 0.03, p = 0.78) and no significant difference between the two configurations could be observed (p = 0.45).
Anterior cup overhanging on the sagittal view (at the center of the cup) was not detected in 18 (45%) AHCR cups and 32 HHCR cups (80%). Nine (22.5%) AHCR and five (12.5%) HHCR cups overhung between 6-11 mm, and only one (2.5%) AHCR cup protruded more than 12 mm. No agreement between the two configurations could be detected (k = 0.14, p = 0.36) and a significant difference between the two configurations could be noted (p = 0.02).
No anterior cup overhanging on the axial view at the superior edge of the cup was detected in 23 (57.5%) AHCR and 31 (77.5%) HHCR sockets. The cup protruded 6-11 mm in twelve (30%) AHCR and one (2.5%) HHCR cases, while five (12.5%) AHCR and three (7.5%) HHCR sockets overhung more than 12 mm. There was minimal agreement between the two configurations (k = 0.28, p = 0.008) and a significant difference between them (p < 0.001) (Figure 2).

Anterior cup overhanging on the axial view in the middle of the superior half of the cup did not occur in 16 (40%) AHCR and 29 (72.5%) HHCR cases. Cups protruding 6-11 mm totaled 17 (42.5%) in the AHCR cohort and 6 (15%) in the HHCR group, whereas only one (2.5%) AHCR socket protruded more than 12 mm. There was moderate agreement between the two configurations (k = 0.32, p < 0.001) and a significant difference between them (p < 0.001) (Figure 3).

Anterior cup overhanging on the axial view at the center of the cup was not detected in 17 (42.5%) AHCR and 31 (77.5%) HHCR sockets. Prominence between 6 and 11 mm occurred in nine (22.5%) AHCR cups and one (2.5%) HHCR socket. There was minimal agreement between the two configurations (k = 0.23, p = 0.01) and a significant difference between them (p < 0.001) (Figure 4).
Discussion
Cup protrusion on at least one plane occurred frequently in high riding hip dysplasia, regardless of the hip center of rotation. AHCR and HHCR cup configurations led to different patterns of cup prominence; sockets in AHCR had higher rates of protrusion on the sagittal view and on the axial view, at three different sites. Moreover, the AHCR sockets had higher percentages of severe protrusion, in terms of magnitude.
The study has notable limitations. First, there is no assessment of the clinical relevance of cup overhanging: cup prominence is a necessary, but not a sufficient, predisposing factor leading to psoas impingement [4,9]. Second, no anatomic description of the anterior acetabular wall was provided, thus the shape of the psoas valley (which is another notable causative factor of iliopsoas tendonitis) was not investigated [10,11]. Moreover, the CT evaluation of cup prominence was not based on fixed bony acetabular landmarks, as recommended by Brownlie et al. to reduce measurement mismatches to a minimum [12]. On the other hand, this is the first study providing data about cup overhanging in high riding dysplasia, with two cup configurations simulated in the same hip; this condition cannot be reproduced in a clinical setting, and comparative clinical trials about dysplasia are impaired by the unique anatomic features of every hip. Similarly, the severe anatomic variations of high-grade dysplastic hips precluded the use of standard anatomic landmarks and the description of minute findings, such as the psoas valley. This drawback was partially obviated by providing multiple evaluation sites on the axial view, covering many of the possible positions of psoas impingement [10][11][12].
It is well known that acetabular dysplasia, with its poor anterior-lateral bone stock and relative acetabular retroversion, may challenge cup positioning and may predispose to anterior cup overhanging; however, the influence of cup center height on anterior cup protrusion has not been established [10,11]. In this study, in every hip of a random pool of 40 Crowe III and IV hips, CT simulations of cup placement in the anatomic and high hip center were performed. A low threshold for the HHCR definition was adopted (20 mm) [6]. Cup positioning was calibrated, aiming to reduce anterior cup overhanging to a minimum while respecting the posterior-medial bone stock and achieving an acceptable combined anteversion. After the simulations, HHCR cups proved to be significantly less anteverted, less medialized and less caudal than AHCR sockets (with no significant differences in cup size). Medialized and buried cups are less likely to cause protrusion and psoas impingement; thus, potentially, AHCR sockets would have been less likely to overhang [3,4,[10][11][12]. However, despite the more favorable cup position, AHCR cups were more frequently prominent on at least one plane (92.5% vs. 77.5%). Moreover, AHCR cups protruded significantly more than HHCR sockets in the sagittal and axial planes (at every site), usually with greater magnitude. In four sites out of five, the two cup configurations had no or minimal agreement, demonstrating the striking difference between the two configurations in terms of cup overhanging. Thus, a significant difference in cup prominence was detected when two cup configurations differing in hip center height were adopted. The high hip center, which provides superior-lateral bony coverage in DDH to support primary socket stability, was also effective in reducing the rates of anterior overhanging in comparison with the anatomic center. This difference seems to hold regardless of the individual acetabular morphology. However, the clinical impact of cup protrusion on iliopsoas impingement is still debated in DDH. While some authors noticed this association in a clinical setting (investigating selected cases of iliopsoas impingement after THA), Zhu et al. did not report an increased rate of iliopsoas tendonitis after THAs for dysplasia [5,11]. In a comparison between THAs in dysplastic (Crowe II, III and IV) and non-dysplastic hips at a mid-term follow-up, the dysplastic cups, which were placed in an anatomic or slightly elevated position, did not result in higher rates of iliopsoas impingement (2.6% versus 0.8%, respectively); however, the case series was quite modest in size [5]. Thus, it can be concluded that, according to Dora et al. and Cyteval et al., cup protrusion is a necessary, but not sufficient, predisposing factor in iliopsoas impingement after THA; in DDH, anatomic cup position may lead to an increased rate of cup overhanging and may consequently raise the clinical risk for iliopsoas impingement. However, the choice of the center of rotation height should be carefully evaluated. A high hip center has been associated with increased polyethylene wear, acetabular loosening, dislocation rate and inferior clinical outcomes [6,13]. Other authors achieved satisfying long-term outcomes in severe DDH using a slightly elevated center of rotation and appropriate offset reconstruction [6,[14][15][16].
While the appropriate center of rotation height in DDH has not been established, there is no evidence for the routine adoption of a high hip center, and the possible anecdotal risk of iliopsoas impingement is not sufficient to shift the current paradigm.
Iliopsoas tendonitis should be considered a clinical consequence of cup overhanging in specific acetabular morphologies. Thus, while choosing the appropriate cup placement in high-degree DDH (namely considering cup stability, bone stock preservation, biomechanical reconstruction, offset restoration and wear reduction), anterior cup overhanging should be checked, especially when the center of rotation height is < 20 mm. When anterior cup overhanging occurs, appropriate measures to reduce the risk of iliopsoas impingement should be taken according to the specific case, such as asymmetrically curved cups, cup reorientation or intra-operative iliopsoas release.
Recanalization Rates Decrease with Increasing Thrombectomy Attempts
BACKGROUND AND PURPOSE: Use of the Merci retriever is increasing as a means to reopen large intracranial arterial occlusions. We sought to determine whether there is an optimum number of retrieval attempts that yields the highest recanalization rates and after which the probability of success decreases. MATERIALS AND METHODS: All consecutive patients undergoing Merci retrieval for large cerebral artery occlusions were prospectively tracked at a comprehensive stroke center. We analyzed ICA, M1 segment of the MCA, and vertebrobasilar occlusions. We compared the revascularization of the primary AOL with the number of documented retrieval attempts used to achieve that AOL score. For tandem lesions, each target lesion was compared separately on the basis of where the device was deployed. RESULTS: We identified a total of 97 patients with 115 arterial occlusions. The median number of attempts per target vessel was 3, while the median final AOL score was 2. Up to 3 retrieval attempts correlated with good revascularization (AOL 2 or 3). When ≥4 attempts were performed, the end result was more often failed revascularization (AOL 0 or 1) and procedural complications (P = .006). CONCLUSIONS: In our experience, 3 may be the optimum number of Merci retrieval attempts per target vessel occlusion. Four or more attempts may not improve the chances of recanalization, while increasing the risk of complications.
The Merci retriever was the first device cleared by the US Food and Drug Administration for mechanical thrombectomy of intracranial occlusions, 1,2 with more use in clinical practice than any other mechanical approach. The Merci retriever (Concentric Medical, Mountain View, California) is aimed at reopening large proximal intracranial arterial occlusions due to thromboembolism. Because the technique of mechanical thrombectomy is still in its infancy, ways to optimize its use and technical variables leading to success continue to emerge. One of these aspects centers on whether there is an optimum number of retrieval attempts that yields the highest recanalization rates for the average thromboembolic occlusion. Not uncommonly, our stroke team experiences cases of occlusions that are "resistant" to all recanalization attempts. Such an occlusion fails to recanalize regardless of the number of retrieval attempts. These instances may represent impacted and well-organized clot, underlying stenosis, or both. They may also signify iatrogenic uncorrectable injury to the vessel by the device. In such situations, mechanical thrombectomy alone tends to be unfruitful. In this article, we attempt to identify the optimum number of passes that usually reopens the typical arterial occlusion in acute ischemic stroke, essentially dichotomizing such occlusions from "resistant" ones. Exceeding this number might thus indicate to the operator that the probability of recanalization will decrease with increasing attempts.
Materials and Methods
All consecutive patients undergoing Merci retrieval for large-vessel occlusions were prospectively entered into the data base of our institution according to protocol approved by our local institutional review board. All patients or their proxies gave written informed consent. We included the intracranial ICA, the M1 segment of the MCA, and vertebrobasilar occlusions. Patients with tandem lesions were included if at least 1 of the occlusions involved the intracranial ICA, M1 segment of the MCA, or intracranial vertebral or basilar arteries.
We recorded the final AOL score from the angiogram. This system assigns a score of 0 to represent no recanalization; 1, incomplete or partial recanalization with no distal flow; 2, incomplete or partial recanalization with any distal flow; and 3, complete recanalization with any distal flow. 3 The number of retrieval attempts was then recorded from the report. If the report did not reflect the total number of attempts, the number of passes was deduced from the angiogram itself. The case was excluded if the report did not document the total number of passes and if the number of passes was unclear after reviewing the angiogram.
If the final AOL score was the same as that achieved on a previous attempt, the sequential number of that previous interim pass was documented as the final outcome number of attempts and not the total documented number of passes. For instance, if an AOL score of 2 was achieved on the third recanalization attempt and 2 more interim passes were made to no avail, 3 was recorded as the final outcome number of attempts, not 5. If the interim AOL score achieved after each pass was not specified in the report, the angiogram was reviewed. If the interim score was unclear after angiogram review as well, the total number of passes was used.
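The counting rule just described can be made concrete with a short sketch; the function below is hypothetical, written only to restate the rule, and is not code used in the study:

def final_outcome_attempts(interim_aol_scores):
    # Interim AOL (0-3) recorded after each pass, in chronological order.
    # The final outcome attempt number is the earliest pass that first
    # reached the final AOL score, so futile later passes are not counted.
    final_score = interim_aol_scores[-1]
    return interim_aol_scores.index(final_score) + 1  # 1-based pass number

# Example from the text: AOL 2 reached on pass 3, two more passes to no avail
print(final_outcome_attempts([0, 1, 2, 2, 2]))  # -> 3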
In situations with tandem lesions, each target lesion was compared separately on the basis of where the retriever was deployed. For instance, if the retriever was deployed distal to an M1 occlusion when both an M1 and ICA thrombus were present and 3 passes were made in the same manner, 3 final outcome attempts were recorded for both the M1 and the ICA.
The patient's age, sex, and time to first retrieval attempt were compared between the groups with similar final AOL scores (0-1 versus 2-3), as well as the rate of pre-Merci use of intravenous and concomitant infusion of intra-arterial thrombolytics (for this cohort, solely recombinant tissue plasminogen activator). We also compared the groups with respect to the premorbid medical conditions of hypertension, diabetes mellitus, dyslipidemia, peripheral vascular disease, and atrial fibrillation. Patients with tandem lesions were only included in these comparisons if both occlusions had similar final AOL scores.
We constructed a decision tree separating 3 possible outcomes per retrieval attempt: successful result with completion of the procedure (or continuation without further success), unsuccessful attempt with continuation of the procedure, and unsuccessful attempt with termination of the procedure (Fig 1). From a decision-making perspective, we made 2 primary comparisons between the number of the pass and the recorded final AOL score to determine if an optimum number of attempts yielded the highest rate of AOL scores of 2 or 3. Following any given number of previous attempts, we first determined whether the prospective decision to proceed with a subsequent attempt could result in an improved recanalization result. Second, we performed an analysis of proportions to determine the odds of a prospective prespecified number of attempts resulting in a recanalization equivalent to that achieved by 6 attempts (the maximum number of attempts used in our cohort). In this comparison, the lowest number of attempts resulting in a rate of good recanalization that did not significantly differ from the rate following ≥1 attempt exceeding this number was then designated the "optimal" number.
We then separated the entire study group into 2 groups: those achieving good recanalization by the optimal number of attempts (optimal group) and those that did not recanalize or did so after more than the optimal number of attempts (suboptimal group). For tandem lesions, we included only those patients who were either optimal or suboptimal for both lesions. We compared the 2 groups with respect to demographic data, time to first retrieval attempt, and use of pre-Merci or concomitant thrombolytics. We compared the rate of procedural complications, defined as blood or contrast extravasation into the subarachnoid space, intraventricular hemorrhage or air embolism on postretrieval CT, vessel rupture, dissection, and device fracture. The rate of parenchymal HT was compared between groups according to the previously established European Cooperative Acute Stroke Study definition. 4 Statistical analyses for categorical variables included the χ² test, the Fisher exact test when cell sizes were small, and ORs for selected comparisons. Median values with IQR were calculated for the number of retrieval attempts and AOL scores. Unevenly distributed data were compared by using the Mann-Whitney U test. All analytic procedures were conducted in R, Version 2.8.0. 5
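For readers who want to reproduce the flavor of these calculations (the authors used R 2.8.0; the Python below is an independent, hypothetical re-expression with made-up counts), an odds ratio with a 95% Wald confidence interval and a Fisher exact p-value can be computed from a 2 × 2 table as follows:

import math
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = two attempt strategies,
# columns = good (AOL 2-3) versus poor (AOL 0-1) recanalization
a, b = 30, 16
c, d = 22, 24

odds_ratio = (a * d) / (b * c)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)            # SE of the log odds ratio
lower = math.exp(math.log(odds_ratio) - 1.96 * se)
upper = math.exp(math.log(odds_ratio) + 1.96 * se)
_, p = fisher_exact([[a, b], [c, d]])
print(f"OR = {odds_ratio:.2f} (95% CI {lower:.2f}-{upper:.2f}), p = {p:.3f}")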
Results
We identified a total of 97 patients with 115 proximal arterial occlusions. The median number of attempts per target vessel was 2.5 (IQR, 2), while the median final outcome AOL score was 2 (IQR, 2). There was no difference between the various final outcome AOL scores with respect to age, sex, premorbid medical history, time to first Merci pass, procedural complications, or use of concomitant intravenous or intra-arterial thrombolytics (Table 1). The final outcome AOL scores are displayed with respect to each final outcome attempt number in Table 2. The results of the 2 primary comparisons were concordant. Up to 3 retrieval attempts correlated with AOL scores of 2 or 3. There was a substantial chance that proceeding to a third attempt would result in a better overall recanalization result than that achieved if the procedure was stopped after 2 attempts (65.2% versus 49.6%; OR, 1.90; 95% CI, 1.12-3.25). Proceeding to a fourth or any subsequent attempt would not produce an increased rate of AOL 2 or 3 scores (Table 3). Similarly, there was a high likelihood that better recanalization would result from 6 attempts when compared with a prespecified total of ≤2 pulls. After separating the cohort into 2 new groups, those achieving good recanalization by 3 attempts and all others, we found no difference in demographic data, premorbid medical conditions, time to first Merci pass, or use of concomitant thrombolytics. Procedural complications occurred more frequently when ≥4 passes were attempted (42.4% versus 14.8%, P = .006). Thirteen patients experienced parenchymal HT. This was evident on immediate postprocedural imaging in 3 cases, while the remainder experienced delayed parenchymal HT within the 72 hours following intervention. In most of these cases, HT occurred between 24 and 48 hours after attempted thrombectomy. Parenchymal HT rates did not vary when groups were stratified according to final outcome AOL score or by the number of retrieval attempts.
Discussion
An optimal number of attempted retrievals with the Merci device has not yet been demonstrated. The initial Mechanical Embolus Removal in Cerebral Ischemia trial was limited to 8 attempts per case, which remains the limit at our institution. However, experience has shown that fewer attempts are often sufficient. After the first several pulls, the operator often has an idea whether the target occlusion will ever be successfully recanalized. Much of this is the sensation transmitted through the retriever as well as how and where the coils of the retriever unravel on repeated passes. Anecdotally, some occlusions are suspected to be "resistant" when the transmitted sensation is "gritty" or when the retriever consistently unravels in the same location while stretching the entire vessel. These cases are likely responsible for most of those requiring more than the optimal number of passes and are ultimately unlikely to recanalize with mechanical thrombectomy alone. These instances may be ones involving impacted clot, underlying stenosis, or both. Alternatively, these "resistant" cases may represent iatrogenic injury. The device may cause dissection not detectable on angiography, or the endothelium may be injured and the natural antithrombogenic mechanism of the artery disrupted, leading to ongoing in situ thrombosis. Because the optimal number of passes represents the ideal case of the typical embolic occlusion, exceeding this number may indicate that the operator is dealing with such a "resistant" occlusion. Knowledge of such an optimum number might thus signal a point of diminishing returns and increasing complications.
We demonstrated that 3 Merci retrieval attempts tended to yield improved recanalization. After 3 attempts have been made, there is a low probability that a subsequent attempt will improve the recanalization result obtained after 3 passes. Similarly, prespecifying a limit of ≤3 attempts at the start of an intervention will likely result in an outcome equivalent to that of a procedure with no imposed limit. Nearly one-third (18/57) of patients proceeding to 3 attempts experienced successful recanalization, contributing an additional 13% to the overall rate of success.
Examination of the rates of recanalization (Fig 3) with aggregated attempts corroborates our evidence that diminishing returns appear to occur after 3 pulls. A good recanalization rate of 47.4% followed 2 attempts and 65.2% followed 3, while each subsequent attempt produced only an additional 7%, 2%, and 1% of successful outcomes, respectively.
We demonstrated that the rate of procedural complications increased when the optimal number was exceeded, regardless of final recanalization. Thrombectomy with the Merci retriever can disrupt the endothelium, particularly when an impacted clot is in the M1 segment or beyond. In such a situation, the absence of any points of dural fixation allows the vessel to stretch and likely disrupts the basement membrane in the subarachnoid space. Although the most common finding on postthrombectomy CT scans is subarachnoid extravasation thought to be contrast material, this extravasation is likely to have a hemorrhagic component as well, and thus no difference was drawn between subarachnoid hemorrhage and contrast extravasation. The clinical significance of this postthrombectomy CT finding is unclear at this time. Similarly, because clinical outcome was not a measure in this study, it is unclear whether the higher rate of the other procedural complications of dissection, device fracture, intraventricular hemorrhage, and perforation had a significant impact on morbidity and mortality.
The optimal number we describe should be considered for each target lesion. As is sometimes the case, more proximal larger lesions (such as an ICA or basilar occlusion) tend to be less impacted and can be completely recanalized, 6,7 yet they produce distal embolism that subsequently requires follow-up mechanical retrieval. Similarly, tandem lesions may initially require a larger Merci retriever to clear the proximal thrombus to allow a smaller device to access the distal thrombus. We attempted to account for both of these situations in our analytic design. By counting the attempts per each target lesion, we can more accurately describe the thrombectomy characteristics of the individual lesion rather than the overall difficulty of revascularizing the patient, much as we are addressing the AOL score and not the overall reperfusion. Thus, in the situation in which a tandem ICA occlusion occurs proximal to an M1 stenosis with superimposed clot, analysis of the characteristically easier ICA recanalization would likely reveal a positive recanalization outcome and a corresponding lower number of attempts, while the more difficult, if not impossible, distal lesion might demonstrate less favorable results.
One angiographic aspect that was not a planned analysis in our comparison is the specific thrombus location in the subgroup that had occlusions involving the M1 segment. Authors have shown that the more proximal the MCA occlusion, the less likely it is to recanalize. 8 This finding may have had an unforeseen impact on our results if there was an uneven distribution of proximal M1 occlusions in the suboptimal group.
Our study is intended to characterize recanalization and not overall reperfusion and thus compares only the AOL score and not the Thrombolysis in Cerebral Ischemia grade. 9,10 Even though they conceptually complement one another, we chose to assess only the AOL score because it more accurately describes the success in retrieving or macerating thrombus at a given location. Although individual lesion characteristics are best described with the AOL score, its main weakness is the broad range of each score and poor correlation with overall reperfusion. 10 For instance, an AOL score of 2 can represent anywhere from sluggish antegrade flow through a subocclusive thrombus to rapid distal perfusion in the presence of minimal residual thrombus. Another inherent weakness that we could not correct for is a volume effect. The increasing volume at our institution during the past several years reflects the development of an extensive stroke network in the surrounding communities, mandatory diversion, and the rising public and Emergency Medical Service awareness of the availability of techniques such as mechanical thrombectomy. The annual number of strokes treated with the Merci device at our institution was 10 in 2003 and 25 in 2006. This potential volume effect may improve recanalization rates overall.
Although the same operators were involved for the entirety of the data-collection period, our study also does not account for improvements in thrombectomy technique and experience with time. Currently, our numbers are not large enough to detect improvement in recanalization results with time. Our study also does not account for advances in device technology such as those demonstrated in the Multi MERCI trial. 11
Conclusions
We describe a dichotomy in the ease of recanalization of typical occlusions and more resistant ones and define an optimum number that differentiates these 2 groups. Because time is essential in acute stroke therapy, awareness of this difference may prompt the endovascular operator to consider switching to another means of revascularization such as intracranial angioplasty and/or stent placement when thrombectomy fails after the third pass.
Effect of plant hormones and zinc sulphate on rooting and callus induction in in vitro propagated Coscinium fenestratum (Gaertn.) Colebr. stem and their role in estimation of secondary metabolites
Coscinium fenestratum (CF) (Gaertn.) Colebr. is a hard woody climbing medicinal plant belonging to the family Menispermaceae. The plant is commonly known as tree turmeric because the stem contains yellow berberine, the major active constituent. The plant is widely distributed in the Western Ghats region but has become endangered due to overexploitation and a very slow germination rate. Hence, an alternative in vitro tissue culture method was established to overcome the poor germination rate and to extract larger amounts of plant constituents from the stem callus. Full-strength MS medium supplemented with 2,4-D at 0.1 mg/l and kinetin at 2 mg/l gave callus growth in 46 days, and the growth mechanism was observed for the first time through an SEM study of the callus. Thereafter, rooting of the callus occurred with combinations of IBA and kinetin in half-strength MS medium, whereas direct rooting of the stem occurred within 18 days with IBA and zinc sulphate (IBA at 2 mg/l and ZnSO4 at 3 mg/l) supplemented in half-strength MS medium along with coconut water. The extracted callus constituents were identified by TLC and then estimated by HPLC and HPTLC; the methanol extract of CF showed a higher berberine content than the aqueous extract.
Introduction
Plant tissue culture is a long-established practice for the in vitro development of desired plant parts, from which increased amounts of plant secondary metabolites can be isolated very easily. The main objective of plant tissue culture is to develop plantlets or callus for plant species that are endangered or threatened due to overexploitation, or whose seed germination takes a very long time, and to enhance the active constituents by sub-culturing as many times as required. This method makes it possible to grow plants aseptically and to repeatedly multiply young parenchymatous cells that accumulate impurity-free major plant constituents, so that isolation of the same becomes very easy.
With the above objectives in view, the present study selected a plant species that is endangered but economically important, i.e., Maramanjal. Scientifically, the plant is known as Coscinium fenestratum (CF) (Gaertn.) Colebr. (Family: Menispermaceae) and is a large dioecious woody climber. The plant is indigenous to the Indo-Malayan region, distributed in Sri Lanka, India, Malaysia, Vietnam, Myanmar, Singapore and Thailand (Tushar et al., 2008). In India, the plant is widely spread across the Western Ghats regions of Tamil Nadu (Kanniyakumari, Tirunelveli and Nilgiri districts), Kerala (Thiruvananthapuram, Wynaad, Thrissur, Idukki and Palakkad districts) and Karnataka (Kodagu, Udupi, Dakshina and Uttara Kannada districts) (Sumy et al., 2000; Mohanan and Sivadasan, 2002). This tree is also known as False Calumba or Tree Turmeric due to the presence of the yellow-colored alkaloid berberine (an isoquinoline), a medicinally active compound with numerous bioactivities (Jayaweera, 2006; Warakagoda and Subasinghe, 2014). Other constituents like protoberberine, jatrorrhizine, magnoflorine, berberrubine, thalifendine, palmitine, sitosterol, palmitic acid, oleic acid and oxyberberine are present in the stem and roots of this plant (Siwon et al., 1980; Pinho et al., 1992; Agusta, 2003; Anonymous, 2005). Owing to these various constituents, the plant has various therapeutic uses, such as treating digestive disorders, chronic fevers, wounds, ulcers, jaundice, burns, skin diseases, abdominal disorders, diabetes, fever and general debility (Warrier et al., 1994; Agusta, 2003). Apart from that, it is also used in the cosmetic industry and in other ayurvedic products (soap, bath gels, face wash and bath oil), and has antidiabetic (Shirwaikar et al., 2005), anti-inflammatory (Caius, 1992), antioxidant and anthelmintic activities (Das et al., 2018). These applications are possible only when the plant is grown abundantly and sufficient raw material is procured; the dried woody stem is in high demand in the crude drug market. Generally, propagation occurs through the sexual method, i.e., naturally by seeds, but the plant takes around 13-15 years to mature. Seed germination takes a long time (6-8 months) due to the hard seed coat (Harinarayanan et al., 1994; Tushar et al., 2008), and hence seed germination was found to be around 30%. A research article revealed that around 12% of the fresh fruits contain non-viable seeds (Senerath, 1991), and vegetative propagation through stem cuttings is also unsuccessful (Gunatillake et al., 2002). All of these together result in a relatively slow growth rate, while degradation of natural habitats, habitat specificity, lack of proper domestication through cultivation, and illegal overexploitation with destructive collection threaten natural populations. These difficulties can only be overcome through in vitro cultivation. There are a few reports on cell and suspension culture of this plant by which shoots and roots were developed (Parthasarathy, 2007; Staden et al., 2008), which also reported an increased amount of berberine content, estimated by the HPLC method (Talat et al., 2009; Senarath, 2010).
In another study, it was revealed that petiole and leaf explants of CF formed callus on vermicompost extract media along with coelomic fluid (Kashyap et al., 2016), but no literature is available on the development of stem callus on MS medium containing coconut water in combination with plant growth hormones, on the impact of zinc sulphate on early rooting of the stem, or on estimation of the berberine content through HPLC and HPTLC methods. Further, there are very scanty or no reports of SEM (Scanning Electron Microscopy) studies on the proliferation of parenchymatous cells during callus growth of explants. In view of that, the present study was undertaken to establish the stem callus in MS medium supplemented with various concentrations of plant hormones, followed by an SEM study of the mechanism of callus growth and estimation of the berberine content through various chromatographic methods such as TLC, HPLC and HPTLC.
Plant material, surface sterilization and explants selection
Seeds of C. fenestratum were collected from Dr. P.E. Rajasekharan, Principal Scientist, Plant Biotechnology Department, Indian Institute of Horticultural Research, Hessaraghatta, Bangalore. Prior to use, seeds were germinated ex vitro in plastic cup containers filled with a sand:coir dust (1:1) medium, placed inside the laboratory and treated with 0.5 g/l topsin fungicide solution to avoid fungal infection of the seeds (Figure 1). After 24 h, the seeds were surface sterilized, pretreated with various solutions such as 3% potassium nitrate, 2000 mg/l gibberellic acid (GA3) and 2250 mg/l GA3, and kept for germination (Figure 2). The germinated plants were used as explants for the present study.
Preparation of culture medium
Various combinations of MS media were prepared with double-distilled water, using a sugar concentration of 3 g/l along with different concentrations of the required growth hormones. Agar at 0.7% was used as the gelling agent. The media were then dispensed into culture tubes, 100 ml Erlenmeyer flasks and 200 ml bottles with 15, 40 and 60 ml of media and 2.5, 5 and 7.5 ml of coconut water, respectively, closed with plugs or Laxbro plastic caps and made airtight. They were then sterilized in an autoclave at 121°C and 103.4 kPa pressure for 20 min.
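The only nontrivial arithmetic in this step is diluting hormone stocks to the stated concentrations. As a purely illustrative aid (the stock concentration below is an assumption, not taken from the paper), the required stock volume can be computed as:

def stock_volume_ml(target_mg_per_l, medium_volume_ml, stock_mg_per_ml):
    # mg of hormone needed for the medium volume, then ml of stock to add
    needed_mg = target_mg_per_l * medium_volume_ml / 1000.0
    return needed_mg / stock_mg_per_ml

# Example: 2 mg/l kinetin in a 60 ml bottle from a 1 mg/ml stock
print(f"{stock_volume_ml(2, 60, 1.0):.3f} ml")  # 0.120 ml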
Preparation of explants for inoculation and incubation
The stems of the germinated explants were washed thoroughly with running tap water for 15 min and then treated with bavistin (0.1%) for 15 min. The treated plant materials were sterilized using 0.1% mercuric chloride for 5 min, followed by repeated washing with sterile double-distilled water under an aseptic environment. A known weight of the stem part was dissected and inoculated onto the medium. The inoculated tubes, flasks and bottles were then incubated in the culture room at 25 ± 2°C under fluorescent light with an intensity of 60 µE/m²/s. A photoperiod of 16 h of light and 8 h of darkness was maintained through an automatic timer, and a constant relative humidity of 65-75% was maintained using an air-cooling system.
Growth measurement
In each replication of the different treatments for an individual experiment, uniform explant tissues were inoculated and incubated at 25 ± 2°C. A fixed number of replications was recorded each time. The weight of the callus and its maintenance were also observed 60 days after inoculation and after every 16 days of sub-culturing, respectively.
(a) Effect of different combinations of auxin and kinetin on callus induction
Different strengths of MS basal medium along with auxins (IAA, IBA, NAA, 2,4-D) and a cytokinin (kinetin) at various concentrations were used in combinations. Stem segments of 1.5 to 2 cm length were sterilized and inoculated into culture tubes containing 15 ml of media. Observations were recorded after 45 days of inoculation.
(b) Effect of different combinations of auxin, cytokinins and ZnSO4 for the rooting of callus
Half-strength MS medium was prepared and supplemented with IBA at 0.1 mg/l and cytokinins at 1 and 2 mg/l in combinations. Further, IBA at various concentrations and ZnSO4 at 1, 2 and 3 mg/l were used for organogenesis of the callus, using coconut water in the media. Rooting of the callus under the different treatments, as well as direct rooting of the stem parts, was recorded over 60 days of observation.
Mechanism of proliferation of callus
Growth of callus occurs due to the multiplication of parenchymatous cells; hence, callus is known as an abnormal, uncontrolled growth of parenchyma tissue. This proliferation was clearly visible in the SEM study.
Preparation of extract for chemical analysis
The obtained calli (5 g, dried) were refluxed for 3-4 h with methanol and with water as solvents, and the extracts were analyzed for the various phytochemicals present in the calli as per the standard methods (Harborne, 1973; Evans, 2002).
TLC identification
The presence of berberine was identified in the methanol and aqueous callus extracts by using n-butanol, acetic acid and water (8:1:1) as the mobile phase and silica gel G as the stationary phase. Standard berberine hydrochloride was used for comparison. The plates were then derivatised in an iodine chamber (Figure 5).
Estimation of berberine from extracts
HPLC and HPTLC were carried out for estimation of the berberine content in both the extracted callus samples. HPTLC (CAMAG) was used for separation and estimation of the berberine content in both extracts, with n-butanol, acetic acid and water (8:1:1) as the mobile phase and silica gel GF254 as the stationary phase. The results were compared against standard berberine hydrochloride at a wavelength of 450 nm. Linearity was observed over the concentration range of 12-80 nanograms (r² = 0.9988). Sample and standard were prepared at 1 mg/ml concentration.
Further, an HPLC system (Shimadzu, India) was applied for estimation of the berberine content. The HPLC conditions were: column, SS Wakosil II C-18 (250 × 4.6 mm); mobile phase, methanol and water (90:10); flow rate, 1.0 ml/min; detection at 220 nm. Sample and standard were prepared at 1 mg/ml concentration using methanol as solvent, and the sample was further diluted to fall within the range of the standard area for calculation.
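The quantification in both HPTLC and HPLC rests on an external-standard calibration. The sketch below illustrates that logic with invented peak responses; only the 12-80 ng range is taken from the text:

import numpy as np

amount_ng = np.array([12.0, 24.0, 40.0, 60.0, 80.0])          # standards
peak_area = np.array([310.0, 605.0, 1010.0, 1520.0, 2015.0])  # made-up areas

slope, intercept = np.polyfit(amount_ng, peak_area, 1)  # linear calibration
predicted = slope * amount_ng + intercept
ss_res = np.sum((peak_area - predicted) ** 2)
ss_tot = np.sum((peak_area - peak_area.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                                # linearity check

sample_area = 1200.0                                    # hypothetical sample
sample_ng = (sample_area - intercept) / slope           # back-calculation
print(f"r2 = {r2:.4f}, sample = {sample_ng:.1f} ng on the plate")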
Statistical analysis
Callus growth was analyzed by one-way ANOVA, followed by Tukey's multiple comparison post hoc test; p < 0.05 was considered significant.
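A minimal sketch of the stated analysis, using invented callus-growth values (SciPy ≥ 1.8 provides tukey_hsd; this is not the authors' own analysis script):

import numpy as np
from scipy.stats import f_oneway, tukey_hsd

# Hypothetical callus lengths (cm) under three hormone treatments
t1 = np.array([5.2, 5.6, 5.4, 5.5])
t2 = np.array([2.0, 2.3, 2.1, 2.2])
t3 = np.array([3.1, 3.4, 3.0, 3.3])

f_stat, p = f_oneway(t1, t2, t3)   # one-way ANOVA across treatments
print(f"ANOVA: F = {f_stat:.2f}, p = {p:.4f}")
if p < 0.05:                       # significance threshold from the text
    print(tukey_hsd(t1, t2, t3))   # pairwise Tukey comparisons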
Seed germination
The seed of CF is very hard and the germination rate is also very slow due to the high alkaloid content of the seed. Hence, various techniques have been adopted earlier for breaking dormancy (Anil Kumar et al., 2010; Ramasubbu et al., 2012; Warakagoda and Subasinghe, 2015). The same methods were tried in this experiment, and GA3 at a dose of 2500 ppm showed a better result than the others: germination of the seed occurred faster, within 4 months, whereas a concentration of 3000 ppm showed lower germination, which is similar to the earlier reports.
Growth measurement

i. Effect of different combinations of auxin and kinetin on callus induction
The effect of different combinations of auxins and kinetin on callus induction of C. fenestratum in different strengths of MS medium is given in Table 1. The results show that callus growth is initiated in full-strength MS medium supplemented with 2,4-D at 0.1 mg/l and kinetin at 2 mg/l. The highest callus growth is observed in full-strength MS medium (5.48 cm), compared with half-strength MS medium (2.10 cm); negative results are not shown in Table 1. Khan et al. (2008) reported that a combination of 2,4-D, BAP and kinetin (2.0, 2.0 and 1.0 µM, respectively) in MS medium enhances callus production in C. fenestratum within 46 days. Our present study follows the same trend for callus development (Figure 3).
ii. Effect of different combinations of auxin, cytokinins and ZnSO4 for the rooting of callus

Half-strength MS medium supplemented with IBA at 0.1, 0.2, 0.3, 1, 2 and 3 mg/l and kinetin at 1 mg/l is used in combinations. Further, IBA at 2 mg/l and ZnSO4 at various concentrations (1, 2 and 3 mg/l) are used for organogenesis of callus, with coconut water added to the medium. IBA is effective for rooting of callus, whereas the addition of ZnSO4 at 3 mg/l along with IBA at 2 mg/l produces direct rooting of the stem within 18 days in half-strength MS medium mixed with coconut water (Table 2, Figure 4). A literature survey revealed that root initiation occurred after the third week of transferring shoots to the rooting medium and that the rooting percentage depends greatly on the strength of the MS medium (Senarath, 2010). In our results, rooting of the stem occurred within 18 days, a shorter time than reported earlier. This may be due to the added zinc sulphate, which acts synergistically with IBA. It has been reported that a 50 µM zinc sulphate concentration significantly boosts leaf area and both fresh and dry weight of shoot and root (Nejad et al., 2014), which is consistent with our present study.
Mechanism of proliferation of callus
Scanning electron microscopy (SEM) is carried out for the first time to observe the proliferation of parenchyma cells during callus growth; uncontrolled, abnormal growth is seen (Figure 5). Various magnification scales are used to observe the growth mechanism of the callus.
TLC identification of CF extracts
Two different extracts, methanolic and aqueous, are prepared; the percentage yield is higher for the methanol extract (24.8% w/w) than for the aqueous one (16.3% w/w), but both contain saponins, alkaloids, flavonoids, resins, etc. TLC is then carried out with various solvent systems, and n-butanol, acetic acid and water is found to be the mobile phase that separates and identifies the chemical constituents when compared with standard berberine hydrochloride. The Rf value, calculated after derivatisation in an iodine chamber, is 0.63 (Figure 6).
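For reference, the Rf value reported above follows from the standard definition below; the migration distances shown are hypothetical numbers chosen only to illustrate how a value of about 0.63 arises.

```latex
R_f = \frac{\text{distance travelled by the analyte spot}}{\text{distance travelled by the solvent front}}
    \approx \frac{4.4\ \mathrm{cm}}{7.0\ \mathrm{cm}} \approx 0.63
```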
Std = standard berberine; MECF = methanol extract of CF; AECF = aqueous extract of CF. Earlier scientific literature shows that berberine identification has been carried out using various TLC solvent systems (Rojsanga et al., 2006; Krishna et al., 2011); accordingly, we also used a different solvent system with a standardized ratio for the separation and identification of berberine in the CF extracts.
Estimation of berberine in the extract
The HPTLC method is applied for the separation and estimation of the berberine content in both MECF and AECF, using the same solvent system as in TLC. The Rf values coincide at 0.63 when scanned at 450 nm. The fingerprinting and the tracks are shown in Figures 7A-D. Several studies have also reported the estimation of berberine by HPTLC using various solvent systems (Rojsanga et al., 2006; Krishna et al., 2011; Jayaprakasam and Ravi, 2014).
HPLC method
Further, separation and estimation of berberine in the CF extracts are carried out by HPLC using a methanol-water solvent system with detection at 220 nm. The berberine content is estimated by matching the retention time of the sample with that of the standard (Rt = 3.074 min) (Figures 8a-c). From the HPTLC and HPLC chromatograms, the berberine content is estimated for both extracts, and the results are tabulated in Table 3.
Table 3. Berberine content (mg) and %RSD for each extract and method.
From these two methods, it is observed that the methanol extract of CF shows a higher berberine content than the aqueous extract. HPLC gives a higher berberine content (1.34 mg) than HPTLC (1.23 mg) for MECF. Earlier literature also reported that methanolic extracts show a higher percentage of berberine than other extracts (Jayaprakasam and Ravi, 2014). Furthermore, the percentage yield of extract is also higher for methanol than for other solvents (Arawwawala and Wickramaarachchi, 2012; Akowuah et al., 2014), and the same trend is followed in the present study, where the methanol extract of callus shows a higher percentage yield than the aqueous extract. It has also been shown that the content of active constituents depends on the percentage yield (Dent et al., 2013; Das et al., 2016; Das et al., 2017), and our present study reports similar results, showing a direct correlation between yield and the amount of active constituents.
Conclusion
The present study established in vitro culture of C. fenestratum stem in full-strength MS medium supplemented with various plant growth regulators (a combination of 2,4-D at 0.1 mg/l and kinetin at 2 mg/l) within 46 days. For the first time, an SEM study documents the mechanism of callus growth via abnormal proliferation of parenchyma cells. Organogenesis of stem callus developed under combinations of IBA and kinetin in half-strength MS medium. Further, direct rooting of the stem in half-strength MS medium is observed within 18 days when IBA is combined with zinc sulphate along with coconut water in the medium (IBA 2 mg/l and ZnSO4 3 mg/l). The grown callus is extracted with methanol and aqueous solvents, and the berberine content is estimated by HPLC and HPTLC; the methanol extract shows a higher berberine content than the aqueous extract. Further study on hardening of the plants in large quantities is required to convert this endangered species into a cultivated one and to discover new biomolecules for various therapeutic activities.
The emerging role of E3 ubiquitin ligase RNF213 as an antimicrobial host determinant
Ring finger protein 213 (RNF213) is a large E3 ubiquitin ligase with a molecular weight of 591 kDa that is associated with moyamoya disease, a rare cerebrovascular disease. It is located in the cytosol and perinuclear space. Missense mutations in this gene have been found to be more prevalent in patients with moyamoya disease compared with that in healthy individuals. Understanding the molecular function of RNF213 could provide insights into moyamoya disease. RNF213 contains a C3HC4-type RING finger domain with an E3 ubiquitin ligase domain and six AAA+ adenosine triphosphatase (ATPase) domains. It is the only known protein with both AAA+ ATPase and ubiquitin ligase activities. Recent studies have highlighted the role of RNF213 in fighting against microbial infections, including viruses, parasites, bacteria, and chlamydiae. This review aims to summarize the recent research progress on the mechanisms of RNF213 in pathogenic infections, which will aid researchers in understanding the antimicrobial role of RNF213.
Introduction
1.1 RNF213 and molecular structure

A comprehensive understanding of the structure and chemical activity of RNF213 is essential to fully comprehend its functions. Ahel et al. employed cryo-electron microscopy (Cryo-EM) analysis to obtain a detailed structure of the full-length mouse RNF213, which has a molecular weight of 584 kDa and consists of 5,148 amino acid residues (Ahel et al., 2020). This structure closely resembles the human homolog. The mouse RNF213 protein is composed of six ATPase domains, an E3 ubiquitin ligase domain, and a RING domain (Pollaci et al., 2022). The N-arm of the RNF213 protein (residues 1-1,290) is connected by a linker domain (1,774) to the six AAA+ units (1,405). The dynein-like core, which is responsible for the ATPase activity of RNF213, is composed of two catalytically active and four inactive AAA+ domains. The central region of the protein consists of a hinge domain (3,588) that links the AAA+ core to the E3 domain (3,926). The E3-RING domain (3,999), which sits on top of the E3 scaffold made up of the E3-back, E3-shell, and E3-core domains, is responsible for the ubiquitin ligase activity of RNF213. The C-terminal domain is located at the edge of the protein (Figure 1). Thus, RNF213 combines a dynein-like core function with a unique ubiquitin transfer function (Ahel et al., 2020). As a giant cytosolic protein, RNF213 can form a ring-shaped homo-oligomer inside the cell, while a portion of RNF213 diffuses as a monomer (Morito et al., 2014). In addition to its RING-mediated ubiquitin ligase activity, RNF213 can also promote ubiquitin transfer through a transthiolation reaction, a form of ubiquitination that does not depend on the RING structure. What sets RNF213 apart is that it is the only known protein to possess both AAA+ ATPase and ubiquitin ligase activities. The distinctive structure of RNF213 suggests that it has multiple functions.
RNF213 and vasculopathy
As the first susceptibility gene to be identified for MMD, RNF213 has been extensively studied in the context of angiopathy. MMD is a rare cause of stroke characterized by progressive stenosis of the terminal internal carotid artery and the development of compensatory capillary collateral branches, which can be observed radiologically (Ihara et al., 2022). The RNF213 gene is encoded by an ORF that is 15,624 bp long and located on chromosome 17q25.3, with untranslated regions (UTRs) of 5,431 bp in length at the 5′ and 3′ ends. In 2011, two independent Japanese studies found that the p.R4810K variant of RNF213 increases the risk of MMD by over 100-fold, based on analyses of patient families (Kamada et al., 2011; Liu et al., 2011). Missense mutations in this gene are significantly more common in patients with MMD than in the general population, with the RNF213 R4810K mutant being a particularly prevalent sequence variant (also called p.R4810K, c.14429G>A, rs112735431). Subsequently, numerous follow-up studies have investigated the association between p.R4810K and MMD in East Asian populations. Comprehending the molecular function of RNF213 could significantly enhance our understanding of MMD (Asselman et al., 2022).
MMD is more likely to develop in genetically predisposed individuals (Liu et al., 2011; Takamatsu et al., 2017). However, non-p.R4810K rare missense variants have been found to be significantly associated with MMD in Caucasian patients, with the variants preferentially clustering in a C-terminal hotspot that encompasses the RING-finger domain of RNF213 (Guey et al., 2017). This suggests that there may be other genetic risk factors for MMD beyond the Asian RNF213 p.R4810K variant, such as p.E4950D and p.A5021V (Liao et al., 2017). In addition, several studies have suggested that the p.R4810K variant of RNF213 is associated with non-MMD intracranial major artery stenosis/occlusion (non-MMD ICASO) (Cheng et al., 2019). Kobayashi and colleagues have conducted in vitro and in vivo experiments to characterize p.R4810K biochemically and functionally in angiogenesis, demonstrating that the upregulation of RNF213 can be induced by inflammatory signals and that the p.R4810K polymorphism leads to a decreased tube-forming ability in response to environmental stimuli (Kobayashi et al., 2015). There is growing evidence that RNF213 and its mutants are linked to vasculopathy with a range of clinical presentations that include MMD as well as other intracranial and systemic vasculopathies (Hiraide et al., 2022). Although the connection between RNF213 mutations and vascular disorders has been highlighted, recent research suggests that the ubiquitination activity of RNF213 plays an essential role in host defense against microbial infections.
FIGURE 1
Domain architecture of RNF213. The N-arm (residues 1-1,290) is connected by a linker domain (1,291-1,774) to the six AAA+ units (1,405). Two of these AAA+ units are catalytically active, and the other four are inactive, comprising the dynein-like core. The central region is composed of a hinge domain (3,588) that connects the AAA+ core to the E3 domain (3,926). The ubiquitin ligase activity of RNF213 is expected to depend on the E3-RING domain (3,999). The last CTD is located at the edge of the molecule (4,148).
RNF213 and ubiquitination

Ubiquitin is a small, highly conserved protein expressed in all eukaryotes (Goldstein et al., 1975). The ubiquitin sequence contains seven lysine sites (K6, K11, K27, K29, K33, K48, and K63), a methionine at the N-terminus (Met1, also called M1 or linear), and a glycine at the C-terminus (G76) (Wilkinson and Audhya, 1981). The linear polyubiquitin chain, also known as M1 linkage, is assembled on proteins by the linear ubiquitin chain assembly complex (LUBAC), which is crucial in various signaling pathways (Rieser et al., 2013). RNF213, as a ubiquitin ligase, plays an essential role in generating the ubiquitin coat enabled by its unique structure (Zheng and Shabek, 2017). The process of ubiquitination involves the catalytic actions of ubiquitin-activating enzymes (E1), ubiquitin-conjugating enzymes (E2), and ubiquitin-protein ligases (E3). The specific biological functions of ubiquitin can be dictated by individual E2 enzymes, as the interaction between E2 and E3 determines the ultimate substrates that will be covalently bound by the E2 enzyme (Winkler and Timmers, 2005). Through yeast two-hybrid screening using a fragment containing the RING domain of RNF213 as bait, researchers identified UBC13 (UBE2N) as an E2 ubiquitin-conjugating enzyme for the RNF213 E3 ubiquitin ligase (Habu and Harada, 2021). Further analysis of the ubiquitin chain on RNF213 revealed that RNF213 undergoes autoubiquitination primarily in the form of K63-linked chains, rather than K48-linked chains. This demonstrates that RNF213 functions as a K63-linked E3 ubiquitin ligase, with UBC13 playing a crucial role in mediating RNF213-dependent ubiquitination. Interestingly, RNF213 can form various types of ubiquitin chains, including M1 (Otten et al., 2021), K11 (Bhardwaj et al., 2022), K48 (Tian et al., 2023), and K63 (Habu and Harada, 2021), which may vary depending on the specific pathogens involved.
RNF213 and lipid droplets
The relationship between RNF213 and lipid droplets (LDs) is gradually becoming clearer. LDs consist of a neutral lipid core of triglycerides and sterol esters surrounded by a monolayer of phospholipids, and they have long been deemed solely neutral lipid storage compartments in cells. As cell-autonomous organelles, LDs possess a protein-mediated antimicrobial capacity, and RNF213 has been found on LDs (Bosch et al., 2020). In addition, LDs play fundamental roles in the regulation of inflammation and immunity (Monson et al., 2021). Certain pathogens, including bacteria, parasites, and viruses, induce LD accumulation, leading to enhanced immune responses to infection (Monson et al., 2021; de Almeida et al., 2023).
However, lipid accumulation can interfere with normal cellular and tissue functions, potentially causing lipotoxicity and various metabolic diseases (Brookheart et al., 2009). Palmitate, in particular, is toxic to cells, as high concentrations can induce apoptosis (Paumen et al., 1997). Piccolis and colleagues found that RNF213 depletion reduced palmitate-mediated cytotoxicity by approximately 50%, and knockdown of RNF213 attenuated the cell death induced by palmitate in the hepatocellular carcinoma-derived cell line HepG2 and in SUM159 cells (Piccolis et al., 2019). RNF213 is clearly a modulator of lipotoxicity from saturated fatty acids through a mechanism that affects the ability of cells to store lipids in LDs. Organisms require the nuclear factor-kappa B (NF-κB) signaling pathway to initiate the inflammatory response when infected. Of note, the authors also observed that lipotoxicity and RNF213 depletion have downstream effects on endoplasmic reticulum (ER) stress and NF-κB signaling, suggesting a link between RNF213 and the inflammatory response (Piccolis et al., 2019). However, the exact molecular mechanism remains to be investigated.
Moreover, Sugihara and colleagues reported the first evidence that RNF213 can physically interact with LDs (Sugihara et al., 2019). Overexpression of RNF213 resulted in a notable increase in both the quantity and size of LDs within cells. Conversely, knockout or knockdown of RNF213 resulted in a significant reduction in LD abundance. These findings strongly suggest that RNF213 plays a role in lipid storage in vivo. Consistently, RNF213-knockout (RNF213-KO) cells showed markedly increased adipose triglyceride lipase (ATGL) attached to LDs. Despite not discovering a physical interaction between RNF213 and the rate-limiting lipase ATGL, the researchers speculated that RNF213 may affect a putative anchoring protein through its AAA+ activity. By blocking the influx of ATGL into LDs and eliminating the ATGL on LDs, RNF213 reduces lipolysis and increases fat storage, thus regulating the formation of LDs (Sugihara et al., 2019). However, it was not clear until recently whether monomeric or oligomeric RNF213 associates with LDs. Indeed, some researchers have found that RNF213 oligomerization is associated with LDs in cells treated with type-1 interferon (IFN) (Thery et al., 2021). These findings indicate a deep relationship between RNF213 and LDs, which might be important for the antimicrobial function of RNF213.
Antiviral role of RNF213
Although researchers initially focused on studying the function of RNF213 in MMD, Echizenya et al. demonstrated that RNF213 could alleviate severe headaches in patients with enterovirus-induced hand, foot, and mouth disease. They revealed that RNF213 plays an important role in cerebrovascular diseases caused by viral infections, thus beginning to unveil the link between RNF213 and viruses (Echizenya et al., 2020). However, the first evidence of a direct antiviral role of RNF213 was revealed by Houzelstein and colleagues (Houzelstein et al., 2021). Rift valley fever (RVF) is an acute viral zoonotic disease caused by the RVF virus (RVFV) and transmitted by mosquito vectors or by contact. To demonstrate the antiviral role of RNF213, Houzelstein and colleagues generated RNF213-deficient mice using the clustered regularly interspaced short palindromic repeats (CRISPR)-CRISPR-associated 9 (Cas9) system. RNF213-deficient mice were found to be more susceptible to RVFV, whereas mice overexpressing RNF213 in vivo showed increased resistance to RVFV and exhibited reduced symptoms of infection compared with controls (Houzelstein et al., 2021). In addition, the expression of RNF213 was significantly upregulated in experimental animals, such as chickens and ducks, upon injection with highly pathogenic strains of avian influenza (Pirbaluty et al., 2022). Together, these findings strongly indicate that RNF213 functions as an antiviral protein, playing a critical role in safeguarding against viral infections.
Extensive research has been conducted recently on the antiviral activity of RNF213, unveiling the links between ISGylated proteins, RNF213, and antiviral immunity. The IFN-stimulated gene 15 (ISG15) protein is a ubiquitin-like protein that can be strongly induced by viral (Yuan and Krug, 2001; Jurczyszak et al., 2022) and bacterial infections (Millman et al., 2022), as well as by IFNs (Der et al., 1998; Hermann and Bogunovic, 2017). ISG15 has been implicated as a central player in the host antiviral response (Perng and Lenschow, 2018). Thery and colleagues used the Virotrap approach, which captures protein complexes within virus-like particles (VLPs) that bud from mammalian cells, to identify noncovalent interaction partners of ISG15. They confirmed the presence of RNF213 in the ISG15 VLPs and verified RNF213 as an ISG15-binding protein. Further experiments showed that RNF213 specifically associates with ISG15, defining RNF213 as a sensor for ISGylated proteins. The association of RNF213 with LDs increased upon IFN treatment, accompanied by a notably enhanced appearance of a smear of ISGylated proteins. Mechanistically, they confirmed that IFN-induced RNF213 associates with ISGylated proteins on the surface of LDs in vitro (Thery et al., 2021). Furthermore, Thery and colleagues used herpes simplex virus 1 (HSV-1), respiratory syncytial virus (RSV), and coxsackievirus B3 (CVB3), all of which are ISG15-sensitive viruses. Knockdown of RNF213 in HeLa cells increased the genome replication of the viruses and the expression level of viral proteins. However, it is unclear whether RNF213 binds ISGylated proteins on LDs during infection with these three viruses as in the in vitro experiments, and this requires further verification (Thery et al., 2021). The work of Thery and colleagues suggests that the RNF213-ISG15 association on LDs plays an important role in the antiviral process (Figure 2A). However, the restriction of HSV-1 by RNF213 does not require ISG15, whereas the protective effect of RNF213 against the intracellular bacterial pathogen Listeria monocytogenes is dependent on ISG15, indicating different mechanisms for the antiviral and antibacterial effects.

FIGURE 2

The roles of RNF213 in different microorganisms. (A) Upon HSV-1 infection, RNF213 targets LDs and is involved in the ubiquitination of viral particles (VPs) mediated by LDs. In KSHV infection, RNF213 acts as an E3 ubiquitin ligase to promote K48-linked polyubiquitination of the protein RTA. This downregulates RTA and attenuates its function in initiating downstream replication and transcription. (B) Upon Toxoplasma gondii infection, RNF213 is translocated to the parasitophorous vacuole (PV). On one hand, RNF213 relies on the LUBAC to recruit ubiquitin adapter proteins and initiate PV ubiquitination. On the other hand, the PV can be directly modified by M1-linked and K63-linked ubiquitin to initiate autophagy. (C) Salmonella enters the host cell via receptor-mediated endocytosis and forms a bacteria-containing vesicle (BCV). RNF213-mediated LPS ubiquitination can recruit the E3 ligase LUBAC, which catalyzes the formation of linear ubiquitin chains on the membrane of the bacterial cells recognized by the organism. Finally, the host initiates cellular immune signaling and xenophagy. In Listeria monocytogenes, a Gram-positive bacterium that lacks LPS, RNF213 acts as an ISG15-binding protein that localizes to the bacterial surface and initiates ubiquitination. (D) In GarD-deficient chlamydial inclusion bodies, RNF213 is necessary for the formation of M1-ubiquitin chains to exert anti-Chlamydia activity. However, in the presence of GarD, RNF213 cannot mediate the ubiquitination of inclusion bodies, leading to evasion of immunity.

Kaposi's sarcoma-associated herpesvirus (KSHV), a γ-herpesvirus linked to several human malignancies, has not been shown to be efficiently inhibited by IFN in its lytic replication. Researchers constructed an expression library of ISGs and used murine γ-herpesvirus 68 (MHV-68) as a model virus for KSHV study. They identified several ISGs that could inhibit the replication of MHV-68 (Tian et al., 2023). Of these ISGs, RNF213 significantly suppressed the early gene transcription and genome replication of the virus. Mechanistically, RNF213 acts as an E3 ubiquitin ligase to promote K48-linked polyubiquitination of a viral protein called RTA (replication and transcription activator), which subsequently leads to RTA degradation via the proteasome-dependent pathway. KSHV infection undergoes a switch from the latent phase to lytic replication governed by RTA, known as the "molecular switch." The downregulated RTA is thereby attenuated in its function of initiating downstream replication and transcription (Figure 2A). Intriguingly, RNF213 degrades the RTA protein via the proteasomal pathway rather than the autophagy-lysosome pathway. In summary, this study highlights the role of RNF213 in inhibiting KSHV and provides a novel approach for effective prevention and control of viral infection.
The immune system plays a crucial role in the antiviral response. When viruses invade the body, antigen-presenting cells (APCs) take up antigens and present them to T cells, which then initiate a cascade of immune responses. To further explore the function of RNF213, researchers generated RNF213-KO and RNF213-knockin mice with single-nucleotide insertions corresponding to mutations found in patients with MMD. Mice with dysregulated expression of the RNF213 gene exhibited reduced dendritic cell development, antigen processing, and presentation functions (Tashiro et al., 2021). Hence, RNF213 might aid in eliminating viruses through T cells, thereby achieving antiviral effects. However, the specific mechanism by which RNF213 contributes to the reduced function of APCs requires further investigation.
Anti-parasitic role of RNF213
Toxoplasma gondii (TO) is a parasite capable of causing zoonotic parasitic diseases. Previous studies have shown that immunity-related guanosine triphosphatases (IRGs) and guanylate-binding proteins (GBPs) work cooperatively in the cell-autonomous immune response to Toxoplasma in mouse cells and can promote each other's recruitment to parasitophorous vacuoles (PVs) (Bhushan et al., 2020; Frickel and Hunter, 2021). However, the mechanism by which IRGs and GBPs cooperatively detect and destroy PVs is still under exploration. Researchers have found that IFN-γ-primed host cells prompt IRG-dependent association of Toxoplasma-containing vacuoles with ubiquitin through the regulated translocation of the E3 ubiquitin ligase tumor necrosis factor (TNF) receptor-associated factor 6 (TRAF6) (Haldar et al., 2015). In addition, different members of the TRAF family can localize to Toxoplasma PVs and enhance cell-autonomous immunity to Toxoplasma through ubiquitination in IFN-γ-primed cells (Mukhopadhyay et al., 2020). Moreover, it has been found that the IFN-γ-mediated growth restriction of Toxoplasma gondii depends on the core components of the autophagy pathway but not on the initiation or degradative steps of autophagy. ISG15, which is upregulated by IFN-γ along with other members of the ISGylation pathway, interacts with members of the autophagy-related gene (ATG) family. Although ISG15 does not affect the ubiquitination of the PV, it plays a crucial role in recruiting autophagy mediators such as p62 (SQSTM1), NDP52 (CALCOCO2), and microtubule-associated protein light chain 3 (LC3) to the PV, leading to growth restriction of Toxoplasma gondii by ATG proteins. Thus, ISG15 serves as a vital molecular link between the ATG and IFN-γ signaling pathways, enabling cell-autonomous defense against parasites in human cells (Bhushan et al., 2020).
Hernandez and colleagues further investigated the anti-parasitic potency of RNF213 and found that IFN-γ-stimulated RNF213-KO cells failed to significantly reduce the Toxoplasma burden compared with wild-type (WT) cells. The loss of cell-autonomous immunity in RNF213-deficient cells was confirmed by plaque assays and by measuring the relative light units emitted by luminescent strains. These data verified that RNF213 executes anti-parasitic activity in IFN-γ-stimulated A549 cells and characterized RNF213 as a potent executioner of human anti-parasitic host defense. Furthermore, the researchers demonstrated that IFN-γ induces RNF213 to translocate to the surface of PVs with LUBAC. Subsequently, the host cell can initiate the degradation of intracellular substances through a noncanonical autophagy-related process. Interestingly, linear ubiquitination of PVs also occurred in cells lacking essential components of LUBAC, suggesting that LUBAC is dispensable for Toxoplasma PV ubiquitination and cell-autonomous host defense. This is a notable discovery because LUBAC was the only enzyme known to catalyze M1-ubiquitin chains, suggesting that other E3 ubiquitin ligases catalyzing M1 chains might exist in humans but are yet to be discovered. Moreover, RNF213 could recruit adaptor proteins of autophagy, such as p62, TAX1BP1 (Tax1 binding protein 1), NDP52, and optineurin (OPTN), to ubiquitinated Toxoplasma PVs, which were modified by linear-linked (M1) and K63-linked ubiquitin. Therefore, the host is able to recognize the ubiquitin-labeled PVs and to initiate the immune defense against Toxoplasma gondii infection (Hernandez et al., 2022) (Figure 2B). RNF213 mediates IFN-γ-dependent M1- and K63-linked ubiquitination of Toxoplasma PVs to control pathogen replication, which is likely a shared pattern of pathogenesis between distinct intracellular pathogens (Gilliland and Olive, 2022). However, it is unclear whether RNF213 catalyzes M1- or K63-ubiquitin chains directly in vivo. Likely, RNF213 conjugates the first ubiquitin on PVs, whereas other E3 ubiquitin ligases conjugate secondary ubiquitin repeats, which remains to be demonstrated (Otten et al., 2021). In the future, it would be valuable for researchers to investigate how RNF213 exerts anti-parasitic effects on other parasites and whether RNF213 acts by similar mechanisms against other types of microorganisms. This could provide insights into the potential broad-spectrum antimicrobial activity of RNF213 and its potential as a therapeutic target for infectious diseases. In addition, further studies could explore the potential involvement of RNF213 in other aspects of the immune response and its potential role in the development of immune-related disorders.
Anti-bacterial role of RNF213
Emerging evidence suggests that RNF213 plays a significant role in anti-bacterial immunity and, unexpectedly, that the substrates of its ubiquitination activity extend beyond the proteome. Although lysine ubiquitination is traditionally considered canonical, there is a growing recognition of atypical non-lysine ubiquitination, which is gradually being established as an important regulatory mechanism (Kelsall, 2022). Otten et al. reported that RNF213 can recognize and ubiquitinate lipid A, a component of the outer wall of Gram-negative bacteria, breaking the dogma that only proteins can be ubiquitinated substrates (Otten et al., 2021). To detect ubiquitin on the bacterial surface, they carried out immunoblotting with FK2, an antibody specific for conjugated ubiquitin, and revealed a ubiquitin smear above 50 kDa in WT Salmonella (S.) Typhimurium. However, S. Typhimurium Δrfc, lacking the O-antigen polymerase required for smooth LPS synthesis, carried ubiquitinated products of lower molecular weight. They further demonstrated the ubiquitination of LPS (Ub-LPS) in vitro. To identify the enzyme that could generate Ub-LPS, they fractionated HeLa cell lysates by sequential ammonium sulfate precipitation, hydrophobic interaction chromatography, gel filtration, and ion exchange chromatography, followed by mass spectrometry. RNF213 was the only protein whose peptide counts matched the LPS-ubiquitinating activity. The researchers attenuated RNF213 expression by small interfering RNA (siRNA) or CRISPR-based knockout in human and mouse cells to explore whether RNF213 is required for Ub-LPS and to assess its functional importance for cell-autonomous immunity. Upon RNF213 downregulation, the presence of Ub-LPS on the Gram-negative bacterium S. Typhimurium was reduced, indicating an essential role for human and murine RNF213 in LPS ubiquitination on the bacteria (Otten et al., 2021). Subsequently, the host cells initiate autophagic degradation of the pathogen in response to bacterial invasion (Figure 2C). In vitro studies have demonstrated that purified RNF213 can ubiquitinate LPS in a canonical manner, necessitating the presence of adenosine triphosphate (ATP), E1, and E2 enzymes. Unexpectedly, the RING domain of RNF213 is not required for its intrinsic E3 ligase activity or for Ub-LPS (Ahel et al., 2020; Otten et al., 2021). Collectively, it can be concluded that RNF213 can ubiquitinate the LPS of Gram-negative bacteria invading the cytoplasmic matrix in vivo, and that this ubiquitination depends on the dynein-like core of RNF213. The ability of RNF213 to ubiquitinate LPS breaks the previous notion that only proteins can be ubiquitinated substrates and provides a new research direction for studying the antimicrobial function of RNF213.
It is worth noting that the E3 ligase LUBAC catalyzes the formation of Met1-linked linear ubiquitin chains, which have been demonstrated to play a pivotal role in cell-autonomous immunity (Noad et al., 2017). Otten et al. knocked out RNF213 and found that RNF213-deficient cells failed to recruit LUBAC and NEMO (the M1-specific adaptor) and that RNF213 deficiency hampered the addition of M1-linked ubiquitin chains to the bacterial ubiquitin coat. The research also confirmed that RNF213-KO cells lost the capability to recruit ubiquitin-associated autophagy cargo receptors (Otten et al., 2021). Therefore, when host cells lack the RNF213 gene or contain mutations in the catalytic zinc-binding domain of the protein, the cells fail to recruit LUBAC and to accumulate Met1-linked ubiquitin chains on the bacterial cell surface. On this basis, the researchers demonstrated that lipid A can be ubiquitinated, leading to the removal of bacteria by immune signaling and xenophagy. Salmonella enters the host cell via receptor-mediated endocytosis and forms a bacteria-containing vesicle (BCV). The vesicle ruptures, and the bacteria are exposed to the RNF213-mediated ubiquitination system. RNF213-mediated LPS ubiquitination can recruit the E3 ligase LUBAC, which catalyzes the formation of linear ubiquitin chains on the membrane of the bacterial cells recognized by the organism. Finally, the host initiates cellular immune signaling and xenophagy (Figure 2C). More importantly, the mechanisms underlying how the dynein-like ATPase core and the RING finger contribute to RNF213 function and how LPS is recognized remain to be elucidated. Answers to these questions and intensive research may facilitate a profound understanding of the anti-bacterial potency of RNF213 and offer insights into ubiquitination-dependent cell-autonomous defense against cytosolic bacterial invaders (Peng et al., 2021).
RNF213 has been reported to counteract infection not only with S. Typhimurium but also with Listeria monocytogenes, a Gram-positive bacterium that lacks LPS, implying functional diversity of RNF213 (Martina et al., 2021; Thery et al., 2021). By knocking down RNF213, Thery and colleagues found a significant increase in the intracellular Listeria bacterial load, similar to knockdown of ISG15. Moreover, in ISG15-knockout cells, even overexpressed RNF213 could not play a protective role, indicating that the antibacterial activity of RNF213 requires ISG15. In contrast, unlike the antiviral effect, the restriction of HSV-1 by RNF213 does not require ISG15 (Thery et al., 2021). They further showed that RNF213 acts as a binding protein for ISG15 and localizes at the bacterial surface to exert an antibacterial effect (Figure 2C), whereas other researchers have observed that ISG15 expression can restrict Listeria infection in vitro and in vivo (Radoshevich et al., 2015). The study by Thery et al. suggests that RNF213 can make a significant difference in cell-autonomous immunity and emphasizes the role of the RNF213-ISG15 association in the antibacterial effect. In addition, RNF213 can also regulate the transcription of dimethylarginine dimethylaminohydrolase 1 (DDAH1) and the production of nitric oxide to facilitate antilisterial effects (Martina et al., 2021). Collectively, RNF213 can induce cellular autophagy and stimulate cell-autonomous immunity. The specific antimicrobial mechanism of RNF213 is gradually being redefined as an immune signaling pathway. The antimicrobial activity of RNF213 seems to be specifically targeted toward intracellular bacterial pathogens.
Anti-Chlamydia role of RNF213
In addition to studies investigating the antimicrobial function of RNF213, researchers have found that microorganisms possess corresponding anti-ubiquitination factors to resist the cell-autonomous immunity produced by RNF213, which opens an exciting chapter in pathogen immune evasion (Walsh et al., 2022). By genetic screening using an arrayed library of mutagenized chlamydiae, researchers identified the chlamydial inclusion membrane protein gamma resistance determinant (GarD; also called CTL0390) as a chlamydial effector. Further analysis revealed that the structure of GarD is similar to that of other type III secreted inclusion membrane proteins and that it is inserted into the inclusion membrane during infection. Thereafter, the inclusion bodies are not tagged by IFN-γ-dependent ubiquitin, which averts the association of lysosomal-associated membrane protein 1 with the inclusions, allowing Chlamydia (C.) trachomatis to evade the immune detection and destruction driven by RNF213 (Walsh et al., 2022). Subsequently, GarD::GII inclusion bodies (GarD−) were formed with the insertional inactivation mutants of GarD, and these could be modified by linear ubiquitin with RNF213 in IFN-γ-induced human epithelial cells. Therefore, the ubiquitin E3 ligase RNF213 was identified as a candidate anti-chlamydial protein. However, the WT (GarD+) secretes the GarD protein to stabilize the pathogen-surrounding membranes, which remain devoid of ubiquitin in IFN-γ-primed cells, safeguarding C. trachomatis from ubiquitination and the associated cell-autonomous immunity. In GarD-deficient C. trachomatis inclusion bodies, RNF213 is necessary for the formation of the M1-ubiquitin chain, but it is unclear whether RNF213 directly catalyzes its formation. Interestingly, the LUBAC complex is redundant for this process. This study proposes that other E3 ligases mediating M1-ubiquitin chain types might exist as an alternative to the LUBAC complex (Figure 2D). Furthermore, Walsh and colleagues confirmed that GarD acts as a cis-acting factor, executing its anti-ubiquitination activity directly at the inclusion membrane on which it resides. To capitalize on the ability of C. trachomatis inclusions to fuse with each other, they insertionally inactivated IncA, a chromosomal gene encoding a protein required for homotypic fusion of chlamydial inclusion bodies. Co-infections of GarD::GII with WT C. trachomatis resulted in fused inclusion bodies containing a mix of WT and GarD mutants and a reduction in inclusion ubiquitination. However, inclusion bodies formed by GarD::GII in cells co-infected with the fusion-deficient IncA mutant continued to be ubiquitinated at high frequency. Therefore, they defined GarD as an RNF213 antagonist that is necessary for C. trachomatis to proliferate during IFN-γ-stimulated cell-autonomous immunity in the host cell. Of note, it is likely that RNF213 catalyzes the attachment of the first ubiquitin on non-proteinaceous substrates on the inclusion bodies, which is followed by the formation of M1-type ubiquitin chains by alternative enzymes instead of the LUBAC complex. Collectively, this research suggests that the antimicrobial function of RNF213 can be exerted only when the action of GarD is inhibited. This discovery could be leveraged to develop a novel therapeutic approach to treat Chlamydia infection. It will be interesting and meaningful to identify the microbial virulence factors associated with resisting RNF213-driven host immunity in other microorganisms.
Conclusions and future prospects
Since the discovery of the RNF213 gene in 2011 (Kamada et al., 2011), its role in cardiovascular and cerebrovascular diseases has received much attention owing to the identification of RNF213 as a susceptibility gene for MMD. In recent years, a novel function of RNF213 in the microbiology field has begun to unfold.
TABLE 1. Summary of RNF213 functions in different microorganisms.

Virus
- Rift valley fever virus: RNF213 has the ability to inhibit viral infection (Houzelstein et al., 2021).
- Herpes simplex virus 1, respiratory syncytial virus, coxsackievirus B3: RNF213 associates with ISGylated proteins to target intracellular LDs, which may mediate the ubiquitination of viral particles (VPs) and participate in cellular antiviral functions (Thery et al., 2021).
- γ-Herpesvirus: RNF213 promotes polyubiquitination of the protein RTA in a K48-linked manner, resulting in reduced early gene transcription and genome replication of the virus (Tian et al., 2023).

Parasite
- Toxoplasma gondii: RNF213 can translocate to the surface of Toxoplasma PVs with LUBAC, leading to the initiation of autophagy by the host cells. In addition, RNF213 can recruit ubiquitin adaptor proteins and initiate the modification of PVs with both linear-linked and K63-linked ubiquitin (Hernandez et al., 2022).

Bacteria
- Salmonella Typhimurium: RNF213 labels the lipid A of bacteria by ubiquitination and subsequently initiates autophagic degradation of the pathogen. Moreover, RNF213-mediated ubiquitination of LPS can recruit LUBAC in the host cell (Otten et al., 2021).
- Listeria monocytogenes: RNF213 functions as an ISG15-binding protein, which in turn induces cellular autophagy and stimulates cell-autonomous immunity (Thery et al., 2021).

Chlamydia
- Chlamydia trachomatis: In the absence of GarD, inclusion bodies can be decorated with linear ubiquitin by RNF213 in IFN-γ-induced human epithelial cells. However, when wild-type GarD is present, it protects the inclusion bodies from cell-autonomous immunity (Walsh et al., 2022).
RNF213 plays an important role in combating viruses, parasites, bacteria, and chlamydiae, which are summarized in Table 1.
Regarding the antiviral role of RNF213, researchers have linked its antiviral effects to LDs. RNF213 could bind ISG15 upon IFN-I treatment in human embryonic kidney 293T (HEK293T) cells, HeLa cells, and THP-1 monocytes, and it associates with ISGylated proteins on LDs in vitro, suggesting that RNF213 is able to exert cellular antiviral activity by similar mechanisms (Thery et al., 2021). However, how is RNF213 recruited to the LDs? Additional research is necessary to ascertain the involvement of LDs in the RNF213-mediated ubiquitination process in host cells, and the connection between RNF213 and LDs should be explored further. Moreover, different types of polyubiquitin chains, including M1 and K11 chains, have been implicated in NF-κB activation (Iwai, 2012; Tokunaga, 2013). RNF213 might exert its role in infection through M1-ubiquitin chains involved in NF-κB activation. In KSHV infection, RNF213 promotes polyubiquitination of the viral protein RTA in a K48-linked manner. This downregulates RTA and attenuates its function, resulting in reduced early gene transcription and genome replication of the virus (Tian et al., 2023). Similarly, how does RNF213 recognize the RTA protein of the herpesvirus? Does RNF213 colocalize with the virus or viral components, as it does with intracellular bacterial pathogens? Answers to these questions and intensive studies may facilitate a profound understanding of the antiviral functions of RNF213.
Regarding the anti-parasitic role of RNF213, the mechanism centers on the ubiquitination pathway in Toxoplasma gondii infection. RNF213 can translocate to the surface of PVs with or without LUBAC and execute cell-autonomous host defense. Whether other E3 ubiquitin ligases catalyzing M1-ubiquitin chains exist in humans should be further investigated. In addition, RNF213 can recruit ubiquitin adaptor proteins and facilitate the modification of PVs with both M1- and K63-linked ubiquitin (Hernandez et al., 2022). However, it is unclear whether RNF213 catalyzes M1- or K63-ubiquitin chains directly. What the specific process of ubiquitination looks like, and how RNF213 and other E3 ubiquitin ligases conjugate the ubiquitin chains on PVs, remain to be demonstrated. Moreover, whether similar or different mechanisms exist in other types of parasites needs further investigation.
Regarding the anti-bacterial activity of RNF213, researchers have unveiled an initial set of mechanisms against Gram-positive and Gram-negative bacterial infections. A breakthrough finding is that the LPS of the Gram-negative bacterium Salmonella can be ubiquitinated through K63-linked polyubiquitination or through a pathway involving LUBAC (Otten et al., 2021). Therefore, ubiquitin plays an important role in the host cell's defense against bacterial infection. Although lysine as the substrate may still be considered canonical, it is becoming increasingly clear that ubiquitin can modify cysteine, serine, and threonine residues, as well as the N-terminal amino group of proteins (Kelsall, 2022). Because RNF213 conjugates ubiquitin to non-protein substrates such as the lipid A core of LPS, does RNF213 also conjugate ubiquitin to host lipids, for instance on LDs? This provides researchers with a new perspective for studying RNF213-mediated ubiquitination and for focusing on other kinds of interactors, such as lipids or sugars, which will improve the comprehensive understanding of the antiviral and anti-bacterial activities of RNF213. In addition, researchers have revealed several RNF213-binding proteins. The interaction between RNF213 and ISG15 was discovered and demonstrated to be functionally important in hosts infected by the Gram-positive bacterium Listeria monocytogenes. It is worth mentioning that RNF213 associates with ISG15 to fight against Listeria but not HSV-1, suggesting that the RNF213-ISG15 interaction differs between microorganisms and indicating a deeper relationship between RNF213 and other interactors (Thery et al., 2021).
In Chlamydia trachomatis infection, important clues have identified GarD as the first example of pathogen evasion within this previously unknown IFN-γ-dependent cell-autonomous immune pathway (Gilliland and Olive, 2022). In the absence of GarD, RNF213 could decorate inclusion bodies with linear ubiquitin. However, when WT GarD was present, it protected the inclusion bodies from cell-autonomous immunity (Walsh et al., 2022). Are there other immune evasion mechanisms similar to GarD of Chlamydia in other pathogenic microorganisms? The molecular mechanisms of RNF213-driven cell-autonomous immunity, and of pathogenic resistance to it, should be studied more extensively.
In conclusion, the emerging role of RNF213 as an antimicrobial host determinant has been highlighted in many studies, especially in recent years. However, further work is needed to investigate the mechanisms of the antimicrobial functions of RNF213 in detail.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Single Shot MC Dropout Approximation
Deep neural networks (DNNs) are known for their high prediction performance, especially in perceptual tasks such as object recognition or autonomous driving. Still, DNNs are prone to yield unreliable predictions when encountering completely new situations without indicating their uncertainty. Bayesian variants of DNNs (BDNNs), such as MC dropout BDNNs, do provide uncertainty measures. However, BDNNs are slow during test time because they rely on a sampling approach. Here we present a single shot MC dropout approximation that preserves the advantages of BDNNs without being slower than a DNN. Our approach is to analytically approximate for each layer in a fully connected network the expected value and the variance of the MC dropout signal. We evaluate our approach on different benchmark datasets and a simulated toy example. We demonstrate that our single shot MC dropout approximation resembles the point estimate and the uncertainty estimate of the predictive distribution that is achieved with an MC approach, while being fast enough for real-time deployments of BDNNs.
Introduction
Over the last decade, deep neural networks (DNNs) have arisen as the dominant technique for the analysis of perceptual data. Also in safety-critical applications like autonomous driving, where the vehicle must be able to understand its environment, DNNs have driven rapid progress in several tasks (Grigorescu et al., 2019).
However, classical DNNs have deficits in capturing the model uncertainty (Gal & Ghahramani, 2016). But when using DNN models in safety-critical applications, it is mandatory to provide an uncertainty measure that can be used to identify unreliable predictions (Michelmore et al., 2018) (Feng et al., 2018) (Harakeh et al., 2019) (Miller et al., 2018) (McAllister et al., 2017). For example, in the field of robotics (Sünderhauf et al., 2018), medical applications, or autonomous driving (Bojarski et al., 2016), where machines interact with humans, it is important to identify situations where a model prediction is unreliable and a human intervention is necessary. These can, for example, be situations which are completely different from all that occurred during training.
Employing Bayesian DNNs (BDNNs) (MacKay, 1992) tackles this problem and allows computing an uncertainty measure. However, state-of-the-art BDNNs require sampling during deployment, leading to computation times that are larger than those of a classical DNN by a factor equal to the number of MC runs. This work overcomes this drawback by providing a method that approximates the expected value and variance of a BDNN's predictive distribution in a single run; it therefore has the same computation time as a classical DNN. We focus here on a special variant of BDNNs known as MC dropout (Gal & Ghahramani, 2016). While our approximation method is also applicable to convolutional neural networks and classification settings, we focus in this work on regression with fully connected networks.
Ensembling-based models take an alternative approach to estimating uncertainties and have been successfully applied to DNNs (Lakshminarayanan et al., 2017; Pearce et al., 2020). But ensemble methods also do not allow quantifying the uncertainty in a single-shot manner.
MC Dropout Bayesian Neural Networks
BDNNs are probabilistic models that capture uncertainty by means of probability distributions. Probabilistic DNNs, which are non-Bayesian, only define a distribution for the conditional outcome. In common probabilistic DNNs the output nodes control the parameters of a conditional probability distribution (CPD) of the outcome. For regression-type problems a common choice for the CPD is the normal distribution N(µ, σ²), where the variance σ² quantifies the data uncertainty, known as aleatoric uncertainty. BDNNs define in addition distributions for the weights, which translate into a distribution of the modeled parameters. In this manner the model uncertainty is captured, which is known as epistemic uncertainty (Der Kiureghian & Ditlevsen, 2009). In the case of MC dropout BDNNs each weight distribution is a Bernoulli distribution: the weight takes the value zero with the dropout probability p* and the value w with probability 1 − p*. All weights starting from the same neuron are set to zero simultaneously. The dropout probability p* is usually treated as a fixed hyperparameter and the weight value w is tuned during training.
In contrast to standard dropout (Srivastava et al., 2014), the weights in MC dropout are not frozen and rescaled after training; instead, the dropout procedure is also applied during test time. It can be shown that MC dropout is an approximation to a BDNN (Gal & Ghahramani, 2016). MC dropout BDNNs have been successfully used in many applications and have proven to yield improved prediction performance and to provide uncertainty measures that identify individual unreliable predictions (Gal & Ghahramani, 2016), (Ryu et al., 2019), (Dürr et al., 2018), (Kwon et al., 2020). To employ a trained Bayesian DNN in practice, one performs several runs of predictions. In each run, weights are sampled from the weight distributions, leading to a certain constellation of weight values that are used to compute the parameters of a CPD. To determine the outcome distribution of a BDNN, we draw samples from the CPDs that resulted from different MC runs. In this way, the outcome distribution incorporates both the epistemic and the aleatoric uncertainty. A drawback of an MC dropout BDNN compared to its classical DNN variant is the increased computing time. The sampling procedure leads to a computing time that is prohibitive for many real-time applications like autonomous driving.
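To make the sampling procedure concrete, the following sketch (a minimal illustration, not the authors' code) performs T stochastic forward passes through a Keras model with dropout kept active at test time and summarizes the resulting predictive samples; the architecture and all values are hypothetical.

```python
import numpy as np
import tensorflow as tf

# A small fully connected regression network with dropout after each dense layer.
def make_model(p_drop=0.3):
    return tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dropout(p_drop),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dropout(p_drop),
        tf.keras.layers.Dense(1),
    ])

def mc_dropout_predict(model, x, T=30):
    """Run T forward passes with dropout active (training=True) and
    return the per-point mean and variance of the sampled outputs."""
    samples = np.stack([model(x, training=True).numpy() for _ in range(T)])
    return samples.mean(axis=0), samples.var(axis=0)

# Usage on dummy data (untrained weights, for illustration only):
model = make_model()
x = np.random.randn(8, 1).astype("float32")
mean, var = mc_dropout_predict(model, x, T=30)
```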
Moment Propagation
Our method relies on statistical moment propagation (MP). More specifically, we propagate the expectation and the variance of our signal distribution through the different layers of a neural network. The variance of the signal arises due to the dropout process. Quantifying the variance after a transformation is also done in error propagation (EP). EP quantifies how an uncertainty of an input which is transformed by a function (e.g., a measurement error) transfers to an uncertainty of the output of this function. In the case of a continuous output it is common to characterize the uncertainty by the variance. This approach is also used in statistics as the delta method (Dorfman, 1938). In MP we approximate the layer-wise transformations of the variance and the expected value. A similar approach has been used for neural networks before (Frey & Hinton, 1999; Adachi, 2019), and to detect adversarial examples in (Jin, 2015) and (Gast & Roth, 2018).
But, to the best of our knowledge, our approach is the first method that provides a single shot approximation of the expected value and the variance of the predictive distribution resulting from an MC dropout NN.
Methods
The goal of our method is to approximate the expected value E and the variance V of the predicted output that is obtained by the MC dropout method described above. When propagating an observation through an MC dropout network, we get at each layer with p nodes an activation signal with an expected value E (of dimension p) and a variance given by a variance-covariance matrix V (of dimension p × p). We neglect the effect of correlations between different activations, which are small anyway in deeper layers due to the decorrelating effect of dropout; hence, we only consider the diagonal terms of the covariance matrix. In the following, we describe for each layer type in a fully connected network how the expected value E and the variance V are propagated. As layer types we consider dropout, dense, and ReLU activation layers. Figure 1 provides an overview of the layer-wise abstraction.
Dropout Layer
We start our discussion, with the effect of MC dropout. Let E i be the expectation at the ith node of the input layer and V i the variance at the ith node. In a dropout layer the random value of a node i is multiplied independently with a Bernoulli variable Y ∼ Bern(p * ) that is either zero or one.
The expectation E^D_i of the i'th node after dropout is then given by:

E^D_i = p* · E_i.

For computing the variance V^D_i of the i'th node after dropout, we use the fact that the variance V(X · Y) of the product of two independent random variables X and Y is given by (Goodman, 1960):

V(X · Y) = V(X) V(Y) + V(X) E(Y)^2 + V(Y) E(X)^2.

With E(Y) = p* and V(Y) = p* (1 − p*), we get:

V^D_i = p* · V_i + p* (1 − p*) · E_i^2.

Dropout is the only layer in our approach where uncertainty is created; i.e., even if the input has V_i = 0 (and E_i ≠ 0), the output of the dropout layer has V^D_i > 0 for 0 < p* < 1.
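A direct transcription of these two moment updates into code could look as follows (a minimal sketch; E and V are per-node arrays and p_keep plays the role of p*):

```python
import numpy as np

def dropout_moments(E, V, p_keep):
    # E^D_i = p* E_i and V^D_i = p* V_i + p* (1 - p*) E_i^2 (Goodman's product formula).
    E_D = p_keep * E
    V_D = p_keep * V + p_keep * (1.0 - p_keep) * E**2
    return E_D, V_D
```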
Dense Layer
For the dense layer with p input and q output nodes, we compute the value of the i'th output node as sum_{j=1}^{p} w_ji x_j + b_i, where x_j, j = 1 . . . p, are the values of the input nodes. Using the linearity of the expectation, we get the expectation E^F_i of the i'th output node from the expectations E_j, j = 1 . . . p, of the input nodes:

E^F_i = sum_{j=1}^{p} w_ji E_j + b_i.

To calculate the change of the variance, we use the fact that the variance under a linear transformation behaves as V(w_ji · x_j + b_i) = w_ji^2 V(x_j). Further, we assume independence of the p different summands, yielding:

V^F_i = sum_{j=1}^{p} w_ji^2 V_j.
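In code, both moment updates for a dense layer reduce to matrix products (a sketch; W has shape p × q, and E and V are the per-node input moments):

```python
import numpy as np

def dense_moments(E, V, W, b):
    E_F = E @ W + b        # linearity of expectation
    V_F = V @ (W * W)      # variances of independent summands add with squared weights
    return E_F, V_F
```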
ReLU Activation Layer
To calculate the expectation E^R_i and variance V^R_i of the i'th node after a ReLU, as a function of the E_i and V_i of this node before the ReLU, we need to make a distributional assumption. We assume that the input is Gaussian distributed; with φ(x) = N(x; E_i, V_i) the PDF, and Φ(x) the corresponding CDF, we get (see (Frey & Hinton, 1999) for a derivation) for the expectation and variance of the output:

E^R_i = E_i · (1 − Φ(0)) + V_i · φ(0),

V^R_i = (E_i^2 + V_i) · (1 − Φ(0)) + E_i · V_i · φ(0) − (E^R_i)^2.
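The two closed-form expressions above can be evaluated with the standard-normal PDF and CDF, which is an equivalent parameterization; a minimal sketch, with a small variance floor added here purely as a numerical-stability detail:

```python
import numpy as np
from scipy.special import erf

def relu_moments(E, V):
    # Moments of max(0, X) for X ~ N(E, V), applied elementwise (rectified Gaussian).
    s = np.sqrt(np.maximum(V, 1e-12))
    alpha = E / s
    cdf = 0.5 * (1.0 + erf(alpha / np.sqrt(2.0)))        # P(X > 0)
    pdf = np.exp(-0.5 * alpha**2) / np.sqrt(2.0 * np.pi)
    E_R = E * cdf + s * pdf
    V_R = (E**2 + V) * cdf + E * s * pdf - E_R**2
    return E_R, np.maximum(V_R, 0.0)                     # clip rounding noise
```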
Results

Toy Dataset
We first apply our approach to a one-dimensional regression toy dataset with a single input feature. We use a fully connected NN with three layers of 256 nodes each, ReLU activations, and dropout after the dense layers. We have a single node in the output layer, which is interpreted as the expected value µ of the conditional outcome distribution p(y|x). We train the network using the MSE loss and apply dropout with p* = 0.3. From the MC dropout BDNN, we get at each x-position T = 30 MC samples µ_t(x), from which we can estimate the expectation E_µ by the average value and V_µ by the variance of µ_t(x). For comparison, we use our MP approach to approximate the expected value E_µ and the variance V_µ of µ at each x-position (see the upper panel of Figure 2). We also included the deterministic output µ(x) of the DNN, in which dropout has been used only during training. All three approaches yield nearly identical results within the range of the training data. We attribute this to the fact that we have plenty of training data, so the epistemic uncertainty is negligible. The lower panel of Figure 2 compares the uncertainty of µ(x) by displaying an interval given by the expected value of µ(x) plus or minus two times the standard deviation of µ(x).
Here, the widths of the resulting intervals of the BDNN via the MP approach and via MC dropout are comparable (the DNN has no spread). This indicates the usefulness of this approach for epistemic uncertainty estimation.
UCI-Datasets
To benchmark our method, we redo the analysis of (Gal & Ghahramani, 2016) for the UCI regression benchmark datasets. We use the same NN model as Gal and Ghahramani: a fully connected neural network with one hidden layer and ReLU activation, in which the CPD p(y|x) over T = 10,000 MC runs is obtained by sampling from the normal PDF:

p(y|x) ≈ (1/T) sum_{t=1}^{T} N(y; µ_t(x), τ^{-1}).   (8)

Again, µ_t(x) is the single output of the BDNN for the t'th MC run. To derive a predictive distribution, Gal assumes in each run a Gaussian distribution centered at µ with a precision τ, corresponding to the reciprocal of the variance. The parameter µ is received from the NN, and τ is treated as a hyperparameter. For the MP model, the MC sampling (Eq. 8) is replaced by integration over the distribution of µ, which again yields a Gaussian:

p(y|x) = ∫ N(y; µ, τ^{-1}) N(µ; E_µ, V_µ) dµ = N(y; E_µ, V_µ + τ^{-1}).

We used the same protocol as (Gal & Ghahramani, 2016), which can be found at https://github.com/yaringal/DropoutUncertaintyExps. Accordingly, we train the network for 10× the epochs provided in the individual dataset configuration. As described in (Gal & Ghahramani, 2016), an extensive grid search over the dropout rate p* = 0.005, 0.01, 0.05, 0.1 and different values of the precision τ is done. The hyperparameters minimizing the validation NLL are chosen and applied on the test set.
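Under this Gaussian closure, the per-observation test NLL of the MP model can be computed in closed form; a minimal sketch (array names are illustrative):

```python
import numpy as np

def mp_gaussian_nll(y, E_mu, V_mu, tau):
    # Predictive density N(y; E_mu, V_mu + 1/tau): integrating the Gaussian
    # observation model over mu ~ N(E_mu, V_mu) yields another Gaussian.
    var = V_mu + 1.0 / tau
    return 0.5 * np.mean(np.log(2.0 * np.pi * var) + (y - E_mu) ** 2 / var)
```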
In Table 1 we report the test performance (RMSE and NLL) achieved via the MC BDNN using the optimal hyperparameters for the different UCI datasets. We also report the test RMSE and NLL achieved with our MP method. Overall, the MC and MP approaches produce similar results. However, as shown in the last column of the table, the MP method is much faster, having to perform only one forward pass instead of T = 10,000 forward passes.
Discussion
With our MP approach we have introduced an approximation to MC dropout which requires no sampling but instead propagates the expectation and the variance of the signal through the network. This results in a time saving by a factor that approximately corresponds to the number of MC runs (10,000 in our benchmark experiment). We have shown that our fast MP approach precisely approximates the expectation and variance of the prediction distribution achieved by MC dropout. The achieved prediction performance in terms of RMSE and NLL also shows no significant differences between MC dropout and our MP approach. Hence, the presented MP approach opens the door to including uncertainty information in real-time applications.
We are currently working on extending the approach to different architectures such as convolutional neural networks. We are also investigating how to make use of the uncertainty information to detect novel classes in classification settings.
A Genetic and Metabolomic Perspective on the Production of Indole-3-Acetic Acid by Pantoea agglomerans and Use of Their Metabolites as Biostimulants in Plant Nurseries
The species Pantoea agglomerans includes strains that are agronomically relevant for their growth-promoting or biocontrol traits. Molecular analysis demonstrated that the IPDC pathway involved in the conversion of tryptophan (Trp) to indole-3-acetic acid (IAA) is highly conserved among P. agglomerans strains at both the gene and protein levels. Results also indicated that the promoter region controlling the inducible expression of the ipdC gene differs from the model system Enterobacter cloacae, which is in accordance with the observation that P. agglomerans accumulates higher levels of IAA when cells are collected in the exponential phase of growth. To assess the potential applications of these microorganisms for IAA production, P. agglomerans C1, an efficient auxin-producer strain, was cultivated in a 5 L fermenter so as to evaluate the effect of the medium formulation, the physiological state of the cells, and the induction timing on the volumetric productivity. Results demonstrated that higher IAA levels were obtained by using a saline medium amended with yeast extract and saccharose and by providing Trp, which acts both as a precursor and an inducer, to a culture in the exponential phase of growth. Untargeted metabolomic analysis revealed a significant effect of the carbon source on the exometabolome profile relative to IAA-related compounds and other plant-bioactive signaling molecules. The IAA-enriched metabolites secreted into the culture medium by P. agglomerans C1 were used as plant biostimulants in a series of trials at a large-scale nursery farm. Tests were carried out with in vitro and ex vitro systems following the regular protocols used for large-scale agamic propagation of tree crops. Results obtained with 4,540 microcuttings of Prunus rootstock GF/677 and 1,080 plantlets of Corylus avellana L. showed that metabolites from strain C1 improved the percentage of rooted explants, the number of adventitious roots formed, plant survival, and plant quality (vigor), with an increase in leaf area of between 17.5 and 42.7% compared to IBA-K (indole-3-butyric acid potassium salt)-treated plants.
INTRODUCTION

Indole-3-acetic acid (IAA) is the most abundant member of the auxin family of phytohormones, and its biosynthesis in plants and bacteria proceeds through distinct biosynthetic routes: both tryptophan (Trp)-dependent and Trp-independent pathways have been described (Eckardt, 2001; Ljung et al., 2002). Based on the distinct intermediates involved in Trp-dependent IAA biosynthesis, five different pathways have been characterized in bacteria, namely, the indole-3-acetamide (IAM), indole-3-pyruvic acid (IPyA), indole-3-acetonitrile (IAN), tryptamine (TAM), and Trp side-chain oxidase pathways (Patten et al., 2013; Doktycz, 2019). Although the Trp-independent pathway is thought to occur in bacteria as well (Prinsen et al., 1993), no specific enzymes in this pathway have been characterized.
The IPyA pathway is operational in plant-beneficial bacteria, such as Azospirillum brasilense and representative members of Enterobacter cloacae complex and is subjected to extremely tight regulation. In this pathway, there is transamination of Trp to IPyA, followed by decarboxylation to indole-3-acetaldehyde (IAAld) by the enzyme indole-3-pyruvate decarboxylase (IPDC; EC 4.1.1.74), and then oxidation of IAAld to IAA. The key enzyme in this pathway, IPDC, is encoded by ipdC, and deletion or functional inactivation of ipdC gene affects IAA biosynthesis in some strains, such as Enterobacter ludwigii (formerly, E. cloacae) UW5, A. brasilense, Pantoea agglomerans 299R, and Pantoea species YR343 (Koga et al., 1991;Brandl and Lindow, 1997;Malhotra and Srivastava, 2008;Garcia et al., 2019). The IPDC genes code for polypeptides of approximately 550 amino acids in length, corresponding to a molecular mass of 60 kDa per subunit. The homotetrameric IPDC from E. ludwigii UW5, characterized both at biochemical and structural levels, has a molecular mass of 240 kDa and binds four molecules of the cofactors thiamine diphosphate (ThDP) and Mg 2+ (Schutz et al., 2003). The ipdC gene from E. ludwigii UW5 is activated by the transcription factor TyrR and increases in response to the aromatic amino acid Trp, tyrosine, and phenylalanine (Ryu and Patten, 2008;Coulson and Patten, 2015). However, regulation varies across bacterial species: constitutive in Agrobacterium; regulated by specific transcriptional factors, such as RpoS or RpoN, in some Pseudomonas and Enterobacter strains; regulated by the global signal transduction system GacS/GacA that controls secondary metabolism in several plant-associated gram-negative bacteria (Spaepen et al., 2007).
Indole-3-acetic acid plays an important role in the regulation of growth and development of vascular plants, including cell division, cell extension, and cell differentiation (Kasahara, 2016). It specifically plays a crucial role in root initiation, apical dominance, tropisms, and senescence (Zhao, 2010). In addition, IAA is produced by plants, as well as by some beneficial bacteria in the rhizosphere, where it acts as a signaling molecule with significant effects on the communication between plants and microorganisms, and on plant growth (Spaepen and Vanderleyden, 2011;Duca et al., 2014).
In recent years, several studies have reported alternative approaches to the application of phytohormones and plant growth regulators, such as the use of symbiotic organisms and/or natural biostimulants derived from microbial and non-microbial organisms in agricultural systems and in tissue culture; microbial biostimulants, in particular, are also environmentally friendly (Ruzzi and Aroca, 2015; Orlikowska et al., 2017; Rouphael and Colla, 2018). Within this framework, the species P. agglomerans has drawn attention for its plant growth-promoting activity.
Classification of P. agglomerans into biosecurity group 2, and the fact that this species sometimes causes human infections, particularly in immunocompromised people, prevent its utilization as a bioinoculant in Europe (Dutkiewicz et al., 2016a; Büyükcam et al., 2018). However, increasing evidence has shown that selected members of the P. agglomerans species can have great potential as plant growth-promoting bacteria (Paredes-Páliz et al., 2016a). The species comprises strains that are agronomically relevant for their growth-promoting or biocontrol traits and that have been increasingly regarded as ideal candidates among plant growth-promoting rhizobacteria to be used as biocontrol agents (Dutkiewicz et al., 2016b).
In detail, P. agglomerans strain C1, isolated from the phyllosphere of lettuce plants (Lactuca sativa L.) treated with plant-derived protein hydrolysates (Luziatelli et al., 2016), has been studied for its potential as a novel biostimulant in sustainable agriculture. It has been specifically characterized for its heavy metal resistance and metabolic capacities (Luziatelli et al., 2019b), as well as for its ability to solubilize phosphate, to inhibit plant pathogens, to produce IAA and siderophores, and to improve the use of rock phosphates and the growth of corn (Zea mays L.) and tomato (Solanum lycopersicum L.) in pot experiments (Luziatelli et al., 2019a; Saia et al., 2020). In particular, when P. agglomerans strain C1 is grown on medium rich in Trp, large amounts of IAA are produced, which makes it a natural biostimulant suitable for use in the rooting phase of micropropagation instead of synthetic auxins.
Micropropagation allows rapid, large-scale clonal multiplication of several plant species and is becoming a widespread technique for the propagation of rootstocks and crop species (Loberant et al., 2010). On the nursery farming scale, micropropagation has a strong economic impact, because only a relatively short time period, reduced space, and controlled growing conditions are required, independently of environmental conditions and season (Akin-Idowu et al., 2009). Micropropagation starts, in fact, from a small part of a selected elite plant and allows the production of thousands of plants in a continuous process (Pierik and Scholten, 1994). However, rhizogenesis and root growth are critical morphophysiological events for plant survival in ex vitro conditions and are achieved by auxin treatment of unrooted microshoots in the last phase of in vitro culture and/or the first phase of the in vivo acclimatization phase.
In micropropagation of woody crops, it has been observed that, besides IAA, synthetic molecules with strong auxin activities can be used. Among them, indole-3-butyric acid (IBA) is the compound of choice, thanks to its higher stability in the culture medium (Nissen and Sutter, 1990;Bhatti and Jha, 2010). Moreover, it has been assumed that, among the metabolites making up the pool of excreted molecules (exometabolome), there are molecules that could have a synergistic activity with auxin and/or elicitor roles, acting on the regulation pathways of adventitious rooting events and quality of roots.
The study was designed to test the biostimulant properties of metabolites excreted by P. agglomerans strain C1 on root induction and plant development of some woody fruit crop species in large-scale nurseries, at the phase of in vitro and ex vitro rooting and acclimation.
Pathway and Gene Identification
As a preliminary step, a database of bacterial proteins associated with the five major IAA biosynthetic pathways was constructed following a comprehensive search of the literature. Then, the complete set of 4,497 protein sequences encoded by P. agglomerans C1 was compared against this manually curated set of IAA biosynthetic genes using the Basic Local Alignment Search Tool (BLAST; Altschul et al., 1990). Multiple protein sequence alignments were generated using the ClustalW algorithm (Thompson et al., 1994). The bacterial promoter prediction program BPROM was used to identify the position of the promoter, that is, the -10 box and -35 box, in the input sequence. The PATRIC BLASTN and BLASTP services were used to perform homology searches against P. agglomerans genomes available at the PATRIC website using, as queries, nucleotide/protein sequences of strain C1 corresponding to (1) the 4 kb ipdC-containing region; (2) the 5′ upstream region of the ipdC gene; (3) the genes encoding aminotransferase (peg.1678), IPDC (ipdC; peg.1955), and IAAld dehydrogenase (peg.576 and peg.879); (4) the deduced protein sequences of peg.1678, peg.1955, peg.576, and peg.879. The resulting hits were filtered at 100% query coverage and ≥ 80% sequence identity.
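The coverage/identity filtering step is straightforward to reproduce; a hedged sketch assuming BLAST tabular output (-outfmt 6 with its default columns) and a dictionary of query lengths — both assumptions, since the text does not specify the exact output format used:

```python
def filter_hits(blast_tsv, query_lengths, min_identity=80.0):
    # Keep hits with 100% query coverage and >= 80% sequence identity.
    kept = []
    with open(blast_tsv) as fh:
        for line in fh:
            fields = line.rstrip("\n").split("\t")
            query, subject = fields[0], fields[1]
            identity = float(fields[2])   # percent identity (outfmt 6, column 3)
            aln_len = int(fields[3])      # alignment length (column 4)
            coverage = 100.0 * aln_len / query_lengths[query]  # approximate coverage
            if coverage >= 100.0 and identity >= min_identity:
                kept.append((query, subject, identity))
    return kept
```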
Growth Conditions
Seed cultures, from a frozen glycerol stock of P. agglomerans strain C1, were inoculated into 500-mL Erlenmeyer flasks containing 50 mL of LB medium and incubated at 30 °C with agitation (180 rpm).
Seed cultures in the late exponential phase of growth [optical density at 600 nm (OD600) of 4.5-4.8] were used to inoculate, at an initial OD600 of 0.1, 25 mL of production medium amended with Trp (4 mM), and growth was monitored by measuring the wet weight of cells. In addition, LB + Trp (4 mM) was used as the reference medium for IAA biosynthesis.
After 24 h of growth at 30 °C and 180 rpm, cells were removed by centrifugation (10 min at 8,000 g), and the supernatant was filter-sterilized through a 0.22 µm filter and stored at −20 °C until use.
Fermentation and Optimization of the Induction Conditions
To evaluate the relationship between the physiological state of the cells and the IAA production level, P. agglomerans strain C1 was grown in a BioFlo 120 bench-top stirred tank reactor (Eppendorf S.r.l., Milan, Italy) equipped with a 7.5-L vessel, two Rushton impellers, digital ISM probes for dissolved oxygen (DO) and pH (Mettler-Toledo S.p.A., Milan, Italy), and a platinum RTD (Pt100) sensor for the temperature. The growth was carried out in YES medium (5 L) at 30 °C under aerobic conditions. The oxygen concentration was maintained greater than 20% of saturation (oscillating between 25 and 40%) by blowing sterile air at an aeration rate of 0.2-1.5 vvm (vol/vol/min) and by regulating the impeller speed from 150 rpm (initial condition) to 525 rpm (end of the exponential phase of growth). The initial pH of the medium was adjusted to 6.6 by addition of 1 M HCl or 2 M NaOH, and growth was carried out without pH control.
The reactor was inoculated at 5% (vol/vol) with an initial optical density (OD600) of 0.4, using an LB culture grown at 180 rpm and 30 °C up to the late exponential phase. After 3, 3.5, 4, 4.5, 5, and 6 h of growth, 100 mL aliquots were collected from the fermenter, and the cells recovered by centrifugation (8,000 rpm for 10 min) were resuspended in fresh YES medium to a final OD600 of 10 and used to inoculate shaken flasks containing YES medium (25 mL) amended with Trp (4 mM). All cultures were inoculated at the same initial OD600 of 0.5 and incubated at 180 rpm and 30 °C for 18 h. For each time point, experiments were carried out in triplicate. To measure the accumulation of IAA in the culture medium, 1 mL samples were taken at intervals of 60 min for the first 2 h and at the end of the growth (18 h) and treated as reported before.
Spectrophotometric Determination of Indole Auxins
Auxin production was measured using Salkowski reagent as described previously. In brief, 1 mL of filter-sterilized (0.22 µm) supernatant was added to 2 mL of Salkowski reagent (0.5 M FeCl3, 35% vol/vol HClO4), and the mixture was incubated at room temperature (in the dark) for 20 min. The presence of IAA and other indole auxins was detected by measuring pink color development at 535 nm using a Cary 50 UV-Vis spectrophotometer (Agilent, Santa Clara, CA, United States). A series of IAA standard solutions of known concentrations was prepared to set up the calibration curve.
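The calibration step amounts to a simple linear fit; a sketch with made-up standard readings, since the actual calibration values are not reported in the text:

```python
import numpy as np

# Hypothetical A535 readings for IAA standards of known concentration (ug/mL).
standards = np.array([0.0, 10.0, 25.0, 50.0, 100.0])
a535 = np.array([0.02, 0.11, 0.27, 0.52, 1.05])

slope, intercept = np.polyfit(standards, a535, 1)

def iaa_concentration(absorbance):
    # Invert the calibration line to estimate IAA concentration in a sample.
    return (absorbance - intercept) / slope

print(round(iaa_concentration(0.40), 1))  # estimated ug/mL for a sample reading
```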
Metabolome Analysis by Quadrupole Time-of-Flight Liquid Chromatography-Mass Spectrometry
For untargeted metabolomics, the sample (1 mL) was extracted in 5 mL of cold (−20 °C) acidified (0.1% HCOOH) 80/20 methanol/water using an Ultra-Turrax homogenizer (Ika T-25, Staufen, Germany), centrifuged at 1,200 rpm, and filtered through a 0.2 µm cellulose membrane. The analysis was carried out on an Agilent 6550 Q-TOF with an ESI source, coupled with an Agilent 1290 UHPLC. A BEH C18 column from Waters (100 × 2.1 mm internal diameter, 1.7 µm) was used according to the procedure and gradient described in Tsugawa et al. (2019). The injection volume was 2 µL for all samples. A pooled quality control was obtained by mixing 10 µL of each sample and was acquired in tandem mass spectrometry mode using the iterative function five consecutive times to increase the number of compounds with associated MS2 spectra. Blank filtering, alignment, and identification were accomplished using MS-DIAL (Tsugawa et al., 2015) and MS-FINDER (Tsugawa et al., 2016), following the procedure described by Blaženović et al. (2019).
The table with all compound peak heights was exported from MS-DIAL into MS-FLO (DeFelice et al., 2017) to reduce false positives and duplicates. Then, an internally developed workflow in R was employed for fold change and Benjamini-Hochberg corrected p-values, PLS-DA analysis (Thévenot et al., 2015), and chemical enrichment analysis (Barupal et al., 2012).
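The fold-change and Benjamini-Hochberg steps of such a workflow are easy to sketch (here in Python rather than R, and with Welch's t-test standing in for whatever test the internal workflow used — an assumption):

```python
import numpy as np
from scipy.stats import ttest_ind

def fold_change_bh(group_a, group_b):
    # group_a, group_b: (samples x features) matrices of peak heights.
    fc = group_a.mean(axis=0) / group_b.mean(axis=0)
    p = np.array([ttest_ind(group_a[:, i], group_b[:, i], equal_var=False).pvalue
                  for i in range(group_a.shape[1])])
    # Benjamini-Hochberg adjustment: adj_(i) = min over j >= i of p_(j) * m / j.
    order = np.argsort(p)
    m = len(p)
    q = p[order] * m / np.arange(1, m + 1)
    adj = np.empty(m)
    adj[order] = np.minimum.accumulate(q[::-1])[::-1]
    return fc, np.minimum(adj, 1.0)
```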
Plant Inoculation
In vitro shoot tips of the peach clonal rootstock "GF677" (Prunus persica × Prunus amygdalus) of 10 mm in length and in vitro shoot tips of hazelnut cv. "Fertile de Coutard" (Corylus avellana L.) of 15 mm in length were cultured on proliferation medium containing a modified Quoirin and Lepoivre (QL; Quoirin and Lepoivre, 1977) and Driver and Kuniyuki Walnut (DKW; Driver and Kuniyuki, 1984) basal salt solution, respectively, enriched with 30 g L−1 sucrose and solidified with 6.8 g L−1 agar. The growth regulators added to the medium were as follows: 2.22 µM of 6-benzyladenine (BA) and 0.05 µM of α-naphthalene acetic acid for the rootstock GF677; 10 µM of BA and 0.05 µM of IBA for the cv. Fertile de Coutard. The pH of the medium was adjusted to 6.3 ± 0.1 for the rootstock GF677 and 6.0 ± 0.1 for the cv. Fertile de Coutard before addition of agar and sterilization at 120 °C for 20 min. Shoots of rootstock GF677 were subcultured at a 4-week interval, while shoots of cv. Fertile de Coutard were subcultured at a 6-week interval, under a 16 h light photoperiod, using white fluorescent lamps Philips TL-D 58/865-MASTER (Philips, Italia), at 40 µmol m−2 s−1 photon flux and constant temperature. After the proliferation step, microcuttings were transferred for 15 days to Murashige and Skoog (MS) elongation medium (Murashige and Skoog, 1962), supplemented with 14 µM gibberellic acid (GA3) and 20 g L−1 sucrose. The medium was sterilized at 120 °C (2 bars) for 20 min after addition of 6.8 g L−1 agar and pH titration to 6.5. Glass jars of 500 mL in volume, each containing 100 mL of culture medium, were used as culture vessels.
At the beginning of the acclimatization, before transfer into the cell plug trays, plantlets were immersed for 10 s in an auxin solution containing either an appropriate dilution of the IAA-enriched excretome from P. agglomerans strain C1 (indole auxins at a final concentration of 1 µM) or 10 µM IBA potassium salt (IBA-K). Ex vitro acclimatization started in April 2018 by transferring the treated plantlets into 360-cell plug trays (Jiffy, Netherlands), with a volume of each cell cavity of 7 cm3.
Trays were placed under the controlled misting of the greenhouse (temperature 28 °C ± 4 °C, RH 90%), at natural photoperiod and photosynthetically active radiation varying daily between 300 and 500 µmol m−2 s−1.
All on-farm trials, two with Prunus rootstock GF677 and one with hazelnut, were carried out in agreement with the company's production cycle. In the first trial with rootstock GF677, 720 plants (two trays of 360 cells) were treated by dipping with secreted metabolites from strain C1, and an equal number of plants were treated with IBA-K solution. In the second trial, 3,780 plants (10.5 trays of 360 cells) were treated with C1 metabolites, and 1,800 plants (five trays of 360 cells) with IBA-K. In both experiments, after 10 days from the dip treatment, plants were treated again by spraying fine drops of the same solution until the complete wetting of the leaves was achieved.
For hazelnut adventitious rooting induction experiments, 1,440 plantlets were treated by dipping with secreted metabolites from strain C1, and 1,080 plantlets were treated with IBA-K.
On the 20th day of the ex vitro acclimatization, the percentage of rooted plantlets, the number of roots per plantlet, the root length, and the elongation of plantlet stem were recorded.
After 1 month, all plantlets were transferred into 60-cell preloaded plug trays (Jiffy), each cell with a volume of 117 cm3. Two weeks later, the survival ratio of plants was determined, and the total leaf area per plant was measured with the Android "CANOPEO" application on a total of 600 randomly chosen plants (300 controls and 300 Pantoea-treated).
Since the beginning of the transfer in vivo, every 20 days, plants were fertilized with NUTRIGREEN AD (GREEN HAS ITALIA S.p.A.), through the fertigation system at the amount of 2 mL L −1 .
Statistical Analysis
Statistically significant differences between the means were determined by the one-way analysis of variance using the SigmaStat 3.1 package (Systat Software Inc., San Jose, CA, United States).
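For reference, a comparison of two treatment groups with a one-way ANOVA takes a single call in scipy; the numbers below are made up purely for illustration:

```python
from scipy.stats import f_oneway

iaa_e_c1 = [5, 6, 5, 4, 6]   # hypothetical roots per explant, treated group
iba_k = [4, 3, 4, 3, 4]      # hypothetical roots per explant, control group

stat, p_value = f_oneway(iaa_e_c1, iba_k)
print(f"F = {stat:.2f}, p = {p_value:.4f}")
```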
Identification of the IAA-Biosynthetic Genes
Genes encoding enzymes involved in IAA synthesis were identified in the P. agglomerans C1 genome by BLASTx analysis using the sequences of 11 different enzymes associated with alternative IAA pathways as queries (Table 1). This analysis revealed the presence, in strain C1, of the whole set of genes of only one of the five IAA pathways occurring in bacteria and plants: the indole-3-pyruvic acid (IPyA) pathway (Table 1 and Figure 1). Notably, for each of the three enzymes involved in this pathway, the identity at the amino acid level with sequences of other Pantoea strains was between 81 and 92% along the entire protein length (Table 1). For the same proteins, the identity between sequences from strain C1 and strains belonging to other genera, including Azospirillum, Pseudomonas, Enterobacter, and Arthrobacter, varied between 25 and 56% (Table 1).
Proteins with weak similarity to (i) indole acetamide hydrolase (pathway IAM) and (ii) Trp decarboxylase and amine oxidase (pathway TAM) were also identified (Table 1 and Figure 1). In contrast, no sequence related to Trp aminotransferase and YUCCA enzymes (pathway IPyA-YUCCA) or nitrilases (pathway IAN) was detected ( Table 1).
Sequence Comparison of IPDC Proteins
BLAST analysis of the ThDP-binding indolepyruvate decarboxylase (IPDC; EC 4.1.1.74) revealed (Table 1) that the deduced amino acid sequence encoded by C1_peg1955 from P. agglomerans C1 (IPDC Pa_C1) shares 92% identity with IPDC from P. agglomerans (formerly Erwinia herbicola) strain 299R (IPDC Pa_299R; Brandl and Lindow, 1996) and 73% identity with IPDC1 from Pantoea species strain YR343 (IPDC1 Psp_YR343; Garcia et al., 2019). To gain more information about IPDC Pa_C1, a comparative analysis was carried out between this protein and IPDC from E. ludwigii (previously misidentified as Pseudomonas putida and subsequently as E. cloacae) strain UW5 (IPDC Ec_UW5). The structural model of IPDC Ec_UW5 was published by Schutz et al. (2003), and the authors demonstrated that IPDC Ec_UW5 is a homotetrameric enzyme in which each monomer has defined domains involved in the binding of both the substrate and the cofactors (Mg2+ and ThDP). Using the MULTALIN tool (Corpet, 1988), IPDC sequences from strain C1, E. ludwigii UW5, and the other Pantoea strains reported in Table 1 were aligned. This analysis demonstrated that several amino acids that are essential for the activity of IPDC Ec_UW5 are conserved or conservatively replaced in all Pantoea strains (Figure 2). In detail, 90% of the active site residues of IPDC Ec_UW5 are conserved in IPDC Pa_C1, suggesting that the two enzymes might have a similar catalytic mechanism.
Organization of the IPDC Coding Region
DNA sequence analysis of the genomic region surrounding the ipdC gene from P. agglomerans C1 revealed the presence of two ORFs encoding a 330-amino acid protein annotated as L-glyceraldehyde 3-phosphate reductase (ORF1) and a 322-amino acid protein with high homology to glucokinase (ORF2), respectively (Figure 3). Interestingly, a similar genetic structure occurs in E. ludwigii UW5 (Coulson and Patten, 2015) and Pantoea species YR343 (Garcia et al., 2019; Figure 3), as well as in 45 of the 50 P. agglomerans genomes sequenced so far. In the latter case, the identity over the entire 4 kb sequence of the ORF1-ipdC-ORF2 gene cluster was between 88 and 100%, with a mean value of 99% (Figure 4, lane A). Surprisingly, a high sequence identity (mean value = 98%) was also observed (Figure 4, lanes Cg-Fg) when comparing the DNA sequences of all P. agglomerans genes of the IPyA pathway, that is, the genes encoding aminotransferase, IPDC, and IAAld dehydrogenase (peg.1678, peg.1955, peg.576, and peg.879 in strain C1; Figure 1). The remarkable evolutionary conservation of the IPyA pathway among the members of the P. agglomerans species was also confirmed by analyzing the variability at the protein sequence level, which varied between 1 and 2%, and the kernel density plot (Figure 4, lanes Cp-Fp). A similar observation was made when analyzing the ipdC promoter region, which, as shown by the density of the data in the violin plot reported in Figure 4 (lane B), is more conserved than the ipdC coding sequence.
Analysis of the ipdC Promoter
A search for functional motifs carried out with the BPROM annotation package (Softberry Inc., Mount Kisco, NY, United States) revealed the presence, in the promoter region upstream of the P. agglomerans C1 ipdC gene, of sequences that resemble the Escherichia coli RpoD (σ70) -10 and -35 elements, matching the E. coli consensus sequences at four of six nucleotides (-10) and five of six nucleotides (-35), respectively. Interestingly, the putative ipdC regulatory sites of C1 showed significant similarity to those predicted in the ipdC promoter of other Pantoea strains (YR343 and 299R) and E. ludwigii UW5 (Figure 5).
Inspection of the 5′ untranslated region upstream of the C1_ipdC start codon showed no detectable cis-regulatory element such as the inverted repeats identified in the ipdC promoter of A. brasilense (Vande Broek et al., 2005) or the two 18-bp consensus sequences (weak and strong TyrR boxes) recognized by TyrR, which regulates the expression of the ipdC gene in E. ludwigii UW5 (Coulson and Patten, 2015). Sequences with weak similarity to the TyrR consensus motif (TGTAAA-N6-TTTACA) were found in the ipdC promoter region of Pantoea strains, but they do not meet the minimum molecular requirements for TyrR-mediated regulation: the presence of a strong TyrR box, or of a weak box with an adjacent strong box, and the presence in the TyrR box of the G-C residues, essential for TyrR binding, spaced 14 bp apart (Figure 5). This evidence strongly supports the hypothesis that the transcription of the ipdC gene in P. agglomerans is not controlled by the TyrR regulatory system.

FIGURE 1 | Deduced pathways for IAA biosynthesis in P. agglomerans strain C1. The solid line indicates the principal pathway for IAA biosynthesis, and the dotted ones the pathways that are not fully supported by molecular evidence (low identity with known proteins; see Table 1 for details). The names of the pathways indicate the names of their first products. The names of the protein-encoding genes (peg) and the definitions of the encoded enzymes (EC numbers) are reported at the top and bottom of the arrows, respectively.
Selection of the Growth Medium for IAA Production
In preliminary experiments, it was observed that, in contrast to LB, when P. agglomerans C1 cells were grown on saline M9-glucose medium without Trp, no basal level of IAA was produced. Providing Trp (as inducer and precursor) at 4 mM concentration, no significant difference was observed in the final IAA titer (54 ± 0.9 mg L−1) when shifting from LB to M9-glucose medium. To further investigate the possibility of using a medium that has performance equivalent to LB but is free of animal-derived ingredients, the ability of C1 cells to grow and produce IAA on a simplified culture medium containing a saline phosphate buffer (M9 saline solution), a sugar as a carbon source, and yeast extract as a source of organic nitrogen, vitamins, and other growth factors was investigated. For this purpose, glucose or sucrose was used alternatively as the carbon source, in combination with two concentrations of yeast extract (5 or 10 g L−1). All four media were amended with Trp (4 mM), and LB-Trp was used as a control. The results reported in Figure 6 indicate that IAA production occurred in all tested conditions and, independently of the carbon source and the amount of yeast extract used, was higher in saline medium compared to the control medium (LB-Trp). In particular, the highest levels of biomass (64.1 ± 3.1 g [wet weight] L−1) and IAA (120.5 ± 0.9 µg mL−1) were obtained by cultivating the microorganism in the presence of both sucrose and yeast extract at a final concentration of 0.5% (wt/vol) (YES medium; Figure 6). Surprisingly, an increase in the organic nitrogen content, with double the amount of yeast extract [from 0.5 to 1% (wt/vol)], did not affect IAA production (80.13 ± 0.11 µg mL−1 in rYEG and 88.1 ± 0.44 µg mL−1 in rYES) or biomass yield (47.1 ± 2.5 and 48.7 ± 2.2 g [wet weight] L−1) (Figure 6).
Interestingly, there was no significant difference in the biomass yield when cells were grown on YEG or LB medium (40.8 ± 3.3 g [wet weight] L−1 and 40.3 ± 2.5 g [wet weight] L−1), but the IAA-specific productivity increased approximately 1.8-fold when shifting from LB to YEG (Figure 6). These results suggested that LB was not an optimal medium for IAA production by P. agglomerans strain C1.
FIGURE 2 | Sequence alignment of the peg.1955-encoded protein from P. agglomerans strain C1 and functionally characterized IPDC from E. ludwigii and other Pantoea strains. Residues in the IPDC sequence from E. ludwigii UW5 that have been identified by crystal structure analysis as being at the active site or involved in either ThDP or substrate binding are highlighted in black. In the alignment, the amino acid residues of these regions that are conserved among all or almost all of the different sequences are highlighted in black or gray.

FIGURE 4 | Distribution of sequence identity matches of P. agglomerans strain C1 to other P. agglomerans DNA and protein sequences related to the IPyA pathway. The violin plots show the distribution of sequence identities of the ipdC gene cluster (A), the ipdC promoter region (B), the IPDC gene (Cg) and protein (Cp), the aminotransferase gene (Dg) and protein (Dp), and the two IAAld dehydrogenase genes (Eg and Fg) and proteins (Ep and Fp).

FIGURE 5 | Alignment of relevant elements of the ipdC promoter sequence of E. ludwigii UW5, P. agglomerans C1, P. agglomerans 299R, and Pantoea species YR343. The conserved nucleotides in the -10 and -35 regions (upper panel) and putative TyrR binding sites (lower panel) are highlighted in black.

Overall Impact of the Carbon Source on the Metabolites Secreted by P. agglomerans C1

Results obtained in shake-flask experiments indicated that, in contrast to previous findings obtained with P. agglomerans 299R (Brandl and Lindow, 1997), the production of IAA by strain C1 was affected by the carbon source. For this reason, the effect of the carbon source on the production of IAA and IAA-related compounds was studied in more detail by analyzing the exometabolome of P. agglomerans C1 using an untargeted metabolomics approach. Of the 528 features detected in the exhausted growth medium when cells were cultured on YES or YEG amended with Trp, a total of 381 were more than twofold higher, and 94 were more than twofold lower, when sucrose was used as a carbon source. ChemRICH analysis showed that a total of 58 metabolite clusters were significantly different (P < 0.05) between YES and YEG (Figure 7 and Supplementary Table S1). For 27 of these clusters, the differences arose from an increased level of all the compounds in the cluster, and for another 10 clusters, the enriched molecules represented 75-95% of the total compounds of the cluster. As shown in Figure 7, in sucrose-grown cultures, the classes of metabolites with the most elevated levels were as follows: dipeptides and cyclic peptides; triterpenes; compounds belonging to new clusters 1, 21, and 49. Clustering analysis also revealed significant differences in the concentration levels of single metabolites belonging to the indole-oligopeptide clusters and to the flavonoid cluster. In the first two clusters, we observed an increase in some bisindole alkaloids, such as guaiaflavine (140-fold); an increase in some monoindole alkaloids, such as indole-3-carbinol (I3C; 4.3-fold) and IAA (2.2-fold); an increase in some amide-linked IAA-L-amino acid conjugates (IAA-aa), such as indole-3-acetyl-L-valine (39-fold) and indole-3-acetyl-L-leucine (2.4-fold); and a threefold decrease in aldehyde derivatives, such as IAAld and 1H-indole-3-carboxaldehyde (I3A) (Supplementary Table S2). In the flavonoid cluster, a significant increase in N-containing flavonoids, such as phyllospadine (54-fold increase), was observed. These results clearly indicate that, in P. agglomerans C1, the levels of secreted metabolites were significantly affected by the medium carbon source.
Optimization of the Growth Conditions in Bench-Top Fermenter
FIGURE 6 | Effect of culture medium on IAA production (mg L−1) and biomass concentration (g [wet weight] L−1). LB, YEG, and YES contained 5 g L−1 of yeast extract; in the reinforced media (rYEG and rYES) the concentration of this ingredient is doubled (10 g L−1). Differences in letters and symbols indicate that the values are significantly different (P < 0.05).

FIGURE 7 | ChemRICH analysis of the exometabolome of P. agglomerans C1 grown on YES vs. YEG medium. Red clusters are associated with higher outcomes, blue ones with lower outcomes.

In contrast with previous findings obtained with Enterobacter and Agrobacterium, in which the ipdC gene is under the control of TyrR and the accumulation of IAA occurs only after entry into the stationary phase (Ryu and Patten, 2008), no significant production of IAA was detected when P. agglomerans strain C1 was grown on LB medium, as well as on M9 glucose, and Trp was provided to resting cells or to stationary-phase cultures. To better understand the link between the cell physiological state and the biosynthesis of IAA in P. agglomerans, strain C1 was grown under controlled bioreactor conditions ensuring optimal oxygen uptake, temperature control, and high growth rates.
Analyzing the growth profile of C1 cultures grown at 30 °C on YES medium under aerobic conditions (DO level > 20% of saturation), a lag phase of 30 min was observed, followed by an acceleration phase (up to 1.5 h) during which the growth rate gradually increased (Figure 8). In the exponential phase, which reached its maximum at approximately 4 h, the specific growth rate was 1.63 h−1. Between the 4th and the 5.5th hour, the specific growth rate decreased by approximately 67% (from 1.63 to 0.53 h−1), and at approximately 5.5 h the culture entered the stationary phase (Figure 8). Interestingly, during the first 3.5 h of growth, the medium pH decreased slowly from 6.6 to 6.5, at a rate of approximately 0.028 units per hour (Figure 8). In YES medium, when the culture reached the end of the exponential phase (4 h), the pH started to decrease rapidly (at a rate of 0.87 units per hour) and reached a minimum value of 6.22 at 5 h. Later, at the beginning of the stationary phase, the medium pH increased from 6.22 up to 6.34 and remained constant up to the end of the fermentation (Figure 8). Variations in the medium pH between the 3.5th and the 5.5th hour occurred concurrently with changes in the oxygen consumption rate that were controlled by DO-dependent modifications of the agitation speed and the airflow using a closed-loop system. As shown in Supplementary Figures S1B,C, the oxygen demand suddenly increased between 4 and 4.5 h (agitation speed increased > 10% in 30 min, from 400 to 450 rpm) and then remained stable up to the beginning of the stationary phase (5.5 h).
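The specific growth rate quoted above follows from the usual exponential-growth relation mu = ln(x2/x1)/(t2 − t1); a sketch with illustrative numbers chosen to land near the reported 1.63 h−1 (these are not the measured values):

```python
import numpy as np

def specific_growth_rate(t1, x1, t2, x2):
    # mu = ln(x2/x1) / (t2 - t1), assuming exponential growth between t1 and t2.
    return (np.log(x2) - np.log(x1)) / (t2 - t1)

print(round(specific_growth_rate(1.5, 0.8, 4.0, 47.0), 2))  # ~1.63 per hour
```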
The growth of P. agglomerans strain C1 in the bioreactor increased the biomass yield about sixfold compared to shake-flask cultures, up to 292.5 ± 4.8 g [wet weight] L−1.
Influence of the Physiological State of the Inoculum on IAA Production
Cells collected at different time points during growth in the bioreactor were transferred into medium containing Trp, so as to evaluate the effect of the physiological state of the inoculated cells on IAA production. The growth was carried out in shake flasks at 30 °C, and IAA production was monitored during the first 2 h and at the end of the growth. In all tested conditions, no significant difference was observed in the initial rate of IAA production, which was approximately 42 ± 2 mg L−1 h−1. Surprisingly, at the end of the growth, a significant influence of the inoculum was observed on the IAA titer, which varied between 161.58 ± 4.91 mg/L (6 h old inoculum) and 263.33 ± 8.25 mg/L (4 h old inoculum; Figure 9).
Use of IAA-Enriched Excretome From P. agglomerans C1 in Plant Nursery
The efficacy of the exhausted culture medium, containing IAA and other secreted metabolites from P. agglomerans strain C1 and named IAA-E C1 (for IAA-enriched excretome from strain C1), was tested on a total of 4,540 plants of Prunus rootstock GF/677 and 1,080 plants of hazelnut cv. Fertile de Coutard. The experiments were carried out in a large-scale nursery farm (Vivai Piante Battistini, Cesena, Italy) according to the protocols used for the large-scale agamic propagation of fruit crop trees.
Plant Survival Experiments
The adventitious rooting-inducing activity of exometabolites from strain C1 was tested in vitro on two sets of 760 plantlets of rootstock GF677 treated with either a 3 µM IBA-K solution or an appropriate volume of IAA-E C1 to achieve a final auxin concentration of 1 µM. After 10 days of the rooting stage, when plantlets were transferred to 360-cell plug trays, the percentage of survival was 85% for IAA-E C1-treated plants and 97% for IBA-treated plants; after 1 month of ex vitro acclimation, this value was 95% for IAA-E C1-treated plants and only 80% for IBA-treated plants. Taken together, these values indicate an overall increase of about 3% in plant survival when bacterial metabolites were used as an alternative to IBA-K.
Ex vitro Experiments
After 20 days of ex vitro acclimatization, regardless of the inductive treatment, the percentage of rooting was 100%. However, the number of roots per rooted explant was significantly higher for IAA-E C1-treated plants (5.2 ± 0.5 roots per explant) compared to an average of 3.6 ± 0.5 roots for IBA-K-treated plants. The quality of the roots was also improved after the treatment with IAA-E C1 (Figure 10). No significant difference was detected in root elongation, which was approximately 14 mm for all plants, or in the plant survival percentage, which, after 3 weeks from the beginning of the experiments, was 85% ± 1%. Finally, the average total leaf area per plant was higher in IAA-E C1-treated plants (115 ± 2 mm2) than in IBA-K-treated plants (102 ± 8 mm2).
The induction of adventitious rooting by IAA-E C1 was also investigated on hazelnut microcuttings of cv. Fertile de Coutard using the same experimental protocol applied for the Prunus rootstock. This experiment was carried out on two distinct pools of 1,080 and 1,440 binodal cuttings that were treated with IBA-K (3 µM solution) and IAA-E C1 (a 1:1000 diluted solution with an indole auxin final concentration of 1 µM), respectively. The percentage of rooted cuttings and the number of roots per rooted explant were not significantly affected by the treatments, with the rooting ratio being between 64 and 75% and the root number between 1.1 ± 0.1 and 1.3 ± 0.2 (Table 2). The root length was significantly affected by C1 metabolites and increased about 1.3-fold after treatment with IAA-E C1 compared to IBA-K (Table 2). A similar positive effect was also observed for stem elongation and leaf area, which both increased about 1.4-fold in IAA-E C1-treated plants (Figure 11). It is worth mentioning that the treatment with C1 metabolites determined the development of adventitious roots that were not present in IBA-K-treated plants (Figure 12).
DISCUSSION
Indole-3-acetic acid production is widespread among plant growth-promoting bacteria and varies from species to species and also among strains belonging to the same species (Spaepen and Vanderleyden, 2011; Duca et al., 2014). Auxins and other plant growth-promoting metabolites obtained from non-pathogenic P. agglomerans strains can be successfully applied in agricultural systems as plant biostimulants. It has been demonstrated that P. agglomerans strain C1 produces IAA and siderophores and that the metabolites secreted by this strain can be utilized as biostimulants to improve the root surface area in tomato cuttings. This strain is also able to solubilize phosphates and can improve the use of rock phosphates by corn (Zea mays L.) and tomato (S. lycopersicum L.) (Saia et al., 2020).
The application of comparative genetics, metabolomics, and fermentation technology in this study allowed a more detailed analysis of the possibility of using this strain for the production of novel cell-free biostimulants. Whole-genome analysis demonstrated that P. agglomerans C1 has several genes connected with the production of IAA from Trp and all the genes of the IPyA pathway (Table 1 and Figure 1). In GenBank, there are several sequences encoding indolepyruvate decarboxylase, the key enzyme of the IPyA pathway, and homologous enzymes, but only in a few cases have the corresponding proteins been isolated and characterized for their biochemical properties. The 1,653 bp peg1955 gene from P. agglomerans C1 encodes a 551-amino acid protein, which shares high identity with well-characterized IPDCs from other Pantoea strains: 92% identity with the IPDC from P. agglomerans 299R (Brandl and Lindow, 1996) and 73% identity with IPDC1 from Pantoea species YR343 (Garcia et al., 2019; Table 1 and Figure 2). Interestingly, the results obtained in this study demonstrated that, with few exceptions, in most of the sequenced genomes of isolates belonging to the P. agglomerans species (45 of 50), the genes encoding enzymes of the IPyA pathway share an identity higher than 95% over the full length of the sequence (Figure 4). Such unusually high sequence conservation among bacteria occurring in diverse natural environments has never been reported before for IAA genes of other species. This is strong evidence of the evolutionary importance of this functional trait for the interaction between P. agglomerans and host plants. Molecular data presented in this work also indicate that strain C1 produces IAA through the IPyA route, which is usually associated with beneficial bacteria, whereas the IAM pathway, linked to plant pathogens that use auxin synthesis as a virulence factor (Patten et al., 2013), is not present. These observations encourage the exploitation of strain C1 and its metabolites as a plant growth promoter.
Interestingly, comparative genomics also demonstrated that the non-coding region upstream of the ipdC gene is highly conserved among members of the P. agglomerans species (Figure 4) and that this gene is not regulated by TyrR (Figures 3, 5). TyrR is a transcriptional regulator that controls the expression of a number of genes involved in the biosynthesis, catabolism, and transport of aromatic amino acids in E. coli (Pittard et al., 2005) and regulates the expression of the ipdC gene in E. ludwigii UW5 (Coulson and Patten, 2015). The ipdC promoter from strain UW5 contains two TyrR binding motifs, a strong (highly conserved) and a weak Tyr-box, which are not present in Pantoea strains (Figure 5). This result agrees with the observation that in P. agglomerans C1 IAA production occurs in the exponential phase of growth and not in the stationary phase, as it does in species in which the ipdC gene is under the control of TyrR. The regulatory mechanisms that control expression of the genes involved in the IPyA pathway in P. agglomerans have not been elucidated yet, although there is evidence that multiple regulatory mechanisms may exist. For example, Brandl and Lindow (1997) demonstrated that in P. agglomerans strain 299R the ipdC gene was expressed at low levels when cells were cultured in in vitro liquid cultures (independently of the presence of Trp or the growth phase of the culture) and was fully induced only when cells were grown on plants under water stress. In contrast, IAA production in P. agglomerans C1 is Trp-dependent and is significantly affected by the carbon source (Figure 6) and the physiological state of the cells (Figure 9). Moreover, cultivation in a bench-top fermenter indicated that with strain C1 there is a correlation between the amount of IAA produced after induction with Trp and the preparation of the inoculum (Figure 9). This observation agrees with the findings of a previous work with recombinant E. coli cells, in which the production of aromatic compounds in enteric bacteria was enhanced with cells in the exponential phase of growth, when the availability of ATP is higher (Luziatelli et al., 2019c).
Untargeted metabolomics demonstrated that P. agglomerans C1, on medium containing sucrose as a carbon source, produces, together with IAA, other IAA-related compounds, such as I3C and IAA-leucine, which can either modulate the effect of an excess of IAA (Katz et al., 2015) or increase the availability of this compound (Sanchez Carranza et al., 2016). Other classes of compounds are also present in the exometabolome of P. agglomerans C1, such as peptides and cyclopeptides, that crosstalk with auxins or affect auxin transport or turnover.
Finally, the results obtained from studies run on a large scale on both one of the most widely used Prunus rootstocks and hazelnut plants confirmed that the metabolites secreted by indole auxin-producing cells of P. agglomerans C1 have a strong stimulating effect on adventitious rooting induction, adaptation, and plant growth in vitro. Interestingly, the excretome of strain C1 can also play relevant roles in root morphology (Figures 10, 12) and plant growth (Figure 11) in ex vitro conditions. It is worth noting that these effects, as shown by the control experiments with IBA-K, are only partly dependent on auxins, and with IAA-E C1 can be achieved at a molar concentration of IAA equivalents threefold lower than the synthetic auxin (1 vs. 3 µM). Both in vitro and ex vitro procedures have shown that IAA-E C1 improved the performance and quality of micropropagated plant production. It should be emphasized that the evaluation of the applicability of Pantoea metabolites in the plant nursery industry was carried out following standard operating procedures used for large-scale production. Best practices for plant production rely on careful control of the microbial contamination of the production lines and equipment, the growing media, and the plant containers. In this specific context, the use of microbial inocula that can be beneficial for some plant cultures and not appropriate for others is always questionable and potentially dangerous. In this respect, the use of liquid products, which are cell-free and contain only microbial metabolites, reduces contamination risks, is more compatible with automation systems, and facilitates the adoption of this technology in plant nurseries.
CONCLUSION
In conclusion, the use of metabolites secreted by selected strains of the P. agglomerans species can provide a valuable contribution to the production of innovative biostimulants that comply with current EU legislation. Further studies on gene expression will help to decipher the regulatory network that controls IAA biosynthesis in P. agglomerans and provide more insight into the mechanisms by which Pantoea metabolites elicit plant growth promotion.
DATA AVAILABILITY STATEMENT
All datasets presented in this study are included in the article/Supplementary Material.

FUNDING

LG was supported by a Ph.D. grant of the Minister for Education, University and Research (MIUR), Department of Excellence project SAFE-Med (Law 232/2016). The funder was not involved in the study design, collection, analysis, or interpretation of data, the writing of this article, or the decision to submit it for publication.
Lectures on DNA Codes
For $q$-ary $n$-sequences, we develop the concept of similarity functions that can be used (for $q=4$) to model a thermodynamic similarity on DNA sequences. A similarity function is identified by the length of a longest common subsequence between two $q$-ary $n$-sequences. Codes based on similarity functions are called DNA codes. DNA codes are important components in biomolecular computing and other biotechnical applications that employ DNA hybridization assays. The main aim of these lecture notes is to discuss lower bounds on the rate of optimal DNA codes for a biologically motivated similarity function called block similarity and for the conventional deletion similarity function used in the theory of error-correcting codes. We also present constructions of suboptimal DNA codes based on the parity-check code detecting one error in the Hamming metric.
Introduction and Biological Motivation
Single strands of DNA are, abstractly, (A, C, G, T)-quaternary sequences, with the four letters denoting the respective nucleic acids: adenine (A), cytosine (C), guanine (G), and thymine (T). Strands of a DNA sequence are oriented; for instance, X = AACG is distinct from Y = GCAA. Furthermore, DNA is ordinarily double stranded: each sequence X, or strand, occurs with its reverse complement X′, with reversal denoting that the sequences of the two strands are oppositely oriented relative to one another, and with complementarity denoting that the allowed pairings of letters, opposing one another on the two strands, are (A, T) or (C, G), the canonical Watson-Crick pairings. For instance, the two sequences X = AACG and X′ = CGTT are reverse complements of one another. Obviously, for any strand X, we have (X′)′ = X.
Whenever two, not necessarily complementary, oppositely directed DNA strands "mirror" one another, they are capable of coalescing into a DNA duplex, which is based on hydrogen bonds between some pairs of nucleic acids. Namely, the pair (A, T) forms two bonds, the pair (C, G) forms three bonds, and any other pair is called a mismatch because it does not form any bond. The process of forming DNA duplexes from single strands is referred to as DNA hybridization. The greatest energy of DNA hybridization (the greatest stability of a DNA duplex) is obtained when the two sequences are reverse complements of one another and the DNA duplex formed is a Watson-Crick (WC) duplex. However, there are many instances when the formation of non-WC duplexes is energetically favorable. The energy of DNA hybridization (the stability of a DNA duplex) E(X, Y) of two single DNA strands X and Y is, to a first approximation, measured by the longest length of a common subsequence (not necessarily contiguous) of either strand and the reverse complement of the other [1]. For two mutually reverse complementary strands X and X′ of length n, this measure plainly equals their length n, i.e., the maximum number of Watson-Crick bonds (complementary letter pairs) which may be formed between two oppositely oriented strands:

E(X, X′) = n.    (1.1)

For instance, if X = AACG and X′ = CGTT, then E(X, X′) = 4.
A DNA code X is a collection of N single-stranded DNA sequences (codewords) of fixed length n where each strand occurs with its reverse complement and no strand in the code equals its reverse complement [1,3], i.e., if X ∈ X, then X′ ∈ X and X′ ≠ X. In DNA hybridization assays, the general rule is that the formation of WC duplexes is good, but the formation of non-WC duplexes is bad. A primary goal of DNA code design is to ensure that a fixed temperature can be found that is well above the melting point of all non-WC duplexes and well below the melting point of all WC duplexes that can form from strands in the code. Thus the formation of any WC duplex must be significantly more energetically favorable than all possible non-WC duplexes. Note [1] that for biotechnical applications, the code length n, 10 ≤ n ≤ 40, is experimentally accessible and that codes with up to N = 10^9 codewords could soon be called for.
The following practical issue was the origin of the concept of a DNA code. Assume that we have p types of some molecular objects and p pools. Each pool contains many identical copies (clones) of the corresponding object. We need to perform an experiment over all these pools. Since each experiment is expensive, we are interested in joining these pools into one big metapool and performing only one experiment over this metapool. Then we face the problem of singling out some copies of each object from this mixture for analyzing the experiment results.
For this purpose, there exists a method in which codewords of a DNA code X of size N, where N = 2p is an even number, are used as tags. We fix any p codewords X(1), . . . , X(p) of X, which are called capture tags, and the corresponding reverse complementary codewords X′(1), . . . , X′(p), called address tags. Modern technologies allow us to generate many copies of each tag and mark each molecular object by the corresponding tag. Then a metapool is created and an experiment is performed. We assume that these processes do not change the capture tags.
After this, a solid support is taken. It is divided into p separated zones. Many copies of an address tag X′(i) are immobilized onto the corresponding i-th zone, which physically segregates them. Then the support is placed into the metapool. This process is illustrated in Fig. 1.
Each pair of DNA sequences (codewords of DNA code X) in a pool may form a duplex, except the immobilized address tags. In particular, any capture tag X(i) may form a duplex with an address tag X′(j). In this case, the corresponding object of the i-th type finds itself settled on the j-th zone of the support. Since there are many copies of each object and many copies of each address tag, one can finally find any type of object settled on the j-th zone for any j = 1, . . . , p.
Let the stability function E express the melting temperature of a duplex. Assume that for an index j ∈ {1, 2, . . . , p} a certain temperature range separates the large value E(X(j), X′(j)) from the small values E(X(i), X′(j)) for i ≠ j and the small values E(X(i), X(j)) = E(X(i), (X′(j))′) for any i and j. This means that there exists a temperature range at which all duplexes on the j-th zone melt except those which are formed by X(j) and X′(j). Finally, only the objects of the j-th type will be settled on the corresponding zone, and that separates them from the other types, see Fig. 2. Whenever this condition holds for all values j, we are able to separate all types of objects.
Figure 1: a metapool with capture tags X(i) and address tags X′(i).

Figure 2: separation of objects on the j-th zone of the support.

The mathematical analysis of DNA hybridization is based on the concept of similarity functions that can be used to model a thermodynamic similarity on single-stranded DNA sequences. For two quaternary n-sequences X and Y, the longest length of a sequence occurring as a (not necessarily contiguous) subsequence of both is called the deletion similarity S_λ(X, Y) between X and Y. We supposed [1,3] that the deletion similarity S_λ(X, Y) identifies the number of base pair bonds in a hybridization assay between X and the reverse complement of Y, i.e., the energy of DNA hybridization E(X, Y′) satisfying (1.1) is defined as follows:

E(X, Y′) ≜ S_λ(X, Y).    (1.2)

Let D, 1 ≤ D ≤ n − 1, be a fixed integer. A DNA code X is called a DNA code of distance D based on deletion similarity or, briefly, an (n, D)-code [1,3] if the deletion similarity

S_λ(X, Y) ≤ n − D − 1 for any X, Y ∈ X, X ≠ Y.    (1.3)

Definition (1.2) and condition (1.3) mean that the energy of DNA hybridization E(X, Y′) ≤ n − D − 1, i.e., any strand X ∈ X and the reverse complement of any other strand Y ∈ X can never form ≥ n − D base pair bonds in a hybridization assay. In the theory of deletion-correcting codes, condition (1.3), by itself, specifies codes capable of correcting any combination of D deletions [5].
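Since the deletion similarity S_λ is simply the length of a longest common subsequence, it can be computed by the textbook dynamic program. The following Python sketch (an illustration added here, not part of the original notes) computes S_λ and checks condition (1.3) for a candidate code:

# Illustrative sketch: S_lambda(X, Y) is the length of a longest common
# subsequence (LCS), computed by the standard O(n^2) dynamic program.
def deletion_similarity(x: str, y: str) -> int:
    n, m = len(x), len(y)
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[n][m]

def is_nD_code(code: list, n: int, D: int) -> bool:
    """Check condition (1.3): S_lambda <= n - D - 1 for all distinct pairs."""
    return all(deletion_similarity(x, y) <= n - D - 1
               for i, x in enumerate(code) for y in code[i + 1:])

assert deletion_similarity("AACG", "CGTT") == 2   # common subsequence "CG"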
For example, the DNA code X displayed in (1.4) is an (n, D)-code of length n = 4 and distance D = 1 because n − D − 1 = 2 and the sequence Z = AT of length 2 is the longest common subsequence between any pair of strands in DNA code X.

In paper [2], we introduced the concept of a common block subsequence, namely: a common subsequence Z of sequences X and Y is called a common block subsequence if any two consecutive elements of Z which are consecutive in X are also consecutive in Y and vice versa. For two quaternary n-sequences X and Y, the longest length of a sequence occurring as a common block subsequence of both is called a block similarity between X and Y. For example, the sequence Z = AT of length 2 is also the longest common block subsequence between any pair of strands in DNA code (1.4). Thus, DNA code (1.4) can be considered as a DNA (4, 1)-code based on block similarity.
The first conventional issue of coding theory [8] for DNA codes is to obtain a lower random coding bound on the rate of DNA codes and, hence, to identify the values of the distance fraction D/n for which the DNA code size grows exponentially as n increases. The given problem is more difficult than the corresponding problem for deletion-correcting codes. For instance, we cannot apply the best known random coding bounds [9] on the rate of deletion-correcting codes because these bounds were proved for codes which are not invariant under the reverse complement transformation. The second conventional issue of coding theory for DNA codes is to present constructions of DNA codes. The aim of our lecture notes is to discuss bounds and constructions for DNA codes based on the deletion and block similarities, which have a good biological motivation to model a thermodynamic similarity on DNA sequences [2]. We will study q-ary DNA codes, which are generalizations of quaternary DNA codes.
Notations, Definitions and Examples
The symbol ≜ denotes definitional equalities and the symbol [n] ≜ {1, 2, . . . , n} denotes the set of integers from 1 to n. Let q = 2, 4, . . . be a fixed even integer, A ≜ {0, 1, . . . , q − 1} be the standard alphabet of size |A| = q, and ⌊u⌋ (⌈u⌉) denote the largest (smallest) integer ≤ u (≥ u). Introduce the binary entropy function

h_q(u) ≜ −u log_q u − (1 − u) log_q(1 − u), 0 < u < 1.

Consider two arbitrary q-ary n-sequences x ≜ (x_1, x_2, . . . , x_n) ∈ A^n and y ≜ (y_1, y_2, . . . , y_n) ∈ A^n. In what follows, we will denote by the symbol S = S(x, y) an arbitrary symmetric function satisfying

0 ≤ S(x, y) = S(y, x) ≤ n, S(x, x) = n,    (2.1)

and called [1] a similarity function. For instance, the additive similarity function S_α(x, y) is the number of positions in which x and y coincide. The function S_α(x, y) can be called the Hamming similarity because n − S_α(x, y) is the well-known Hamming distance function (metric) applied in the theory of error-correcting codes [8].
Let ℓ ∈ [n] and m = 1, 2, . . . , ℓ. By the symbol z ≜ (z_1, z_2, . . . , z_ℓ) we will denote a common subsequence of length |z| ≜ ℓ between x and y. By definition, the empty subsequence of length |z| ≜ 0 is a common subsequence between any sequences x and y.

Definition 1. [5]. Let S_λ(x, y), 0 ≤ S_λ(x, y) ≤ n, denote the length |z| of a longest common subsequence z between sequences x and y. The number S_λ(x, y) is called a deletion similarity between x and y. Evidently, the function S_λ = S_λ(x, y) satisfies (2.1).

Definition 2. [2]. A common subsequence z = (z_1, z_2, . . . , z_ℓ), 2 ≤ ℓ ≤ n, is called a common block subsequence of length |z| ≜ ℓ between x and y if any two consecutive elements z_m, z_{m+1}, m = 1, 2, . . . , ℓ − 1, which are consecutive (separated) in x are also consecutive (separated) in y and vice versa. By definition, any common subsequence z of length |z| = 0 or |z| = 1 is a common block subsequence. Let S_β(x, y), 0 ≤ S_β(x, y) ≤ n, denote the length |z| of a longest sequence occurring as a common block subsequence z between sequences x and y. The number S_β(x, y) is called a block similarity between x and y. Obviously, S_β = S_β(x, y) satisfies (2.1) and S_β(x, y) ≤ S_λ(x, y).

Definition 3. [1,3]. If q = 2, 4, . . ., then x̄ ≜ (q − 1) − x is called the complement of a letter x ∈ A. For a sequence x = (x_1, x_2, . . . , x_{n−1}, x_n) ∈ A^n, we define its reverse complement x̃ ≜ (x̄_n, x̄_{n−1}, . . . , x̄_2, x̄_1) ∈ A^n. Obviously, if y ≜ x̃, then ỹ = x for any x ∈ A^n. If x̃ = x, then x is called a self reverse complementary sequence. If x̃ ≠ x, then the pair (x, x̃) is called a pair of mutually reverse complementary sequences.

Consider, for instance, a pair of binary sequences x and y with S_α(x, y) = 2. The deletion similarity S_λ(x, y) = 6 because the 6-sequence z ≜ (0, 1, 0, 1, 1, 0) = (x_1, x_2, x_3, x_4, x_5, x_6) = (y_2, y_3, y_4, y_6, y_7, y_8) is the longest sequence occurring as a common subsequence between x and y. The block similarity S_β(x, y) = 5 because a sequence of length 5 is the longest sequence occurring as a common block subsequence between x and y. As a second example, let (x, y), y ≜ x̃, be a pair of mutually reverse complementary binary 10-sequences with S_α(x, y) = 4. The deletion similarity S_λ(x, y) = S_λ(x, x̃) = 8 because the self reverse complementary sequence z ≜ (0, 0, 0, 0, 1, 1, 1, 1) = z̃ = (x_1, x_4, x_5, x_6, x_7, x_8, x_9, x_{10}) = (y_1, y_2, y_3, y_4, y_5, y_6, y_7, y_{10}) is the longest sequence occurring as a common subsequence between x and y ≜ x̃. The block similarity S_β(x, y) = S_β(x, x̃) = 6 because a self reverse complementary sequence of length 6 is the longest sequence occurring as a common block subsequence between x and y = x̃.
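The block similarity of Definition 2 can also be computed by a quadratic-time dynamic program. The following Python sketch is an added illustration; the chain formulation is our own restatement of Definition 2, treating a common block subsequence as a chain of matching positions in which each consecutive matched pair is either contiguous in both sequences or separated in both:

def block_similarity(x: str, y: str) -> int:
    """S_beta(x, y): length of a longest common block subsequence.

    best[i][j] = length of the longest valid chain whose last match pairs
    x[i-1] with y[j-1]; a predecessor match (i', j') must satisfy either
    (i', j') = (i-1, j-1)   (consecutive in both sequences), or
    i' <= i-2 and j' <= j-2 (separated in both sequences)."""
    n, m = len(x), len(y)
    best = [[0] * (m + 1) for _ in range(n + 1)]
    # pmax[i][j] = max of best[a][b] over a <= i, b <= j (prefix maxima)
    pmax = [[0] * (m + 1) for _ in range(n + 1)]
    answer = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                contiguous = best[i - 1][j - 1]          # extend a block
                separated = pmax[i - 2][j - 2] if i >= 2 and j >= 2 else 0
                best[i][j] = 1 + max(contiguous, separated)
                answer = max(answer, best[i][j])
            pmax[i][j] = max(best[i][j], pmax[i - 1][j], pmax[i][j - 1])
    return answer

# "ab" in x is contiguous, in y separated: only a single letter survives.
assert block_similarity("ab", "acb") == 1
# Here "a...b" is separated in both sequences: a valid block subsequence.
assert block_similarity("acb", "adb") == 2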
Definition 4. A set X = {x(1), x(2), . . . , x(N)} ⊆ A^n of an even size N is called a (q-ary) DNA code of length n and distance D based on a similarity function S if the following two conditions hold. (i) For any k ∈ [N], the reverse complement x̃(k) ∈ X and x̃(k) ≠ x(k). In other words, X is a collection of N/2 pairs of mutually reverse complementary sequences. (ii) For any k, k′ ∈ [N], where k ≠ k′, the similarity S(x(k), x(k′)) ≤ n − D − 1. We will also say that code X is a DNA code of length n, distance D and similarity n − D − 1.
For q = 4, a biological motivation of (n, D)-codes based on the deletion similarity S_λ = S_λ(x, y) was suggested in [1]. If only condition (ii) is retained, then an (n, D)-code based on deletion similarity is a code of length n capable of correcting any combination of ≤ D deletions [5]. A biological motivation of quaternary DNA codes based on the block similarity S_β = S_β(x, y) was suggested in [2].
For given n and D, we denote by N_q(n, D) the maximal size of (n, D)-codes. If d, 0 < d < 1, is a fixed number, then

R_q(d) ≜ lim sup_{n→∞} (log_q N_q(n, ⌊dn⌋)) / n

is called a rate of (n, ⌊dn⌋)-codes.
We will use notations with upper indices N_q^λ(n, D), R_q^λ(d) and N_q^β(n, D), R_q^β(d) for the corresponding parameters of DNA codes based on the similarity functions S_λ and S_β. From the inequality S_β(x, y) ≤ S_λ(x, y) between the considered similarity functions it follows that N_q^λ(n, D) ≤ N_q^β(n, D) and R_q^λ(d) ≤ R_q^β(d).

Remark 2.1. An upper bound on N_q^λ(n, D) follows from the corresponding results [5,10,11] (see also [6], p. 272) obtained for codes capable of correcting any combination of ≤ D deletions.
Remark 2.2. One can easily understand that the conventional Hamming bound on the size of block codes with distance D + 1 is a trivial upper bound on N_q^β(n, D). For D = 1 an improvement of this trivial bound is given by

Theorem 2.1. N_q^β(n, 1) ≤ (q^{n−1} + q)/2.

Proof of Theorem 2.1. Consider an arbitrary q-ary DNA code X = {x(k), k ∈ [N]} of length n, distance D = 1 and block similarity n − 2. For each codeword x(k), there exist one or two tail subsequences of length n − 1 obtained by deletion of the first or the last element of x(k). Let X contain N_1 (N_2 = N − N_1) codewords which yield one (two) tail subsequences of length n − 1. Obviously, N_1 ≤ q. From item (ii) of Definition 4, it follows that there are N_1 + 2N_2 distinct tail subsequences of length n − 1. Thus one can write N_1 + 2N_2 ≤ q^{n−1} and N_1 ≤ q. These two inequalities lead to N = N_1 + N_2 ≤ (q^{n−1} + q)/2. Theorem 2.1 is proved.
Suboptimal DNA Codes for Distance D = 1
In this section, we assume that n is a number divisible by q, where q = 2, 4, . . . is an even number. Hence, n is an even number as well. We also recall that the complement of a letter a ∈ A ≜ {0, 1, . . . , q − 1} is defined as ā ≜ (q − 1) − a ∈ A. Therefore, the complement of ā is a for any a ∈ A. We say that a codeword x ∈ A^n satisfies the parity-check condition if the arithmetic sum of its elements is a number divisible by q. Let M_q(n) denote the set of all these codewords:

M_q(n) ≜ {x ∈ A^n : x_1 + x_2 + · · · + x_n ≡ 0 (mod q)}.

Any subset T ⊆ M_q(n) is called a parity-check code. The set M_q(n) is the optimal code of size q^{n−1} detecting one error in the Hamming metric [8]. It is called the maximal parity-check code. We will construct suboptimal DNA codes for distance D = 1 which are subcodes of M_q(n).
Obviously, for each codeword x ∈ M_q(n), its reverse complement x̃ ∈ M_q(n).
Formulations of Results
In Sect. 3.2, we prove

Theorem 3.1. Let n = qk, k = 1, 2, . . . . Then there exists a q-ary DNA code of length n, distance D = 1, block similarity n − D − 1 = n − 2 and size N, where N = (q^{n−1} + q)/2 whenever k is odd.

Remark 3.1. If n = qk, where k = 1, 3, 5, . . . is an arbitrary odd number, then Theorem 2.1 means that the construction of Theorem 3.1 is optimal. If q is fixed and n → ∞, then Theorem 2.1 means that the construction of Theorem 3.1 is asymptotically optimal.
Example 3.1. For n = q = 4, the construction of the optimal DNA code from Theorem 3.1 is illustrated by the following table, which contains 4^3 = 64 codewords satisfying the parity-check condition, namely: for each codeword, the sum of its elements is a number divisible by 4. These codewords are written as (1/2) · 4^3 = 32 pairs of mutually reverse complementary codewords. Any row of the table consists of 1, 2, or 4 pairs. In any row, the first (second) codewords are obtained as consecutive left (right) cyclic shifts of the first (second) codeword of any fixed pair of the row. If we eliminate from the table all 15 pairs from the second and fourth columns of the table, then one can easily check that the remaining 17 mutually reverse complementary pairs constitute a quaternary DNA code X of length n = 4, size N = 2 · 17 = 34, block distance D = 1 and block similarity n − D − 1 = 2. We mark by underlining the pairs of codewords (there are 10 such pairs) from code X which have pairwise deletion similarities ≤ 2. They constitute a quaternary DNA code of length n = 4, size N = 2 · 10 = 20, deletion distance D = 1 and deletion similarity n − D − 1 = 2. This means that the maximal size N_4^λ(4, 1) ≥ 20.
In Sect. 3.3, we prove

Theorem 3.2. Let n = qk, where q = 2, 4, . . . is an even number and k = 1, 3, . . . is an odd number. Let there exist a parity-check code T correcting single deletions, i.e., T ⊂ M_q(n) and the deletion similarity S_λ(x, y) ≤ n − 2 for any x, y ∈ T, x ≠ y. Then there exists a DNA (n, 1)-code T′ ⊂ M_q(n) of size |T′| ≥ |T|.
We will use the following construction [10] of a parity-check code T correcting single deletions.

a) Consider a partition of the set A^n into q subsets

M_1(β) ≜ {x ∈ A^n : x_1 + x_2 + · · · + x_n ≡ β (mod q)}, β = 0, 1, . . . , q − 1.

In particular, the maximal parity-check code M_q(n) = M_1(0).

b) For each x ∈ A^n, we introduce a binary sequence (α_2, . . . , α_n), where α_i ≜ 1 if x_i ≥ x_{i−1} and α_i ≜ 0 otherwise. Consider a partition of the set A^n into n subsets M_2(γ), γ = 0, 1, . . . , n − 1, where

M_2(γ) ≜ {x ∈ A^n : Σ_{i=2}^{n} (i − 1) α_i ≡ γ (mod n)}.

c) The intersection of the two partitions defined in items a) and b) yields a partition of the set A^n into nq subsets having the form T(β, γ) ≜ M_1(β) ∩ M_2(γ). One can prove [10] that every subset of this partition is a code correcting single deletions. Hence, the size of a maximal code correcting single deletions is at least q^n/(nq) = q^{n−1}/n.
If we fix β = 0, then we obtain a partition of the set M_q(n) into n subsets of the form T(0, γ), 0 ≤ γ ≤ n − 1. Each of these subsets can be applied as a parity-check code T for Theorem 3.2. If we choose a code T(0, γ) having the maximal size, |T(0, γ)| ≥ q^{n−1}/n, then we obtain the following lower bound on the maximal size of DNA (n, 1)-codes: N_q^λ(n, 1) ≥ q^{n−1}/n.
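The following Python sketch illustrates the partition of items a)-c). It assumes the standard Tenengolts construction for q-ary single-deletion codes, with the associated binary sequence α_i = 1 exactly when x_i ≥ x_{i−1}; the helper names are ours:

from itertools import product

def tenengolts_class(x: tuple, q: int):
    """Return (beta, gamma), the checksums of items a) and b).

    Assumes the standard Tenengolts construction: alpha_i = 1 iff
    x_i >= x_{i-1} (1-based i = 2..n), with checksums taken mod q
    and mod n, respectively."""
    n = len(x)
    beta = sum(x) % q
    # 0-based loop index i corresponds to 1-based position i+1,
    # so the weight (position - 1) equals i.
    gamma = sum(i * (1 if x[i] >= x[i - 1] else 0)
                for i in range(1, n)) % n
    return beta, gamma

n, q = 4, 4
sizes = {}
for x in product(range(q), repeat=n):
    key = tenengolts_class(x, q)
    sizes[key] = sizes.get(key, 0) + 1

# The nq classes partition A^n, so the largest class T(beta, gamma)
# contains at least q^n/(nq) = q^{n-1}/n codewords.
assert max(sizes.values()) >= q ** (n - 1) // n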
Proof of Theorem 3.1
The following important property of the maximal parity-check code M_q(n) takes place.

Lemma 3.1. Let n = qk, where k = 1, 3, . . . is an odd number. Then the code M_q(n) does not contain self reverse complementary codewords.

Proof of Lemma 3.1. By contradiction, let x ∈ M_q(n) and x̃ = x. Then x_{n−i+1} = q − 1 − x_i, i = 1, 2, . . . , n/2, and the sum

x_1 + x_2 + · · · + x_n = (q − 1) · n/2 = q · (q − 1)k/2

is a number divisible by q only if (q − 1)k/2 is an integer. Since both q − 1 and k are odd, this contradicts the condition k = 1, 3, . . . .
Lemma 3.1 is proved.
For any sequence x ∈ A^n, we define its first left cyclic shift T_1(x) ≜ (x_2, x_3, . . . , x_n, x_1). Introduce the (k + 1)-th left cyclic shift T_{k+1}, k = 1, 2, . . ., i.e., T_{k+1}(x) ≜ T_1(T_k(x)). In a similar way we define the k-th right cyclic shift T_k, where k < 0. Let the symbol T_0 be the identity operator. For indices i, k ∈ [n], we define the index i + k ∈ [n] as the corresponding sum modulo n.
Obviously, the i-th element of T_k(x) has the form (T_k(x))_i = x_{i+k}.
For x ∈ A^n, let O(x) ≜ {T_k(x), k = 0, 1, . . . , n − 1} denote the orbit of x under cyclic shifts, and let ℓ(x) ≜ |O(x)| denote its size. In addition, it is easy to see that the cyclic shifts interact with the reverse complement transformation according to identity (3.2).

Lemma 3.2. Let x̃ ∈ O(x). Then the orbit size ℓ(x) is an even number and the orbit O(x) contains exactly two self reverse complementary sequences.

Proof of Lemma 3.2. Since x̃ ∈ O(x), there exists an integer k, k = 0, 1, . . . , n − 1, for which the k-th cyclic shift T_k(x) = x̃. Hence, for any i = 1, 2, . . . , n, the i-th coordinate of T_k(x) equals x̄_{n−i+1}. Let k be an odd number. Since n is an even number, we put the integer t ≜ (n + 1 − k)/2 and consider the sequence y ≜ T_t(x) ∈ O(x). Taking into account the above properties of x, one can easily check that y is self reverse complementary, ỹ = y. We also have the ℓ-th shift T_ℓ(y) = y because y ∈ O(x). This means that the orbit size ℓ = ℓ(x) = ℓ(y) is an even number, i.e., ℓ = 2m. On the other hand, let z ∈ O(x) and let s be an arbitrary integer such that T_s(z) is a self reverse complementary sequence. For any i ∈ [n], a coordinate computation shows that T_{2s}(z) = z. It follows that 2s is a number divisible by ℓ = 2m and s is a number divisible by m = ℓ/2. Therefore, the orbit O(x) contains exactly two self reverse complementary sequences, y ≜ T_t(x) and T_{ℓ/2}(y). The form (3.3) for mutually reverse complementary sequences follows from (3.2). Lemma 3.2 is proved.

Obviously, the size |X| = (1/2) · |O(x)| = ℓ/2 = 2k because for any y ∈ O(x), the s-th shift T_s(y) = y if and only if s is a number divisible by ℓ = 4k. In virtue of Lemma 3.2 and the equality ℓ/2 = 2k, the set X does not contain self reverse complementary codewords. From (3.3) it follows that for a codeword y = T_{ℓ/2−i}(x) ∈ X, the reverse complement ỹ = T_{ℓ/2+i}(x) ∈ X, i = 1, 3, . . . , (ℓ − 2)/2. Finally, Lemma 3.3 shows that the block similarity of code X does not exceed n − 2.
Lemma 3.4 is proved.
We divide the set M_q(n), n = qk, into four nonintersecting subsets G_i, i = 1, 2, 3, 4. The subset

G_1 ≜ {x ∈ M_q(n) : x = (a, a, . . . , a), a ∈ A}

has size |G_1| = q. The set G_1 is invariant under the reverse complement transformation and does not contain self reverse complementary codewords. The block similarity between any two codewords from G_1 is equal to zero. Therefore, G_1 satisfies the DNA code definition. We construct a required code X in the following way. 1a) The set G_1 is included in X. 1b) For each pair of mutually reverse complementary orbits O(x) and O(x̃), code X contains one-half of their codewords, of the form described in (3.3). Taking into account (3.2) and Lemma 3.3, it is easy to see that the code X is a DNA code of block similarity n − 2; its size follows from the orbit sizes. 2) Let q = 2^m and n = 2^{m+m′}, m′ ≥ 1. In this case, G_2 contains self reverse complementary orbits of size ℓ = 2, and the codewords x ∈ G_2 have the form

G_2 ≜ {x : x = (a, ā, a, ā, . . . , a, ā), a ∈ A}, |G_2| = q.
The set G_4 consists of mutually reverse complementary orbits O(x) and O(x̃).
We construct a required code X in the following way. 2a) The set G_1 is included in X. 2b) Elements of G_2 are not included in X. 2c) Code X contains one-half of the codewords from the set G_3, chosen according to Lemma 3.3. 2d) Code X contains one-half of the codewords from the set G_4, having the form described in item 1b). Obviously, X is a DNA code of block similarity n − 2, and its size is computed analogously. For any O(x) ∈ M_q^1(n), the orbit size ℓ(x) = n = 4k. Therefore, according to the construction described in item 1b) and Lemma 3.4, we obtain a DNA code X of block similarity n − 2 and the size stated in Theorem 3.1. Theorem 3.1 is proved.
Proof of Theorem 3.2
Let a sequence x ∈ A^n. We will say that an integer-valued vector n(x) = n ≜ (n_0, n_1, . . . , n_{q−1}), 0 ≤ n_a ≤ n, a ∈ A = {0, 1, 2, . . . , q − 1}, is a composition of x if n_a is equal to the number of entries of the symbol a ∈ A in x. The reverse complement transformation of a sequence x leads to the reverse transformation of its composition: n(x̃) = n̄ ≜ (n_{q−1}, . . . , n_1, n_0). In what follows, we will consider codewords x ∈ M_q(n) having compositions n for which

Σ_{a∈A} a · n_a ≡ 0 (mod q).    (3.4)
Lemma 3.5. If two codewords x, y ∈ M_q(n) have distinct compositions, then the deletion similarity S_λ(x, y) ≤ n − 2.

Proof of Lemma 3.5. By contradiction. Consider two arbitrary codewords x, y ∈ M_q(n) with deletion similarity S_λ(x, y) = n − 1. These codewords can be obtained by two distinct insertions into their common subsequence of length n − 1; since the element sums of x and y are both divisible by q, the two inserted symbols are congruent modulo q and hence equal. Therefore, x and y should have the same composition, which contradicts the condition of Lemma 3.5.
Lemma 3.5 is proved.

Lemma 3.6. Let n = qk, where k = 1, 3, . . . is an odd number. Then no composition n satisfying (3.4) is self reverse, i.e., n ≠ n̄.

Proof of Lemma 3.6. By contradiction. Let there exist a composition n satisfying (3.4) for which n = n̄. It means that n_a = n_{q−1−a}, a ∈ A, and the sum

2 Σ_{a∈A} a · n_a = Σ_{a∈A} [a + (q − 1 − a)] · n_a = (q − 1) n = q(q − 1)k.

In virtue of (3.4), the left-hand side is a number divisible by 2q, so (q − 1)k must be even. This contradicts k = 1, 3, . . ., because q − 1 is odd.
Lemma 3.6 is proved.
Let a subset T ⊂ M_q(n) be a code correcting single deletions, i.e., for any codewords x, y ∈ T, x ≠ y, the deletion similarity S_λ(x, y) ≤ n − 2. We will prove that there exists a DNA code T′ ⊂ M_q(n), |T′| ≥ |T|, having the same property.
Let T be a fixed code correcting single deletions. By Lemma 3.6, the compositions satisfying (3.4) split into disjoint pairs (n, n̄) with n ≠ n̄. We choose a set of compositions N that contains exactly one composition from each such pair and satisfies |{x ∈ T : n(x) ∈ N}| ≥ |T|/2 (otherwise, we replace N by its complement). Then

T′ ≜ {x : x ∈ T, n(x) ∈ N} ∪ {x̃ : x ∈ T, n(x) ∈ N}

is a DNA code of size |T′| ≥ |T|. From Lemma 3.5 it follows that for any codewords x, y ∈ T′ having distinct compositions, the deletion similarity S_λ(x, y) ≤ n − 2. From the construction of T′ it follows that any codewords x, y ∈ T′ having the same composition satisfy x, y ∈ T or x̃, ỹ ∈ T. And, therefore, in this case the deletion similarity is S_λ(x, y) = S_λ(x̃, ỹ) ≤ n − 2.
Theorem 3.2 is proved.
Formulations of Results
Theorem 4.1 presents lower bounds on the size N_q^λ(n, D) and rate R_q^λ(d) of DNA codes based on deletion similarity. Let d = d_q^λ, 0 < d_q^λ < (q − 1)/q, be the unique root of the corresponding equation; if 0 < d < d_q^λ, then the rate R_q^λ(d) > 0 and the lower bound of Theorem 4.1 holds.
One can easily understand that v(d) is calculated using a recurrent method starting from w_1 ≜ 2. (ii) If 0 < d < d_q^β, then the rate R_q^β(d) > 0 and the corresponding lower bound of Theorem 4.2 holds.
Random Coding Method for DNA Codes
In this section, we develop a general random coding method for DNA codes. Let S = S(x, y) be an arbitrary similarity function (2.1). For integers 0 ≤ s ≤ n, we define the two sets

P(n, s) ≜ {(x, y) ∈ A^n × A^n : S(x, y) = s},  P̃(n, s) ≜ {x ∈ A^n : S(x, x̃) = s}.    (4.7)

Consider two random sequences u and v with independent identically distributed components having the uniform distribution on A.
Obviously, the corresponding probability distributions of the random variables S(u, v) and S(u, ũ) have the form

Pr{S(u, v) = s} = |P(n, s)| / q^{2n},  Pr{S(u, ũ) = s} = |P̃(n, s)| / q^n.

Proof of Lemma 4.1. Let X = {x(1), x(2), . . . , x(2N)} be an arbitrary DNA code of length n and size 2N. Without loss of generality, we put the codeword x(N + k) ≜ x̃(k) for any k ∈ [N]. In virtue of this, code X satisfies condition (i) of Definition 4. Note that code X will satisfy condition (ii) of Definition 4 if for an arbitrary pair of codewords (x(k), x(k′)), k ≠ k′, the similarity S(x(k), x(k′)) ≤ n − D − 1. We will say that a pair of codewords (x(k), x(k + N)), k = 1, 2, . . . , N, is a D-bad pair in code X if there exists a codeword x(k′), k′ ≠ k, for which S(x(k), x(k′)) ≥ n − D. Otherwise, we will say that (x(k), x(k + N)), k = 1, 2, . . . , N, is a D-good pair in code X.
Inequality (4.12) means that, for the ensemble of q-ary codes X of length n and size 2Ñ, the expected number of D-bad pairs can be bounded from above. If we apply Lemmas 4.1 and 4.2 to a similarity function S(x, y), then we need to investigate the corresponding sets (4.7). For instance, consider the additive similarity S_α(x, y), which is defined as the number of positions i, i = 1, 2, . . . , n, where x_i = y_i. Let the corresponding sets (4.7) be P_α(n, s) and P̃_α(n, s). It is easy to see that the set P̃_α(n, s) is empty if s is odd. The sizes of the sets, s = 2, 4, . . ., are calculated as follows:

|P_α(n, s)| = q^n · C(n, s) · (q − 1)^{n−s},  |P̃_α(n, s)| = q^{n/2} · C(n/2, s/2) · (q − 1)^{(n−s)/2},

where C(n, s) denotes the binomial coefficient. Thus, for any u, 0 < u < 1, the ∩-convex function

p_α(u) ≜ lim_{n→∞} (1/n) log_q [ q^n · C(n, ⌈(1 − u)n⌉) · (q − 1)^{n − ⌈(1 − u)n⌉} ] = 1 + h_q(u) + u log_q(q − 1),

and the ∩-convex function p̃_α(u) = p_α(u)/2. Hence, applying Lemma 4.2, we get the following lower bound on the rate R_q^α(d) of DNA codes based on the additive similarity:

R_q^α(d) ≥ 1 − h_q(d) − d log_q(q − 1).

This bound coincides with the well-known Gilbert-Varshamov bound on the rate of q-ary error-correcting codes for the Hamming metric [8].
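As a quick numerical illustration of this Gilbert-Varshamov-type bound (added here for convenience), the following Python sketch evaluates the right-hand side for q = 4:

import math

def h_q(u: float, q: int) -> float:
    """Two-term entropy with base-q logarithms, as defined in Sect. 2."""
    return -u * math.log(u, q) - (1 - u) * math.log(1 - u, q)

def gv_rate_lower_bound(d: float, q: int) -> float:
    """Lower bound 1 - h_q(d) - d*log_q(q-1) on the rate R_q^alpha(d)."""
    return 1 - h_q(d, q) - d * math.log(q - 1, q)

for d in (0.05, 0.10, 0.20, 0.30):
    print(f"q=4, d={d:.2f}: R >= {gv_rate_lower_bound(d, 4):.4f}")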
In Sects. 4.3 and 4.4, we investigate the sizes of the sets (4.7) for the similarity functions S_λ and S_β. Applying this analysis, we prove Theorems 4.1 and 4.2 with the help of Lemmas 4.1 and 4.2.
Proof of Theorem 4.2
Let s, 1 ≤ s ≤ n, be an arbitrary integer, and let P_β(n, s) and P̃_β(n, s) denote the sets (4.7) for the block similarity S_β(x, y).
For a fixed sequence z = (z_1, z_2, . . . , z_s) ∈ A^s, we introduce the concept of its j-block partition

z = {b_1, b_2, . . . , b_{j−1}, b_j}, j = 1, 2, . . . , min{s, n − s + 1},    (4.13)

i.e., a partition of z into j nonempty blocks, where each block contains consecutive elements of z. Let x = (x_1, x_2, . . . , x_n) ∈ A^n be a fixed q-ary n-sequence. Definition 2 means that a block partition z of the form (4.13) is a block subsequence of x if z is a subsequence of x and all blocks {b_1, b_2, . . . , b_{j−1}, b_j}, consisting of consecutive elements of the sequence x, are separated in x. In addition, if a pair (x, y) ∈ P_β(n, s) (a sequence x ∈ P̃_β(n, s)), then there exists a block partition z which is a common block subsequence between x and y (x and x̃), i.e., each of the sequences x and y (x and x̃) contains the separated blocks {b_1, b_2, . . . , b_{j−1}, b_j} consisting of their consecutive elements.
Obviously, for any z ∈ A^s, the number of all its j-block partitions of the form (4.13) is the binomial coefficient C(s − 1, j − 1). The product (4.15)-(4.16) is an upper bound on the cardinality of the following set of q-ary n-sequences. These n-sequences are obtained by M = (n − s) − (j − 1) insertions of a fixed M-collection of q-ary letters (marbles) into N = j + 1 "spaces" generated by a fixed q-ary s-sequence z having a fixed j-block partition (4.13), namely: the space before b_1, the space after b_j, and the j − 1 inter-block spaces of (4.13), which are marked by a fixed (j − 1)-collection of separating q-ary letters (marbles). The given interpretation of formulas (4.15)-(4.16) leads to (4.14).
Lemma 4.4. The set P̃_β(n, s) is empty if s ≥ 1 is odd. If s ≥ 2 is even and an n-sequence x ∈ P̃_β(n, s), then there exist an integer j, j = 1, 2, . . . , min{s, n − s + 1}, and a self reverse complementary s-sequence z = z̃, |z| = s, of the form (4.13) which is a common block subsequence between x and x̃, and z has a self reverse complementary block partition, i.e., block b_1 = b̃_j, block b_2 = b̃_{j−1}, . . ., block b_{j−1} = b̃_2, and block b_j = b̃_1.
Proof of Lemma 4.4. Consider an arbitrary x ∈ P̃_β(n, s) and its reverse complement x̃. Let a sequence z ∈ A^m, m ∈ [s], be a block subsequence (BSS) of x. Then one can easily see that z is a BSS of x̃ if and only if its reverse complement z̃ is a BSS of x. This means that the following two statements are equivalent.
1. The set P̃_β(n, s) is empty if s is odd. If s is even and a block partition z, |z| = s, is a common BSS between x and x̃, then there exists a sequence z′ = z̃′ of length |z′| = |z| = s having a self reverse complementary block partition z′ which is a common BSS between x and x̃.
2. The set P̃_β(n, s) is empty if s is odd. If s is even and block partitions z, z̃ of length |z| = |z̃| = s are BSS of x, then there exists a sequence z′ = z̃′ of length |z′| = |z| = s having a self reverse complementary block partition z′ which is a BSS of x.
Statement (i) of Theorem 4.2 is proved.
Proof of Statement (ii) of Theorem 4.2. Let u, 0 < u < 1, be a fixed parameter. Define the function

E_q(u) ≜ lim_{n→∞} (1/n) log_q B(n, ⌈(1 − u)n⌉), 0 < u < 1.

Therefore, Lemma 4.2 gives a random coding bound on the rate R_q^β(d) of q-ary DNA (n, ⌊dn⌋)-codes based on the block similarity. One can easily check that the given lower bound on R_q^β(d) can be written as a maximization over v of a function F_q(v, d). The derivative of the binary entropy function h_q(v) is

h_q′(v) = log_q((1 − v)/v).

Thus, the partial derivative ∂F_q(v, d)/∂v can be computed explicitly, and for a fixed d, 0 < d < 1/2, the equation ∂F_q(v, d)/∂v = 0 is equivalent to equation (4.3). The binary entropy function h_q(v) is a ∩-convex function of the parameter v, 0 < v < 1. Hence, the formulas above yield Statement (ii) of Theorem 4.2.
Proof of Theorem 4.1
Let s, 0 ≤ s ≤ n, be an arbitrary integer and let

P_λ(n, s) ≜ {(x, y) : S_λ(x, y) = s},  P̃_λ(n, s) ≜ {x : S_λ(x, x̃) = s}

denote the sets from Lemma 4.1 for the deletion similarity. An upper bound on the size |P_λ(n, s)| is based on the following well-known [6,7] result.
Lemma 4.6. [6,7]. Let n and s be integers, 0 ≤ s ≤ n. For an arbitrary sequence y ∈ A^s, denote by B_q(y, n) the set of all sequences x ∈ A^n that include y as a subsequence, i.e., that can be obtained from y by n − s insertions. Then for fixed n and s, the size of B_q(y, n) does not depend on y and has the form

|B_q(y, n)| = Σ_{k=0}^{n−s} C(n, k) (q − 1)^k ≜ B_q(n, s).    (4.20)

Proof of Lemma 4.6. We will use induction over s. For s = 0 and s = 1, Lemma 4.6 is trivial. Assume that Lemma 4.6 is proved for all integers less than s ≥ 2. Consider an arbitrary s-sequence y = (y_1, y_2, . . . , y_s) and its (s − 1)-subsequence y′ ≜ (y_2, y_3, . . . , y_s). Divide the set B_q(y, n) into the sum of mutually disjoint sets B_q^k(y, n), k = 1, 2, . . . , n − s + 1, where the set B_q^k(y, n) is composed of the n-sequences x = (x_1, x_2, . . . , x_n) ∈ B_q(y, n) such that x_i ≠ y_1 for i = 1, 2, . . . , k − 1 and x_k = y_1. Obviously, any such sequence x belongs to the set B_q(y, n) if and only if the (n − k)-sequence (x_{k+1}, x_{k+2}, . . . , x_n) contains y′. In virtue of the induction hypothesis, the size

|B_q^k(y, n)| = (q − 1)^{k−1} |B_q(y′, n − k)| = (q − 1)^{k−1} B_q(n − k, s − 1),

i.e., for any k = 1, 2, . . . , n − s + 1, the size |B_q^k(y, n)| is the same for all s-sequences y. This means that the size |B_q(y, n)| does not depend on y as well. To complete the proof, we consider the s-sequence y = (0, 0, . . . , 0), for which the equality of Lemma 4.6 is trivial. Lemma 4.6 is proved.
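The independence of |B_q(y, n)| from y is easy to confirm by brute force for small parameters. The following Python sketch (an added illustration) counts supersequences directly and compares the count with formula (4.20):

from itertools import product
from math import comb

def is_subsequence(y: tuple, x: tuple) -> bool:
    it = iter(x)
    return all(ch in it for ch in y)

def count_supersequences(y: tuple, n: int, q: int) -> int:
    """|B_q(y, n)|: number of x in A^n containing y as a subsequence."""
    return sum(is_subsequence(y, x) for x in product(range(q), repeat=n))

def B(n: int, s: int, q: int) -> int:
    """Right-hand side of (4.20)."""
    return sum(comb(n, k) * (q - 1) ** k for k in range(n - s + 1))

n, s, q = 5, 3, 3
counts = {count_supersequences(y, n, q) for y in product(range(q), repeat=s)}
assert counts == {B(n, s, q)}   # the same count for every y, equal to (4.20)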
Lemma 4.7. The set P̃_λ(n, s) is empty if s is odd. If s is an even number and a sequence x ∈ P̃_λ(n, s), then there exists a self reverse complementary sequence z = z̃, |z| = s, which is a common subsequence between x and x̃.
The proof of Lemma 4.7 is omitted here because it can be easily obtained by an evident modification of our arguments used for Lemma 4.4. Lemmas 4.6 and 4.7 yield

|P_λ(n, s)| ≤ q^s · [B_q(n, s)]^2,  |P̃_λ(n, s)| ≤ q^{s/2} · B_q(n, s), 0 ≤ s ≤ n.
Crosstalk in concurrent repeated games impedes direct reciprocity and requires stronger levels of forgiveness
Direct reciprocity is a mechanism for cooperation among humans. Many of our daily interactions are repeated. We interact repeatedly with our family, friends, colleagues, members of the local and even global community. In the theory of repeated games, it is a tacit assumption that the various games that a person plays simultaneously have no effect on each other. Here we introduce a general framework that allows us to analyze “crosstalk” between a player’s concurrent games. In the presence of crosstalk, the action a person experiences in one game can alter the person’s decision in another. We find that crosstalk impedes the maintenance of cooperation and requires stronger levels of forgiveness. The magnitude of the effect depends on the population structure. In more densely connected social groups, crosstalk has a stronger effect. A harsh retaliator, such as Tit-for-Tat, is unable to counteract crosstalk. The crosstalk framework provides a unified interpretation of direct and upstream reciprocity in the context of repeated games.
Supplementary Figure 1: The blue circle depicts the state where the player cooperates (c) and the red circle depicts the state where the player defects (d) in the next game. After a game, depending on the action (c or d) of the co-player, the state changes according to the given probabilities. a | A stochastic reactive strategy is encoded by the tuple (p, q) denoting the probability to cooperate if the co-player in the previous round either cooperated or defected, respectively. b-d | The well-known strategies Tit-for-Tat (TFT), Stochastic Tit-for-Tat (STFT), and Generous Tit-for-Tat (GTFT; 0 < q < 1) implemented by stochastic two-state automata.
Supplementary Figure 2: Crosstalk in concurrent repeated games. Players (large circles) use reactive strategies implemented by a separate two-state automaton for each interaction partner. The current state is emphasized in bold font (panel 1). Two random players (here players 1 and 2) are selected to play a PD (Prisoner's Dilemma; panel 2). Crosstalk between independent automata within the involved players 1 and 2 happens with a small probability γ. Crosstalk might change the action in the following interaction. By chance, state D of the automaton implementing the interaction of player 1 with player 3 is copied to the automaton implementing the interaction of player 1 with player 2 (indicated by a blue arrow). Here player 1 defects (due to crosstalk) and player 2 cooperates (panel 3). The states of the automata are updated according to the player's strategy (panel 4). Player 1 plays TFT (Tit-for-Tat) and moves to state C. Player 2 plays TFT and moves to state D.

Supplementary Figure 3: The level of cooperation is determined by the frequency with which players are in the cooperative state, averaged over all players in the population. Full lines correspond to a crosstalk rate of γ = 0.05 and dotted lines correspond to a crosstalk rate of γ = 0.5. a-d | Since TFT (Tit-for-Tat) is not an error-correcting strategy, its cooperation frequency converges to zero for any γ > 0. Stochastic Tit-for-Tat (STFT) can secure a basic level of cooperation as it sometimes forgives defection. Across all population structures GTFT (Generous Tit-for-Tat) maintains a high level of cooperation. High crosstalk rates lead to a faster spreading of defective behavior for both TFT and STFT, whereas population structures with a low connectivity delay the spreading of defection (e.g., cycle or square lattice). All players use a given conditionally cooperative strategy except one random player who always defects (ALLD). The number of players is N = 16 (one of those is the ALLD player). Simulation results are averages over 10^4 realizations.

Supplementary Figure 4: Twenty-four erroneous conditional cooperators (STFT, p = 0.999, q = 0.001; blue framed nodes) and one ALLD (red framed node, placed in the center) or ALLC (yellow framed node) player populate a 5x5 lattice. The fill color of the nodes depicts the expected payoff of the players after 100, 1,000 and 2,000 games. a-b | In the absence of crosstalk (γ = 0.0), cooperative and defective behavior cannot spread. The erroneous ALLC (p = 0.999, q = 0.999) or ALLD (p = 0.001, q = 0.001) player only affects the payoff of its STFT neighbors. c | In the presence of crosstalk (γ = 0.5), cooperation spreads from the ALLC player via crosstalk to all STFT players. We assume that in the first round, the STFT players are equally likely to cooperate or to defect, which is their stationary cooperation frequency in a homogeneous population of STFT players. Parameter values: benefit b = 3, and cost c = 1.

Supplementary Figure 5: Stationary payoff of GTFT (p = 1, q = 1/3) and ALLD (0, 0) players versus the crosstalk rate in different population structures. One ALLD player is randomly placed on the graph, among N − 1 GTFT players. Full lines show numerically exact results for the average payoff of all GTFT players (blue) and of the ALLD player (red) in the steady state. Dotted lines show the steady-state payoff of individual players with a given distance to the ALLD player. Circles and crosses show the respective simulation results. The larger the distance of a GTFT player to an ALLD player, the less likely a player's payoff is affected by the ALLD player.
Players with distance 1 are adjacent to the ALLD player. a-d | On the cycle, the average payoff of the GTFT players exceeds the defector's payoff up to a crosstalk rate of γ ≈ 0.85, whereas for well-mixed populations the critical crosstalk rate is much lower, γ ≈ 0.41. The other two population structures exhibit crosstalk thresholds in between these two extremes. Parameter values: number of players N = 16 (one ALLD player), benefit b = 3, and costs c = 1. Simulation results are averages over 10^4 realizations.

Supplementary Figure 6: Mean time until a population of conditional cooperators returns to full cooperation after a single error. Higher crosstalk rates (γ) as well as probabilities to cooperate after defection (q) decrease the number of games needed to recover from an error such that the whole population returns to full cooperation. For the case of γ = 0, analytical results are denoted by blue circles for the cycle, purple squares for the square lattice, red triangles for the 6-regular graph, and yellow crosses for the complete graph (see Section 2.1 for further details). Parameter values: number of GTFT players N = 16, all GTFT players (p = 1; full lines: q = 1/3, dotted lines: q = 0.1; dashed lines: q = 2/3), benefit b = 3, and costs c = 1. Simulation results are averages over 10^5 realizations.

Supplementary Figure 7: For two different resident strategies, ALLD and GTFT, we have calculated how easily mutants can invade. To this end, we have considered a fine grid of mutant strategies (p, q) with p, q ∈ {0, 0.005, 0.010, . . . , 1}. For each of these mutant strategies, we have calculated its fixation probability into the respective resident strategy. The value of the fixation probability is represented by the color of the respective square at (p, q). In addition, we have simulated how many mutant invasions the resident strategies can resist if mutants are randomly drawn from that grid (reported in the upper left of each panel). We have also recorded the average trait values of successful mutants (indicated by the arrow). We find that ALLD is typically invaded by conditionally cooperative strategies, and that the invasion time increases with the crosstalk rate. For GTFT, we find that in the absence of crosstalk, it takes a considerable number of mutants until the first mutant reaches fixation. Moreover, the successful mutant is typically a cooperative strategy itself. As the crosstalk rate increases, however, the invasion time into GTFT decreases, and successful mutants no longer need to be cooperative. Parameters: population size N = 16, benefit b = 10, cost c = 1, selection strength s = 1. The strategies are subject to small amounts of noise, ALLD = (0.001, 0.001) and GTFT = (0.999, 0.333).

Supplementary Figure 8: a-b | As in Supplementary Fig. 7, we explored how many mutant invasions it takes until an extortionate resident population is successfully replaced. Without crosstalk, extortionate strategies are quickly replaced by more cooperative strategies. This result is in line with previous observations that extortion is unstable in typical models of direct reciprocity [1]. However, once there is substantial crosstalk, it takes more mutant strategies until an extortionate resident population is invaded, and successful mutants are similar to the extortionate strategy. c | To quantify the overall success of extortion in well-mixed populations, we have recorded how often the evolutionary process visits a δ-neighborhood of the set of all extortionate strategies (see also Refs. [1,2]).
For comparison, the dashed line indicates how often this neighborhood is visited in the case of neutral evolution (when the selection strength s is zero). As the crosstalk rate approaches γ = 1, the set of extortionate strategies is visited more than 10 times more often than expected under neutrality. Thus, when crosstalk is common, selection favors extortionate strategies. Parameters: population size N = 16, b = 10, c = 1, s = 1.
For the invasion analysis, we have used the extortionate resident strategy (0.4, 0), and for the δ-neighborhood, we have used δ = 0.02.

Supplementary Figure 9: Phase portraits of the adaptive dynamics in the space of reactive strategies; the corners correspond to ALLD (0,0), ALLC (1,1), TFT (1,0), and Anti-Tit-for-Tat (0,1). The black-dotted line is the set of singular points, as given by Eq. (S11); the grey area below that line is the cooperation-rewarding zone. As the crosstalk rate γ increases from 0 to 0.75, this cooperation-rewarding zone shrinks considerably; most initial population configurations lead to a state in which everyone defects. Parameter values: population size N = 16, benefit b = 10, cost c = 1.

Supplementary Figure 10: Higher crosstalk rates, densely connected populations, and previous-interaction crosstalk (full lines) accelerate the spread of defective behavior. The number of players is N = 16 (one of those is the ALLD player), and the crosstalk rate is γ = 0.5. Simulation results are averages over 10^4 realizations.

Supplementary Figure 12: We consider four different population structures, and as in Supplementary Fig. 11, we assume there is one ALLD player and N − 1 players with strategy AGTFT. Depending on the region in the (γ, τ) parameter space, there are three different qualitative outcomes. (i) FC/FC: In this case, the AGTFT players cooperate with everyone. As a consequence, the defector gets a higher payoff than all residents. (ii) FC/PC: The AGTFT players fully cooperate among each other, but they only partially cooperate with the ALLD player. In this case AGTFT is stable against ALLD if the conditional cooperation probability q is sufficiently low. (iii) PC/PC: Here, the AGTFT players no longer fully cooperate among themselves. The analytically derived boundaries of the three parameter regions match the numerically found results in Supplementary Fig. 11. Parameter values: number of players N = 16.

Supplementary Figure 13: Twenty-four conditional cooperators (blue framed nodes) and one ALLD (Always-Defect) player (red framed node, placed in the center) populate a 5x5 lattice. Players connected by an orange line have a 10-fold increased interaction probability. Defection spreads faster due to the increased interaction probability along the central, horizontal line of players. The fill color of the nodes depicts the expected payoff of the players after 100, 1,000 and 2,000 games. Parameter values: crosstalk rate γ = 0.5, benefit b = 3, and cost c = 1. For GTFT (defined by p = 1 and 0 < q < 1), we used q = 1/3.
Supplementary Note 1
In Section 1, we provide further analytical results for our model of crosstalk in the special case of well-mixed populations. Specifically, we describe a more efficient algorithm to calculate steady-state payoffs when only two different strategies are present. Using this algorithm, we can calculate explicitly which strategies (p, q) are able to resist invasion by any other mutant strategy (p′, q′).
Moreover, the algorithm allows us to explore the adaptive dynamics of the system for any crosstalk rate.
In Section 2, we present additional results that hold for any population structure. We compute the time that a cooperative population needs to recover from an isolated defection event. Moreover, we introduce a model of crosstalk that allows players to react to the aggregate cooperation received across all their co-players. Finally, we argue that the evolutionary results presented in the main text remain unchanged if we consider a birth-death process instead of an imitation process.
1 Analytical results for well-mixed populations
Efficient calculation of payoffs in the special case of two strategies
In the main text, we have derived a linear system, Eq. (S1), that captures the players' cooperation frequencies in the steady state. In this system, the unknowns y_ij represent the probability that player i cooperates against player j in the steady state, and (p_i, q_i) is the reactive strategy of player i. By solving this linear system, we can calculate payoffs in small and intermediate-sized populations. However, as populations become large, the computational effort increases quadratically in the population size N. We thus derive a more efficient algorithm for the complete graph, assuming that there are only two different strategies present in the population. Suppose that k individuals use strategy (p_1, q_1), whereas the remaining N − k individuals use strategy (p_2, q_2). Because the population structure is fully symmetric, we can assume that players with the same strategy receive the same payoff. Hence, we set y_ij = y_i′j′ in Eq. (S1) whenever the strategies of players i and i′ and the strategies of players j and j′ coincide. This implies a drastic simplification: instead of having to consider all N · (N − 1) possible combinations of players, we only need to consider all 4 possible combinations of strategies present in the population. That is, the linear system (S1) simplifies to a 4-dimensional linear system My = x, with r_i := p_i − q_i and x = (q_1, q_1, q_2, q_2)^T. The solution vector ŷ = (ŷ_11, ŷ_12, ŷ_21, ŷ_22)^T contains the respective steady-state frequencies ŷ_ij for a player with strategy i to be in state C with respect to a co-player with strategy j. Using this vector, we can again calculate the expected payoffs of the two strategies as in Eq. (S3). We note that the computation time for the payoffs is now independent of the population size.
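Because Eq. (S1) is not reproduced in this excerpt, the following Python sketch illustrates the reduced approach only in the crosstalk-free special case γ = 0, where the steady-state condition for two reactive players takes the well-known pairwise form y_ij = q_i + (p_i − q_i) · y_ji; all function names are ours:

import numpy as np

def steady_state_pair(p1, q1, p2, q2):
    """Steady-state cooperation probabilities (y12, y21) of two reactive
    players, assuming the crosstalk-free recursion y_ij = q_i + r_i * y_ji
    (a special case of the general system, with gamma = 0)."""
    r1, r2 = p1 - q1, p2 - q2
    A = np.array([[1.0, -r1],
                  [-r2, 1.0]])
    b = np.array([q1, q2])
    return np.linalg.solve(A, b)   # (y12, y21)

def payoffs_pair(p1, q1, p2, q2, benefit=3.0, cost=1.0):
    """Donation-game payoffs per round for the two players."""
    y12, y21 = steady_state_pair(p1, q1, p2, q2)
    pi1 = benefit * y21 - cost * y12
    pi2 = benefit * y12 - cost * y21
    return pi1, pi2

# Example: GTFT = (1, 1/3) against ALLD = (0, 0).
print(payoffs_pair(1.0, 1/3, 0.0, 0.0))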
The optimal level of generosity
Based on the above method to calculate payoffs in well-mixed populations with two strategies, we can also analytically derive the most generous strategy (1, q_M) and the most robust strategy (1, q_R), as defined in the main text. To this end, we consider a population in which N − 1 individuals adopt a cooperative strategy, whereas the remaining individual plays ALLD. If we set (p_1, q_1) := (0, 0), (p_2, q_2) := (1, q), and k = 1, we can use Eq. (S3) to calculate the payoff π_D of the single ALLD player and the payoff π_C of each cooperative player. To calculate the most generous strategy that can resist invasion by ALLD, we solve π_D = π_C, yielding the threshold q_M of Eq. (S6). In particular, for no crosstalk and large populations (γ = 0 and N → ∞), we recover the well-known probability q_M = 1 − c/b [3,4]. The maximum level of generosity q_M is monotonically decreasing in γ. Thus, the more crosstalk, the less generous cooperative players need to be to still prevent the invasion of ALLD. The value of q_M becomes zero when the crosstalk rate reaches a critical value γ*, defined by Eq. (S8). In particular, only if the crosstalk rate satisfies γ < γ* can we hope for full cooperation to evolve in the complete graph.
Similarly, we can also calculate the most robust cooperative strategy, defined as the strategy (1, q) that has the highest relative payoff advantage compared to a single ALLD mutant. By setting ∂(π_C − π_D)/∂q = 0, we obtain the value q_R. We note that q_R is zero for γ = 0 and for γ = γ*, with γ* as defined by Eq. (S8). In between, for 0 < γ < γ*, the value of q_R is positive. This means that as the crosstalk goes to zero, γ → 0, we get q_R → 0, and the strategy most robust against invasion by ALLD approaches TFT = (1, 0).
For positive γ, the most robust level of generosity is non-monotonic (as shown in Fig. 3d). For small crosstalk rates, the most robust response to an increase in γ is to slightly increase q. A small increase of q is often sufficient to prevent the spread of defection across the network without being too generous towards the single defector. But once the crosstalk rate has passed a certain threshold, robustness requires players to become less generous with increasing γ. In that case, a certain spread of defection can no longer be prevented, and players instead have to minimize the payoff of the defector by decreasing q.
A general invasion analysis
When a cooperative resident strategy (1, q) satisfies q < q_M, the above results only guarantee that residents can resist invasion by ALLD. However, in the following we show that if q < q_M holds, residents are in fact able to resist all possible mutants with a reactive strategy. To this end, suppose a single mutant employs the strategy (p_1, q_1) whereas the remaining residents use strategy (p_2, q_2).
Again, we can use Eq. (S3) to calculate the payoff π_1 of the mutant, as well as the payoff π_2 of each resident. This yields the payoff difference π_1 − π_2 given by Eq. (S10). Here, r_1 := p_1 − q_1 and r_2 := p_2 − q_2, whereas r_M is a threshold value defined by Eq. (S11). For the resident strategy (p_2, q_2) to be resistant against invasion, we require π_1 − π_2 ≤ 0 for all mutant strategies (p_1, q_1). We can distinguish three cases. Case 1: r_2 > r_M. In this case, Eq. (S10) implies for every p_2 < 1 that the mutant strategy ALLC with (p_1, q_1) = (1, 1) can invade. Hence, resident strategies can only resist invasion if p_2 = 1.
This analysis suggests there are three different sets of strategies that can resist invasion by single mutants. The first case corresponds to cooperative strategies (1, q) such that 1 − q > r_M (or, equivalently, q < q_M as defined by Eq. (S6)). The second case corresponds to defective strategies (p, 0) such that p < r_M. Finally, the last case corresponds to all strategies (p, q) that satisfy the linear relationship p − q = r_M.
Adaptive dynamics for well-mixed populations
Based on the above static results, we can also derive a simple deterministic model to describe how the players' strategies evolve over time, the so-called adaptive dynamics of the system [5].
We consider a well-mixed population of size N . The population is monomorphic and applies the resident strategy (p 2 , q 2 ). This population is then invaded by a single mutant with strategy (p 1 , q 1 ).
We define the mutant's invasion fitness as F := π_1 − π_2, as given by Eq. (S10). Adaptive dynamics posits that evolutionary trajectories point towards the mutant with the highest invasion fitness,

ṗ = ∂F/∂p_1 evaluated at p_1 = p_2 =: p, q_1 = q_2 =: q, and q̇ = ∂F/∂q_1 evaluated at p_1 = p_2 =: p, q_1 = q_2 =: q.    (S12)

Plugging Eq. (S10) into Eq. (S12) yields a two-dimensional dynamical system, Eq. (S13). Eq. (S13) implies that ṗ and q̇ always have the same sign, both being proportional to a common factor h(r), where r := p − q. Since h(r) = 0 if and only if r = r_M, with r_M as defined in Eq. (S11), we obtain the analogous three cases as in the previous section. For initial populations (p, q) with p − q > r_M, both ṗ and q̇ are positive, and populations evolve towards higher cooperation probabilities. The 2-dimensional area in the (p, q)-space for which p − q > r_M is thus called the cooperation-rewarding zone [6]. In contrast, for initial populations with p − q < r_M, both ṗ and q̇ are negative, and we speak of the defection-rewarding zone.
Provided that r_M < 1, the system is thus bistable (Supplementary Fig. 9 shows phase portraits for two different crosstalk rates). Orbits either converge to a fully cooperative population (1, q) with q < 1 − r_M, to a fully defective population (p, 0) with p < r_M, or to the line of interior singular points p − q = r_M. In the limiting case of no crosstalk (Supplementary Fig. 9a) this recovers previous results on the adaptive dynamics of reciprocity in finite populations [7]. However, as the crosstalk rate γ increases, it follows from Eq. (S11) that r_M increases. Geometrically, this means that the line of fixed points is shifted to the right and the cooperation-rewarding zone shrinks (Supplementary Fig. 9b). As γ exceeds the value of γ* as defined by Eq. (S8), this zone vanishes altogether. Higher rates of crosstalk thus impede the evolution of cooperation, as they diminish the set of initial populations that converge towards fully cooperative states.
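Since the explicit form of Eq. (S13) is not reproduced here, the following Python sketch shows only the generic numerical recipe: given any invasion-fitness function F(p1, q1; p2, q2), the trajectory of Eq. (S12) can be integrated with finite-difference gradients. The stub invasion_fitness below is a placeholder that should be replaced by the model's Eq. (S10); for illustration we plug in a crosstalk-free toy version:

def invasion_fitness(p1, q1, p2, q2):
    """Placeholder for Eq. (S10): payoff of a rare (p1, q1) mutant minus
    the payoff of (p2, q2) residents. Crosstalk-free toy model only
    (donation game, b = 3, c = 1); not the paper's exact expression."""
    b, c = 3.0, 1.0
    r1, r2 = p1 - q1, p2 - q2
    y12 = (q1 + r1 * q2) / (1 - r1 * r2)   # mutant cooperates vs resident
    y21 = (q2 + r2 * q1) / (1 - r1 * r2)   # resident cooperates vs mutant
    y22 = q2 / (1 - r2)                    # resident vs resident
    return (b * y21 - c * y12) - (b - c) * y22

def adaptive_dynamics(p, q, steps=10000, dt=0.01, eps=1e-6):
    """Integrate Eq. (S12) by explicit Euler with finite-difference
    selection gradients, clipping (p, q) to the unit square."""
    for _ in range(steps):
        dp = (invasion_fitness(p + eps, q, p, q)
              - invasion_fitness(p - eps, q, p, q)) / (2 * eps)
        dq = (invasion_fitness(p, q + eps, p, q)
              - invasion_fitness(p, q - eps, p, q)) / (2 * eps)
        p = min(1.0, max(0.0, p + dt * dp))
        q = min(1.0, max(0.0, q + dt * dq))
    return p, q

print(adaptive_dynamics(0.9, 0.2))   # starts in the cooperation-rewarding zone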
The above results consider evolution as a deterministic process: if the initial population is in the defection-rewarding zone, then the population will not employ a cooperative strategy (1, q) in subsequent generations. In the main text, we have thus contrasted this deterministic model of adaptive dynamics with a stochastic imitation process. According to the imitation process, an ALLD population can still be invaded by conditionally cooperative mutants, although the number of attempted invasions needed increases with the crosstalk rate. In contrast, GTFT is already invaded after 105 mutant strategies, and successful mutants are no longer similar to GTFT. These simulation results again highlight that high crosstalk rates undermine the evolutionary robustness of cooperation. For large values of γ, cooperative strategies become unstable, and defective strategies prevail.
The evolutionary relevance of extortionate strategies under crosstalk
When the population consists of only N = 2 individuals with respective strategies (p_1, q_1) and (p_2, q_2), their respective payoffs according to Eq. (S3) take the explicit form (S15). These two payoffs satisfy a linear relationship, Eq. (S16). In particular, by choosing a strategy of the form (p_2, 0), player 2 can enforce the relation (S17). Since c < b and 0 ≤ p_2 ≤ 1, it follows that π_2 ≥ π_1, irrespective of player 1's strategy (for p_2 < 1, equality only holds if both players get the mutual defection payoff 0). Moreover, by choosing p_2 > c/b, player 2 makes sure that the two payoffs π_1 and π_2 are positively related. In that case, if the co-player 1 aims to maximize her own payoff, she automatically maximizes player 2's payoff as well. Strategies of the set

E ≜ {(p, q) ∈ [0, 1]^2 : c/b < p < 1, q = 0}    (S18)

have thus been termed extortionate [8]. With an extortionate strategy, players can ensure that they almost always outperform the opponent. At the same time, it is in the opponent's best interest to be unconditionally cooperative.
Here, we aim to explore when such extortionate strategies are stable in populations of size N in the presence of crosstalk. Case 2 in Section 1.3 implies that extortionate strategies (p, 0) need to satisfy

p < r_M    (S19)

to resist invasion by all mutant strategies. We note that this condition is automatically satisfied if N = 2, recovering previous results that extortion can succeed in small populations [1,2]. But while generic models of direct reciprocity (with γ = 0) predict that extortionate strategies become unstable in large populations, condition (S19) suggests that crosstalk can stabilize extortion. In particular, once γ > γ* (as defined in Eq. (S8)), every extortionate strategy (p, 0) is resistant against mutant invasions. At the same time, however, it should be noted that by becoming stable, extortionate strategies lose their most appealing property when crosstalk rates are high. For high values of γ, the best response in a population of extortioners is no longer to give in and to cooperate unconditionally. The best response is to be extortionate as well.
In Supplementary Fig. 8, we show simulations using the stochastic imitation process considered in the main text. We consider a resident population that employs a given extortionate strategy (p, 0) ∈ E. For this resident population, we calculate the fixation probability of all possible reactive mutant strategies, as well as the average time it takes until a random mutant replaces the resident. These simulations support the above analytical findings. For moderate population sizes and no crosstalk, the extortionate strategy is quickly invaded by more cooperative strategies (Supplementary Fig. 8a). As the crosstalk rate γ increases, the extortionate strategy becomes more robust against mutant invasions, and successful mutants typically show the characteristics of extortionate strategies themselves (Supplementary Fig. 8b). We have also explored the evolutionary relevance of extortionate strategies by measuring how often the evolving population visits a δ-neighborhood of E. The respective fraction of time increases substantially as the crosstalk rate γ increases (Supplementary Fig. 8c). We conclude that under crosstalk, extortionate strategies are able to persist even in larger populations.
Stochastic evolutionary dynamics for a birth-death process
In the main text, we have considered a cultural evolution setup to describe how strategies in a population change over time. We have assumed that strategies that perform well are more likely to be imitated by other players. Similarly, we can also study the dynamics when strategies spread by inheritance, and not by imitation. To this end, let us consider a Moran process. As in the main text, we consider a population of individuals that engage in repeated games subject to crosstalk.
Each individual i acts according to a fixed strategy (p_i, q_i) that is now genetically determined. The payoff π_i of individual i again is determined by Eq. (3) of the main text. This payoff translates into an individual fitness f_i = exp(s π_i), with s ≥ 0 being again the strength of selection (the exponential fitness mapping ensures that the players' fitness is always positive). We assume that in each evolutionary time step one individual is chosen for reproduction (proportional to its fitness), and that its offspring replaces a randomly chosen individual. The offspring inherits the parent's strategy with probability 1 − µ, and it adopts a new reactive strategy with probability µ, where µ is the mutation rate. For this Moran process, the fixation probability of a single mutant in a well-mixed population of N − 1 residents is given by [9]

  ρ = ( 1 + ∑_{i=1}^{N−1} ∏_{j=1}^{i} exp(−s (π_M(j) − π_R(j))) )^{−1} ,

where π_M(j) and π_R(j) are the mutant's and the resident's payoff in a population with j mutants, respectively. This fixation probability coincides with the respective fixation probability for the pairwise imitation process [10]. In particular, all corresponding evolutionary results for the imitation process (Fig. 4c,d) considered in the main text equally apply to the Moran process discussed here.
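As a minimal illustration of this fixation probability, the following Python sketch evaluates ρ for user-supplied payoff functions; the payoff functions used in the sanity check are placeholders, not the crosstalk payoffs of Eq. (3) of the main text.

```python
import math

def fixation_probability(pi_M, pi_R, N, s):
    """Fixation probability of a single mutant under the Moran process
    with exponential fitness f = exp(s * payoff).

    pi_M(j), pi_R(j): mutant and resident payoffs when j mutants are present.
    """
    total = 1.0
    prod = 1.0
    for i in range(1, N):
        prod *= math.exp(-s * (pi_M(i) - pi_R(i)))
        total += prod
    return 1.0 / total

# Neutral sanity check: identical payoffs must give rho = 1/N
N = 50
rho = fixation_probability(lambda j: 1.0, lambda j: 1.0, N, s=1.0)
assert abs(rho - 1.0 / N) < 1e-12
```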
2 Further results for arbitrary population structures
Expected recovery time after errors
So far we have been concerned with the effects of crosstalk on the stationary cooperation rates and payoffs in a population. In particular, we have seen that crosstalk can lead to a spread of defection across a network. Herein we are interested in the respective timescale. How long does it take until an isolated defection event (e.g. due to an error) is "forgotten" in a generally cooperative population?
To this end, we consider a population of size N on a regular network with degree k. All players apply the strategy (1, q), and all players are in state C initially. Suppose that due to an error in the very first game, one of the players defects and that in all subsequent rounds no more errors occur and players act according to their strategies. We are interested in the recovery time, in other words, the time it takes until all players are in state C again.
In the limiting case of no crosstalk, this recovery time can be calculated analytically. Since γ = 0, an error only affects the edge between the pair of players that has interacted in the very first game. Moreover, within this pair there is always at most one player who is in the D state (because in each round, at least one of the players cooperates and players have p = 1). As the probability that a specific edge of the regular graph is chosen is 2/(N·k), the probability that the population recovers after exactly t rounds is

  P(T = t) = (1 − 2q/(N k))^{t−1} · 2q/(N k):

no recovery in the first t − 1 rounds, but in the t-th round the respective edge is chosen and the co-player of the defector forgives.
Therefore, the expected recovery time T_γ for γ = 0 is

  T_0 = N k/(2q).

In particular, q = 1 implies T_0 = N k/2, the expected waiting time until the affected edge is chosen again. The expected recovery time T_0 has the following two properties: 1. For any given q, the recovery time is monotonically increasing in k (i.e., recovery always takes longer in the complete graph than in the circle).
2. For any given k, the recovery time is monotonically decreasing in q (i.e., recovery always occurs faster when players are more forgiving).
These two properties in fact hold for any crosstalk rate, as further simulations show (Supplementary Fig. 6). Moreover, these simulations also show that the recovery time is a decreasing function of γ.
Intuitively, under crosstalk each player's automaton is more likely to be updated during a single interaction. Given that the residents have p = 1 and q > 0, these updating events on average increase the cooperation level in a population: a player's D state is more likely to be overridden by a C than the other way around.
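The following Monte Carlo sketch estimates the γ = 0 recovery time by drawing the waiting time until the affected edge is selected and the defection is forgiven, and compares the sample mean with the closed-form expectation Nk/(2q) derived above; all parameter values are illustrative.

```python
import random

def sample_recovery_time(N, k, q, rng):
    """Recovery time for gamma = 0: in each elementary step a random edge
    of a k-regular graph on N players interacts; when the affected edge
    is chosen, the co-player of the defector forgives with probability q."""
    p_edge = 2.0 / (N * k)
    t = 0
    while True:
        t += 1
        if rng.random() < p_edge and rng.random() < q:
            return t

rng = random.Random(1)
N, k, q = 20, 4, 0.3
samples = [sample_recovery_time(N, k, q, rng) for _ in range(20000)]
print(sum(samples) / len(samples), "vs analytical", N * k / (2 * q))
```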
Crosstalk based on aggregate experience
In the models of crosstalk explored so far, a player who needs to decide whether or not to cooperate only considers a single experience (either with the present co-player, or with the previous co-player). Instead, one may also consider a model where decisions are based on a player's aggregate experience in previous games. In the following, we sketch a simple model for that case.
Again, we consider a population of size N, and each player holds a two-state automaton for each of her co-players. This automaton is in state C if the respective co-player has cooperated in the previous round, and it is in state D otherwise. To encode the present state of a player's automaton at time t, we use the variable x^t_ij. The value of this variable is x^t_ij = 1 if, in the last interaction between i and j prior to round t, player j cooperated. Otherwise, we set x^t_ij = 0. Suppose now that in round t, players i and j interact. We assume that prior to her decision which action to choose, player i considers a weighted average score across all her co-players' previous decisions,

  x̄^t_ij(γ) = (1 − γ) x^t_ij + (γ/k) ∑_l x^t_il,   (S22)

where the sum runs over all of player i's co-players l and k denotes their number (k = N − 1 in a well-mixed population). As before, we interpret γ as the model's crosstalk rate. In the limiting case γ = 0, there is no crosstalk and only the direct co-player's previous action is taken into account. In the other limiting case γ = 1, there is full crosstalk and player i simply considers the average cooperation rate across all her co-players. Given the average score x̄^t_ij(γ), we assume that player i with strategy (p_i, q_i) cooperates with probability p_i if x̄^t_ij(γ) ≥ τ, and otherwise cooperates with probability q_i. The parameter τ denotes an exogenous cooperation threshold. In the special case γ = 0, the above model is equivalent to the standard model of reactive strategies of direct reciprocity [3,6] for any 0 < τ < 1. However, for positive values of γ, players do not only respond to their direct co-player, but they are also affected by outside experiences with previous co-players.
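A small Python sketch of this decision rule is given below; the functional form of the aggregate score follows the reconstruction of Eq. (S22) above (a convex combination of the direct experience and the mean experience across all co-players), and all names and parameter values are illustrative.

```python
import numpy as np

def aggregate_score(x_i, j, gamma):
    """Weighted score (S22): blends the direct co-player's last action
    x_i[j] with the average last action of all of i's co-players.

    x_i: array with x_i[l] = 1 if co-player l cooperated in the last
    interaction with i, and 0 otherwise (player i itself is excluded).
    """
    return (1.0 - gamma) * x_i[j] + gamma * x_i.mean()

def agtft_cooperation_prob(x_i, j, gamma, p=1.0, q=1/3, tau=0.5):
    """AGTFT: cooperate with probability p if the aggregate score reaches
    the threshold tau, otherwise with probability q."""
    return p if aggregate_score(x_i, j, gamma) >= tau else q

# One AGTFT player facing k = 4 co-players, one of whom just defected
x = np.array([1.0, 1.0, 1.0, 0.0])
print(agtft_cooperation_prob(x, j=3, gamma=0.2))  # against the defector
print(agtft_cooperation_prob(x, j=0, gamma=0.2))  # against a cooperator
```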
First, we explore the above model using computer simulations. To this end, we consider players using the strategy (1, 1/3, τ), which we refer to as Aggregate Generous Tit-for-Tat (AGTFT).
For four different population structures (cycle, lattice, 6-regular graph and complete graph), we study the cooperation dynamics that arise in a population in which one player applies ALLD and all other players apply AGTFT. In Supplementary Fig. 11, we show the resulting payoffs for three different values of the cooperation threshold τ ∈ {0.2, 0.5, 0.8} and for different crosstalk rates 0 ≤ γ ≤ 1. As expected, in all population structures the AGTFT players gain a higher payoff than the ALLD player in the absence of crosstalk. However, as the crosstalk rate increases, the ranking of strategies can change once γ exceeds a certain threshold. Two qualitative changes can occur. When τ is relatively low compared to γ, AGTFT players start to fully cooperate with the ALLD player (in Supplementary Fig. 11, the red curve jumps from π_D = 1 to π_D = 3). On the other hand, when τ is high compared to γ, already a single defector in the population can prevent AGTFT players from fully cooperating with other AGTFT players (in Supplementary Fig. 11a,b, this happens when the blue curve for τ = 0.8 jumps from π_A ≈ 2 to π_A < 1).
To understand these discontinuous transitions in the players' payoffs, we calculated when AGTFT players fully cooperate among themselves, and when they fully cooperate with the ALLD player. This yields three different cases:

1. AGTFT players fully cooperate with everyone. This case applies if the average cooperation rate x̄^t_ij is always above the threshold τ, even if the respective co-player is a defector. By Eq. (S22), this yields the condition

  τ ≤ γ (k − 1)/k.   (S23)

2. AGTFT players are fully cooperative among themselves, but they only cooperate against the defector with probability q. This case applies if x̄^t_ij ≥ τ in case the co-player uses AGTFT, whereas x̄^t_ij < τ if the co-player used ALLD. By Eq. (S22), this yields the following condition,

  γ (k − 1)/k < τ ≤ 1 − γ/k.   (S24)

3. AGTFT players are no longer fully cooperative among themselves. This case applies if x̄^t_ij < τ even if the co-player adopts AGTFT and even if all AGTFT players have cooperated in the previous round. This yields

  τ > 1 − γ/k.   (S25)

In Supplementary Fig. 12, we show the parameter regions (γ, τ) that satisfy the three inequalities (S23), (S24), and (S25). For all population structures, we find that if τ is too small, the AGTFT population cooperates with everyone. On the other hand, if τ is too large, AGTFT players do not even fully cooperate among themselves. Only for intermediate τ do the AGTFT players succeed in keeping the defector's payoff low, while still maintaining full cooperation among themselves.
Surprisingly, we find that this region in the (γ, τ)-space always has an area of 1/2: independent of the population structure, half of the parameter combinations allow AGTFT to be stable against defectors. However, as in our original model, we find that, all other parameters kept constant, it is easier for AGTFT to succeed against ALLD if γ is small (Supplementary Fig. 12).
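The area claim can be checked numerically under the reconstructed conditions (S23)-(S25); the short sketch below integrates the width of the intermediate band over γ ∈ [0, 1] for several degrees k (the chosen values of k are arbitrary).

```python
import numpy as np

# Under conditions (S23)-(S25), AGTFT resists ALLD while remaining fully
# cooperative among themselves whenever  gamma*(k-1)/k < tau <= 1 - gamma/k.
# The width of this band is (1 - gamma/k) - gamma*(k-1)/k = 1 - gamma,
# so its area in the unit (gamma, tau)-square is 1/2 for every degree k.
for k in [2, 4, 6, 19]:
    gammas = np.linspace(0.0, 1.0, 200001)
    width = (1.0 - gammas / k) - gammas * (k - 1) / k
    area = np.clip(width, 0.0, None).mean()  # Riemann approximation on [0, 1]
    print(k, round(float(area), 4))  # ~0.5, independent of k
```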
The Routines, Knowledge and Attitudes towards Nutrition and Documentation of Nursing Staff in Primary Healthcare: A Cross-Sectional Study
Primary health care faces challenges concerning high malnutrition rates. Attention to documentation is important for ensuring that health care professionals in primary health care deliver appropriate and timely nutritional care and treatment, hence maintaining continuity of care and enhancing patient outcomes. Healthcare professionals' competencies have been shown to be of great importance in delivering high-quality documentation and nutritional care. The aim of this study was to investigate the primary healthcare workforce's routines, knowledge and attitudes towards nutrition and documentation. Using a descriptive cross-sectional design, a validated questionnaire on the attitudes, routines and knowledge about nutrition and documentation of registered nurses, social and health service assistants and social and health service helpers was delivered to eligible participants. The questionnaire was distributed to 1,391 eligible participants in a municipality in Denmark. The overall response rate was 32%, leaving a total of 449 respondents. The study shows that the level of nutritional knowledge, nutritional routines and documentation practices was poor in all three healthcare professional groups. The respondents showed large variations in knowledge and routines, hence complicating the accurate transfer of relevant nutrition-related data in the patients' healthcare records and thereby compromising continuity of care. Overall, the three groups of healthcare professionals indicated a somewhat positive attitude towards documentation and nutrition and regarded nutrition and documentation as part of their area of responsibility, although there were discrepancies in the self-perceived degree of responsibility among the groups of healthcare professionals. The regression analysis conducted in this study showed that a high degree of nutritional knowledge and positive attitudes did not determine nutritional routines. This suggests that a focus on increasing healthcare professionals' nutritional knowledge may be redundant if the organizations and management do not continuously articulate and prioritize nutritional care and documentation.
Primary health care faces challenges concerning high malnutrition rates. Attention to documentation is important for ensuring that health care professionals in primary health care deliver appropriate and timely nutritional care and treatment, hence maintaining continuity of care and enhancing patient outcomes. Healthcare professionals’ competencies have been shown to be of great importance in delivering high quality documentation and nutritional care. This aim of this study was to investigate the routines, knowledge and attitudes towards nutrition and documentation in primary health care of the primary healthcare workforce. Using a descriptive cross-sectional design, a validated questionnaire on registered nurses, social and health service assistants, social and health service helpers’ attitudes, routines and knowledge about nutrition and documentation was delivered to eligible participants. The questionnaire was distributed to 1,391 eligible participants in a municipality in Denmark. The overall response rate was 32%, leaving a total number of 449 respondents. The study shows that the level of nutritional knowledge and nutritional routines and documentation practices was poor in all three healthcare professional groups. The respondents showed large variations in knowledge and routines, hence complicating the accurate transfer of relevant nutritional related data in the patients’ healthcare record and thereby compromising continuity of care. Overall, the three groups of healthcare professionals indicated a somewhat positive attitude towards documentation and nutrition and regarded nutrition and documentation as a part of their area of responsibility, although there were discrepancies in the self-perceived degree of responsibilities among the groups of healthcare professionals. The regression analysis conducted in this study showed that a high degree of nutritional knowledge and attitudes did not determine nutritional routines. This information suggests that a focus on increasing healthcare professional’s nutritional knowledge may be redundant if the organizations and management do not continuously articulate and prioritize nutritional care and documentation. Citation: Håkonsen SJ, Bjerrum M, Bygholm A, Kjelgaard HH, Pedersen PU (2018) The Routines, Knowledge and Attitudes towards Nutrition and Documentation of Nursing Staff in Primary Healthcare: A Cross-Sectional Study. J Comm Pub Health Nursing 4: 220. doi:10.4172/24719846.1000220
such as lack of prioritization and lack of time and resources allocated for documentation, cultural aspects, and lack of available and intuitive systems (both IT and manual) that support the documentation process [13-17]. The internal factors include healthcare personnel's knowledge, their practices and routines, and their perceptions and attitudes towards documentation and nutrition [11,16,18-24]. Several studies have investigated nurses' and doctors' nutritional routines, knowledge and attitudes, and found that the level of nutritional knowledge was inadequate, leading to poor clinical decisions regarding nutritional interventions [18-20,25]. However, no studies have described the level of knowledge, the routines and the attitudes towards documentation and nutrition among the three primary caregiver groups in primary healthcare in Denmark: registered nurses, social and health service assistants and social and health service helpers. These three groups of professionals collaborate closely with each other and with the patient, whether in a nursing home or in the patient's own home. Furthermore, no studies have investigated whether there are differences in healthcare personnel's routines, knowledge and attitudes when comparing their place of employment: nursing home versus home care/home nursing.
Aim

The aim of this study was to investigate the routines, knowledge and attitudes towards nutrition and documentation in primary health care of the primary healthcare workforce.

Research questions

1) What routines, knowledge and attitudes do registered nurses, social and health service assistants and social and health service helpers have in relation to nutrition and documentation in primary health care?

2) Are there differences in routines, knowledge and attitudes towards nutrition and documentation between these groups of personnel in nursing homes and home care/home nursing?
Methods and Materials
Design

Using a descriptive cross-sectional design, a web-based questionnaire regarding registered nurses', social and health service assistants' and social and health service helpers' attitudes, routines and knowledge about nutrition and documentation was delivered to eligible participants. See Table 1 for an overview of the professional characteristics of the participants.
Setting and sample
A municipality in Denmark participated in the study, representing a primary care setting. Home care, home nursing and nursing homes were identified, and a local project coordinator contacted the heads of departments via email to invite them to participate in the study and to provide local distribution and promotion of the questionnaires to eligible participants. The municipality was divided into four rural and urban districts (Districts 1-4). Each district has a local leader, but they all refer to an overall center manager. The data were collected within these four districts from April 2017 to June 2017.
Questionnaire
As there were no valid and reliable questionnaires available, the authors developed a questionnaire specifically for this study, based on current research and expert opinion.
The questionnaire consisted of 40 questions divided into four subscales: 1) demographic data, consisting of 9 questions; 2) routines in relation to nutrition and documentation, consisting of 10 questions; 3) knowledge in relation to nutrition and documentation, consisting of 11 questions; and 4) attitudes in relation to nutrition and documentation, consisting of 10 questions. It mainly used closed questions, with only a few open-ended questions offering the possibility of further elaboration. The majority of the questions had a numeric answer scale from 0 to 10 (0 typically being never or very difficult and 10 typically being always or not difficult), while the remaining questions could be answered dichotomously (yes/no).
To test face and content validity of the questionnaire, four registered nurses and non-registered nurses, three leaders within primary health care and three experts within the nutritional area and documentation were asked to judge whether the questions appeared to be reasonable and if they covered relevant and important data with clarity [26]. This was done using a 4-point scale ranging from "not relevant" (1) to "highly relevant" (4). If questions were scored 3 or less, the item was revised. The total score was 3.7 and resulted only in minor linguistic and layout changes.

Table 1: Professional characteristics of the participating groups.

| Profession | Length of education | Theoretical part of the education (length and content) | Practical part of the education (length and content) | Typical work assignments |
|---|---|---|---|---|
| Registered nurses | 3 years and 6 months | 60% of the education (120 ECTS credits). The theoretical training includes nursing science, medical science, natural science, humanities and social science. | 40% of the education (90 ECTS credits). Practical training takes place in a variety of settings in order to learn to observe, diagnose, assess, manage, evaluate, document and adjust nursing care for citizens and patients in stable, acute and complex care and treatment pathways. | Independent, professional, well-founded and reflective nursing practice in interaction with patients, citizens and relatives, as well as other professionals throughout the healthcare system, with special focus on patient-experienced continuity and quality. |
| Social and health service assistants | 1 year and 8 months | weeks. The theoretical teaching includes health and nursing studies, medical subjects, social science subjects, pedagogy with psychology, cultural and physical activity subjects. | weeks. Practical training takes place in somatic and psychiatric hospitals as well as in community care facilities and nursing homes. | Care, basic nursing and implementation of physical activity for elderly people as well as ill and disabled people, at hospitals, mental institutions and in the homes of citizens and nursing homes. |
| Social and health service helpers | 1 year and 2 months | 17 weeks. The theoretical teaching includes health studies, social science subjects, pedagogy with psychology, physical activity and practical subjects. | weeks. Practical training takes place in the homes of citizens as well as in nursing homes. | Assisting mostly elderly people in practical and personal tasks and basic hygiene, as well as implementing physical activity. |
To test internal consistency, Cronbach's alpha coefficients were calculated, resulting in coefficients of 0.85 (routines), 0.56 (knowledge) and 0.69 (attitudes). The summarized Cronbach's alpha coefficient for the three subscales is 0.86.
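For reference, a minimal Python sketch of the Cronbach's alpha computation used to assess internal consistency is given below; the score matrix is invented for illustration and does not reproduce the study data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Illustrative 0-10 scale answers from five respondents on four items
scores = np.array([
    [8, 7, 9, 8],
    [3, 4, 2, 3],
    [6, 5, 6, 7],
    [9, 9, 8, 9],
    [2, 3, 3, 2],
])
print(round(float(cronbach_alpha(scores)), 2))
```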
Procedure
A web-based questionnaire (developed in an online survey system, www.onlineunderoegelse.dk) was linked in an e-mail and sent to all relevant participants with information about complete anonymity. After two weeks, a reminder was sent by the heads of departments to those who had not answered the questionnaire. This procedure was repeated three times, once every two weeks. The connection between questionnaires and e-mail addresses was deleted after data collection was complete, ensuring complete anonymity.
Data Analysis
For statistical analyses, the Statistical Package for the Social Sciences (SPSS), version 22.0 (SPSS Inc., Chicago, IL, USA), was used. Dichotomous results are presented as percentages. The remaining results are given as means ± 1 SD. Parametric data were tested for distribution by the F-test. If data were normally distributed, Student's paired and unpaired two-tailed t-tests were used. To test for significance between more than two groups of data, one-way ANOVA was used. P-values below 0.05 were considered significant. Linear regression analyses were conducted to determine whether knowledge and attitude scores predicted routine scores.
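A hedged sketch of this analysis pipeline in Python (using SciPy rather than SPSS) is given below; all data are synthetic placeholders, so the printed statistics bear no relation to the study's results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Illustrative 0-10 scores (not the study data): knowledge, attitude, routine
knowledge = rng.normal(6, 2, 300)
attitude = rng.normal(7, 1.5, 300)
routine = rng.normal(5, 2, 300)

# Two-group comparison (e.g., nursing homes vs home care/home nursing)
group = rng.integers(0, 2, 300).astype(bool)
t, p = stats.ttest_ind(routine[group], routine[~group])

# One-way ANOVA across more than two groups (e.g., the four districts)
district = rng.integers(0, 4, 300)
f, p_anova = stats.f_oneway(*(routine[district == d] for d in range(4)))

# Simple linear regression: does knowledge predict routines?
res = stats.linregress(knowledge, routine)
print(f"t-test p={p:.3f}, ANOVA p={p_anova:.3f}, "
      f"R^2={res.rvalue**2:.3f}, slope p={res.pvalue:.3f}")
```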
Ethical Considerations
The registered nurses, social and health service assistants and social and health service helpers' participation in the study was voluntary. They responded anonymously and all data were treated with confidentiality. In the information letters to the heads of departments and to the registered nurses, social and health service assistants and social and health service helpers, we emphasized that the aim of the study was not to audit individual staff members, but to describe the routines, knowledge and attitudes towards nutrition and documentation of the healthcare staff surveyed.
Results
The questionnaire was distributed to 1,391 eligible registered nurses, social and health service assistants and social and health service helpers in a municipality in Denmark. The overall response rate was 32%, leaving a total of 449 respondents. A total of 54% of eligible registered nurses, 47% of eligible social and health service assistants and 26% of eligible social and health service helpers responded to the questionnaire. Employees from all four districts were represented among the respondents. District 3 was strongly represented, accounting for 57% of the respondents; it is, however, also by far the largest district in terms of the number of employees. The response rate in nursing homes was equivalent to the response rate in home care/home nursing, 52% and 48% respectively. Respondents' years of experience in their respective professions ranged from less than one year to 48 years. Thirty-four (62%) nurses had a bachelor's degree or equivalent, and 21 (4.8%) respondents had completed a diploma.
Routines in relation to nutrition and documentation
No significant differences in routines were found between registered nurses with a bachelor's degree and registered nurses without one.
The four districts in the municipality differed significantly on five questions concerning their routines; their mean scores were statistically significantly different for questions 2, 3, 5, 7 and 8. The routines that differed in the four districts were Q2) weighing newly referred patients at the first visit (p=0.045), Q3) planning regular nutritional assessments (p=0.017), Q5) reporting about nutritional issues if there is a problem (p=0.030), Q7) contacting the General Practitioner on having identified or suspected a nutritional problem (p=0.017) and Q8) reporting nutritional intake in patients who are identified as being at nutritional risk (p<0.001).
Routines regarding nutrition and documentation were significantly different in seven out of ten questions when comparing educational level. Where results were statistically significant, social and health service assistants had the highest score (closer to always maintaining a routine) and social and health service helpers had the lowest score (closer to never maintaining a routine) (Table 3).

Routines concerning nutrition and documentation were significantly different in five out of ten questions when comparing the setting (home care/home nursing versus nursing homes). Where results were statistically significant, nursing homes entered the highest score (closer to always maintaining a routine) and home care/home nursing entered the lowest score (closer to never maintaining a routine) (Table 3).
Knowledge in relation to nutrition and documentation
No significant differences in knowledge of nutrition and documentation were found between registered nurses with a bachelor's degree and registered nurses without one.
The four districts in the municipality did not differ significantly with regard to their knowledge of nutrition and documentation.
Knowledge of nutrition and documentation was significantly different in nine out of eleven questions when comparing educational level (Table 4). Social and health service helpers showed a lower level of knowledge in nine questions when compared to registered nurses and social and health service assistants. No differences between registered nurses and social and health service assistants were found.
Knowledge about nutrition and documentation was significantly different in seven out of eleven questions when comparing the setting (home care/home nursing versus nursing homes). Where results were statistically significant, nursing homes showed the highest level of knowledge and home care/home nursing the lowest level (Table 4).
Attitudes in relation to nutrition and documentation
No significant differences were found between registered nurses with a bachelor's degree and those without one concerning their attitudes towards nutrition and documentation, except for question 2 ("Should there be a care plan for routine evaluation of patients' nutritional status?"; 10=always, 0=never), where the two groups differed: nurses without a bachelor's degree had a mean score of 8.52 (SD 2.46) and nurses with a bachelor's degree had a mean score of 6.91 (SD 3.78).
The four districts in the municipality did not differ significantly with regard to respondents' attitudes towards nutrition and documentation.
Attitudes towards nutrition and documentation were significantly different in eight out of ten questions when comparing educational level (Table 5).
Attitudes towards nutrition and documentation were significantly different in five out of ten questions when comparing the setting (home care/home nursing versus nursing homes) (Table 5).
Linear regression analysis of attitude and knowledge scores against routine scores
Linear regression analysis was used to test whether knowledge and attitudes significantly predicted participants' routines. The results of the regression analysis indicated that the knowledge score was not a significant predictor of the routine score (F(2,310)=1.151, p=0.853, R²=0.007) and that the attitude score was not a significant predictor of the routine score (F(1,315)=0.947, p=0.823, R²=0.003). Furthermore, the knowledge score was also not a significant predictor of the attitude score (F(2,305)=0.907, p=0.745, R²=0.006). Therefore, neither knowledge nor attitudes are significant predictors of routines (Table 6).
Discussion
A total of 449 registered nurses, social and health service assistants and social and health service helpers participated in this cross-sectional study in a municipality in Denmark. This is the first cross-sectional study to examine their knowledge, routines and attitudes towards nutrition and documentation. A response rate of 32% is low, but may be considered acceptable for a web-based survey, whose response rates are typically 10% lower than those of mail or telephone surveys [27]. The following measures were enacted to facilitate responses to the present survey: the questionnaire was validated among a small group of nutritional and documentation experts and future respondents and thereby pilot tested in order to refine it; the questionnaire was linked directly in the e-mail received and opened directly; and the accessibility of the questionnaire was high, as all eligible participants also frequently received reminders. Since the entire workforce in the municipality has a work e-mail and uses electronic documentation systems, the distribution of a web-based questionnaire was not considered an obstacle.
Registered nurses and social and health service assistants were similar in their responses concerning their attitudes towards nutrition. They considered nutrition to be part of their daily work assignments and tasks. This finding is in accordance with other studies suggesting that nursing staff overall have a positive attitude toward nutritional care and feel that it is part of their responsibility [19,20,28-31]. Bachrach-Lindström et al. [31], however, found in 2007 that nursing staff working with older people do not show a definitively positive attitude concerning their nutritional care responsibilities. Concerning documentation, the two groups also had similar responses, although social and health service assistants stated that documentation of nutrition is more time- and resource-consuming than perceived by registered nurses. Social and health service helpers differed from the two other groups in eight out of ten questions. Especially in relation to areas of responsibility, they stated that they feel less obliged to perform nutrition-related activities than the two other groups. Registered nurses and social and health service assistants, however, stated that all three groups have equal responsibility when it comes to nutritional care and documentation. The discrepancy in their responses could therefore indicate different perceptions of which professional groups have which responsibilities regarding nutritional care, and the results are therefore consistent with a study in which nurses expressed the need for a formal clarification of nutritional care responsibilities among the healthcare professionals involved in patient care [32].
Overall, between 10% and 38% of the participating healthcare professionals indicated that they do not know where to document nutritional problems or develop nutritional care plans in the patients' healthcare record. The daily routines regarding nutrition and documentation, as perceived by the healthcare professionals, varied widely. This suggests that there is large variation in nutritional routines among the three groups of healthcare professionals. The continuity of nutritional care and treatment is therefore compromised, and patients are likely to be exposed to a number of nutritional routines and practices that are unnecessary or even harmful. A large cross-sectional study of nurses' and doctors' nutritional routines conducted in Scandinavia supports this finding, as it found that nutritional practice was poor in all countries across both disciplines [19]. Again, registered nurses and social and health service assistants were more similar in their responses, although social and health service assistants showed a higher degree of performing specific nutritional routines and therefore a more coherent and consistent nutrition and documentation routine and practice. Overall, the nutritional routines in the present study are characterized by inconsistency and large variation. With regard to knowledge, there was no difference in the scores between registered nurses and social and health service assistants. Social and health service helpers, however, differed from the two other groups in 9 out of 11 questions. Overall, the three groups showed a poor level of knowledge with large variations concerning nutrition and documentation, which is also what was found in other studies investigating the nutritional knowledge of nurses in nursing homes and hospitals [1,20,30,33]. Between 42% and 88% of the participants were not familiar with the locally recommended nutritional screening tools. Between 5% and 21% of the participants could not calculate BMI (Body Mass Index), and the interpretation of BMI was challenging for all three groups of healthcare professionals. All three groups stated that their education only to some degree provided a basis for making decisions and taking action on nutrition-related issues, which is supported by another study that found that nurses reported lacking sufficient nutritional knowledge and skills to identify and treat undernourished older patients [32].
The setting in which nutritional care is delivered and documented was also an indicator of statistically significant differences in the healthcare professionals' responses. Those working in nursing homes indicated the highest level of knowledge, routines and attitudes when compared to healthcare professionals employed in home care/home nursing. Healthcare professionals from both settings, however, displayed poor levels of knowledge and routines regarding nutrition and documentation, in concordance with the above results. Approximately 13% of the participants working in home care/home nursing were familiar with and used the locally recommended nutritional screening tools, whereas approximately 40% of the participants working in nursing homes were familiar with and used them. Up to 18.4% of healthcare professionals working in home care/home nursing could not calculate BMI (Body Mass Index), whereas only up to 7.7% of employees in nursing homes could not. Participants working in both home care/home nursing and nursing homes reported challenges with the interpretation of BMI, although participants working in home care/home nursing reported a statistically significantly higher degree of difficulty with the interpretation of BMI. Hasson et al. reported similar results in a study from 2008, as a larger percentage of healthcare professionals in home care/home nursing rated their knowledge as insufficient in a number of areas, including the nutritional area, when compared to healthcare professionals in nursing homes [34].
The majority of social and health service helpers and assistants are employed in nursing homes, whereas the majority of registered nurses are employed in home care/home nursing. It would seem reasonable to assume that differences between the two settings reflect the different representation of educational levels. However, since registered nurses have a higher educational level than social and health service helpers and assistants, one would then expect the level of knowledge, routines and attitudes to be higher in home care/home nursing, and not in nursing homes, as was found in this study. Several studies report that education and training are important to the quality of care and underpin the importance of the presence of healthcare professionals with a high level of education regardless of the setting [35,36]. However, based on the results from this descriptive study, it can be suggested that organizations should also focus on qualifying, training and educating healthcare professionals regardless of their educational level, meaning that organizational, cultural and management support are potentially just as important as educational level with regard to delivering high-quality care.
In the linear regression analysis conducted in this study, it was hypothesized that high scores on attitudes and knowledge would predict high scores on routines. However, the analysis showed that a high degree of nutritional knowledge and positive attitudes did not directly determine nutritional routines and practices. This is in contrast to other studies that have investigated the association between nutritional knowledge and nutritional routines among nurses, doctors and dieticians in different settings; these studies suggested that a low degree of nutritional knowledge is a predictor of poor nutritional care and practices [19,20,37]. The results from the regression analysis support our previous suggestions for an organization, management and culture that articulate and prioritize nutritional care and documentation. Evidence also suggests that no matter which healthcare professionals are employed or what their specific roles are, a truly effective workforce can only be generated by tackling organizational structures and issues [38].
The present study has some limitations. Firstly, the study design has a predictive limitation, as it is not possible to assess any cause-and-effect relationship between the parameters investigated. Furthermore, it is purely a descriptive study aiming to map current conditions in a municipality in Denmark. Secondly, the findings in this study have not been verified with a review of the respondents' documentation practice, e.g. use of screening tools and development of nutritional care plans. Thirdly, the questionnaire developed has an acceptable summarized Cronbach's alpha score of 0.86. However, the knowledge subscale had a Cronbach's alpha score of 0.56, which indicates poor internal consistency. We therefore recommend caution against using only the subscales, rather than the full questionnaire, in another primary healthcare setting before the questionnaire has been adjusted and refined. Fourthly, the low response rate may, overall, reflect a low level of interest in the topic, or that the healthcare staff do not perceive it as relevant. However, one could anticipate that those healthcare professionals who participated in this study have a higher interest in nutrition and documentation than those who did not. An analysis of non-responders in the study by Mowe et al. [19] showed that the respondent group was more interested in nutrition and found it more relevant than the non-responders. This could support the assumption that the nutritional care and documentation routines, level of knowledge and attitudes among the healthcare professionals in this municipality are in fact associated with greater variety and inconsistency than depicted in this study.
Qualitative studies elaborating on the discrepancies and differences registered in this study would be useful to conduct. An investigation of the knowledge, routines and attitudes of nutrition and documentation among registered nurses, social and health service assistants and social and health service helpers in nursing homes and home care/home nursing would give a more thorough and in-depth insight into these areas. It could then provide primary healthcare and managers/leaders with future recommendations containing specific strategies in order to increase the quality of nutritional care and documentation.
Conclusion
This is the first study to compare the routines, knowledge and attitudes regarding nutrition and documentation among registered nurses, social and health service assistants and social and health service helpers in nursing homes and home care/home nursing in a Danish municipality. This study shows that the level of nutritional knowledge, nutritional routines and documentation practices was poor in all three healthcare professional groups. The respondents showed large variations in knowledge and practices, hence complicating the transfer of accurate and relevant nutrition-related data in the patients' healthcare records and risking a loss of continuity of care and treatment as the quality of care decreases. Overall, all three groups of healthcare professionals indicated a somewhat positive attitude towards documentation and nutrition and regarded nutrition and documentation as part of their area of responsibility, although there were discrepancies in the perceived degree of responsibility among the groups of healthcare professionals.
The regression analysis conducted in this study showed that a high degree of nutritional knowledge and positive attitudes did not determine nutritional routines. This suggests that a focus on increasing healthcare professionals' nutritional knowledge may be redundant if the organizations and management do not continuously articulate and prioritize nutritional care and documentation.
An ℓ2-consistent event-triggered control policy for linear systems
In this article, we consider the design of an event-triggered ℓ2-control policy for a setting where a scheduler is arbitrating state transmissions from the sensors to the controller of a discrete-time linear system. We start by introducing a periodic time-triggered ℓ2-controller for different transmission time periods with a given ℓ2-gain bound, using the minimax game-theoretical approach. After that, we propose an ℓ2-consistent event-triggered controller in the sense that it guarantees at least the same ℓ2-gain bound as the designed periodic time-triggered ℓ2-controller, however with a larger, or at most equal, average inter-transmission time. In practice, for typical disturbances, the proposed event-triggered scheme can lead to significant gains, both in terms of communication savings and disturbance attenuation, compared to periodic time-triggered policies, which is illustrated through a numerical example.
Introduction
The advent of new communication technologies, such as 5G, will further facilitate the rapid expansion of networked control systems (NCS) in many (industrial) branches of our society in the years to come. In NCSs, sensors, controllers and actuators communicate through shared communication networks. Applications include vehicle platooning, cloud-based control, smart grids, and robot swarms. In configurations where communication between agents happens periodically, the well-developed theory of sampled-data control (Chen & Francis, 2012) can be used to guarantee stability and performance of these systems. However, periodic communication for control applications can be rather resource-inefficient. In fact, control applications require large bandwidth for high communication frequencies and, when relying on wireless technologies, can lead to a large power consumption, which can be prohibitive when using battery-powered communication devices. Therefore, managing and reducing the communication between sensors, controllers and actuators is crucial in many networked control applications.
Event-triggered controllers (ETCs) have been proposed in the literature as an alternative to periodic time-triggered controllers in order to decrease the communication load in NCSs, while at the same time preserving stability and performance requirements; see, e.g., Åström and Bernhardsson (2002), Behera et al. (2018), Heemels et al. (2012), Heemels et al. (2008), Lunze and Lehmann (2010), Molin and Hirche (2014), Nowzari et al. (2019) and Tabuada (2007) and the references therein. In a loop with an ETC, data transmissions between agents (sensors, controllers, actuators) are triggered based on well-defined events, such as abrupt changes in the value of data or when estimation errors exceed certain thresholds. A large number of studies has been carried out so far in this research area, with promising results in reducing the communication burden of the control loops; see, e.g., Antunes and Heemels (2014), Araujo et al. (2014), Mastrangelo et al. (2019), Mazo and Tabuada (2008), Postoyan et al. (2011), Weerakkody et al. (2016) and Wu et al. (2013). In some studies, ETCs are designed in order to guarantee stability of the system (Mamduhi et al., 2017; Mazo & Tabuada, 2008; Postoyan et al., 2011). Others also provide guarantees on an average quadratic cost of the event-triggered control loops (Antunes & Heemels, 2014; Araujo et al., 2014; Asadi Khashooei et al., 2018; Balaghi I. & Antunes, 2017; Balaghi I. et al., 2018; Brunner et al., 2018; Goldenshluger & Mirkin, 2017).
Another important performance criterion for control loops is the ℓ2- or L2-gain, which captures the worst-case disturbance attenuation level from an exogenous input to a performance output of the control loop, for discrete-time or continuous-time systems, respectively. In a networked control configuration with communication limitations, the setup depicted in Fig. 1 is of interest, where a feedback controller K attenuates the effect of the disturbance input w on the performance output z of the plant G. Here, a scheduler S determines the time instances when the measured state should be communicated to the controller through a communication network N. The event-triggered scheduler should be designed together with an appropriate controller to guarantee a certain ℓ2- or L2-gain bound for the closed-loop system, while the available communication network should be able to handle the required data transmissions.
In recent years, researchers took different approaches in order to design event-triggered ℓ2- or L2-controllers. In particular, conditions for the L2-stability of the proposed event-triggered transmission policies in a sampled-data control system configuration are given in Peng and Han (2013) and Yan et al. (2015), by constructing Lyapunov-Krasovskii functionals. In Kishida et al. (2017), finite-gain L2-stability is guaranteed for an uncertain linear system by jointly designing an event-triggered mechanism for updating the control inputs and a self-triggered mechanism for determining the next sampling time of the sensors. The exponential stability and L2-gain analysis of a NCS, where the sensor-to-controller and the controller-to-actuator communications are both based on event-triggered mechanisms, is studied using the delay system approach in Hu and Yue (2013). Moreover, there are some other studies establishing the L2-stability of systems with ETCs (Wang & Lemmon, 2009; Yu & Antsaklis, 2013) or providing guaranteed values for the ℓ2-gain of discrete-time linear systems with an ETC (Heemels et al., 2013). In addition, an ETC is designed for output-feedback linear systems by considering the L∞-gain of the closed loops in Donkers and Heemels (2010). For nonlinear systems, ETCs are proposed in Abdelrahim et al. (2017) and Dolk et al. (2017) that guarantee a finite Lp-gain for closed-loop systems and prevent Zeno behaviour in data transmissions.
In principle, employing an ETC in NCSs is beneficial only if it results in a better performance in comparison to time-triggered periodic control when both transmit with the same average transmission rate. This concept was first introduced in Antunes and Asadi Khashooei (2016) and referred to as consistency. In recent years, consistent ETCs in the sense of average quadratic cost have been proposed in both centralized and decentralized NCS configurations, see, e.g., Asadi Khashooei et al. (2018), Balaghi I. et al. (2018), Brunner et al. (2018) and Goldenshluger and Mirkin (2017); see also an early result for scalar systems in Åström and Bernhardsson (2002).
We can also extend the notion of consistency to event-triggered ℓ2- or L2-control loops. Accordingly, an ETC is called ℓ2- or L2-consistent if it guarantees the same ℓ2- or L2-gain bound as any periodic time-triggered ℓ2- or L2-controller, however with a smaller or at most the same average transmission rate (Balaghi I. et al., 2019). In spite of all works previously mentioned in the context of event-triggered ℓ2- or L2-control, the design of an ℓ2- or L2-consistent ETC has not received much attention so far. In fact, we are only aware of two very recent references related to our work, see Balaghi I. et al. (2019) and Mi and Mirkin (2019). Our previous work (Balaghi I. et al., 2019) differs from the present paper as it focuses on designing a fixed, a priori given, transmission sequence, and not a policy, while Mi and Mirkin (2019) derive an ETC with L2-consistency properties similar to the one we present in this paper. However, their results are given for continuous-time systems (and thus L2-gain) and, most importantly, follow a very different approach based on the Youla parametrization, whereas we consider discrete-time systems and follow a game-theoretical approach. As both results are developed independently and follow different approaches for different settings, they are of independent interest. To be precise, in this work, for a given fixed transmission time period, we design a periodic time-triggered ℓ2-controller for any feasible ℓ2-gain bound, following a game-theoretical approach.
Then, we design an ETC guaranteeing the same ℓ2-gain bound as that of the designed periodic time-triggered ℓ2-controller, however with a larger (or at least equal) average inter-transmission time. In fact, based on our proposed ETC, when the realization of the disturbance input follows the worst-case scenario, the proposed ETC triggers data transmissions periodically. However, when the disturbance input deviates from the worst-case scenario, our proposed ETC is able to skip data transmissions, thereby guaranteeing a larger average inter-transmission time than the time period of the periodic controller, while both guarantee the same ℓ2-gain bound for the system.
Implicit in the NCS of interest in the current work is that (possibly large) packets of information are sent to the controller (and there is no error in the transmitted values), and it is the objective of the scheduler to keep the number of transmissions (average communication rate) as small as possible, while guaranteeing certain performance objectives. An alternative perspective, also considered in the literature (see, e.g., Ishii & Francis, 2002), is to keep the communication frequency constant but reduce the size of the packets to be transmitted (so that there is a discrepancy between the actual measurements and the transmitted quantized values), thereby also realizing a small bit rate. The problem of interest in this line of research is to determine the accuracy (or the number of bits) of each communicated data packet in relation to a given control objective, or to find the minimal number of bits needed in order to realize a certain objective.
Although not considered in this paper, there are also some recent works where this idea is jointly employed with event-triggered transmission mechanisms, which can at the same time reduce the communication frequency, as is investigated, for instance, in Abdelrahim et al. (2019), Ling (2020) and Tallapragada and Cortés (2016), and the references therein, to achieve exponential and input-to-state stability for linear systems, respectively.
The remainder of this paper is organized as follows. The problem of interest is introduced in Section 2 and an ℓ2-consistent ETC is proposed in Section 3. The effectiveness of the novel ETC in decreasing the communication load is demonstrated through a numerical example in Section 4. Finally, Section 5 presents concluding remarks. The proofs of lemmas and theorems can be found in the Appendix.
Notation. For r, s ∈ N_0 := N ∪ {0}, we define N_r^s = {t ∈ N_0 | r ⩽ t ⩽ s} and ℓ_2^d as the Hilbert space of square-summable sequences w := {w_k}_{k∈N_0}, where w_k ∈ R^d for all k ∈ N_0 and ∥w∥²_{ℓ_2} := ∑_{k∈N_0} w_k^T w_k < ∞. Moreover, ⌊x⌋ indicates the floor of x ∈ R, and for matrices A and B, we define diag(A, B) as the corresponding block-diagonal matrix.
Problem setting
We introduce the NCS with periodic communication in Section 2.1 and the NCS with event-triggered communication in Section 2.2. The problem of interest is stated in Section 2.3.
Networked control system with periodic communication
Consider the system architecture in Fig. 1, in which the plant G is given by a discrete-time linear time-invariant (LTI) system

  x_{k+1} = A x_k + B u_k + D w_k,   (1)

where x_k ∈ R^n, u_k ∈ R^m and w_k ∈ R^d denote the state, the control input and the disturbance, respectively, at discrete time k ∈ N_0. Let w ∈ ℓ_2^d and assume that the disturbance generator at every time step has access to all the state vectors from the initial up to the current time step, i.e., to the information set

  E_k = {x_0, x_1, . . . , x_k}.   (2)

Therefore, w_k = T_k(E_k) for some mapping T_k. Let δ_k = 1 if the state is transmitted to the controller at time k ∈ N_0, and δ_k = 0 otherwise. For the periodic transmission policy with a given time period τ ∈ N, we set δ_k = π^τ_k, where

  π^τ_k = 1 if k mod τ = 0, and π^τ_k = 0 otherwise.   (3)

Then any periodic control policy can be formulated as

  u_k = R_k(F^{π_τ}_k),   (4)

where, at every k ∈ N_0,

  F^{π_τ}_k = {x_j | j ∈ N_0^k, π^τ_j = 1}   (5)

is the information set available for the controller and R_k is an arbitrary mapping. Although we use here this general definition, in practice, the periodic control policies of interest (see Lemmas 1 and 2) will only depend on the last transmitted state. Therefore, the controller does not need to store all the received state vectors in memory (which could possibly require a large memory). The goal of an ℓ2-controller is to attenuate the effect of the disturbance input w_k on the performance output of the system,

  z_k = E x_k + F u_k,   (6)

where we assume that F has full column rank.
Let E^T E = Q and, without loss of generality, we can now assume that F^T F = I and E^T F = 0. We need the following assumption and the definition of global asymptotic stability in the sequel.

Assumption 1. The pair (A, B) is stabilizable and the pair (E, A) is detectable.

Note that these assumptions are rather standard in the ℓ2-control context (see also Point 3 after Theorem 1).
Definition 1 (Global Asymptotic Stability, Aliyu, 2017). The system (1) with w = (0, 0, . . .) and a given control input policy is said to be globally asymptotically stable (at the equilibrium point x_e = 0) if (i) the control loop is Lyapunov stable, i.e., for every ζ > 0 there exists a δ > 0 such that for all initial states x_0 ∈ R^n with ∥x_0∥ ⩽ δ, it holds that ∥x_k∥ ⩽ ζ for every k ∈ N_0, and (ii) the corresponding state trajectory x_k converges to x_e = 0 as time goes to infinity, i.e., lim_{k→∞} x_k = 0. □

Next, we formally define the concept of a τ-periodic ℓ2-controller for the system (1).
Definition 2 (τ-Periodic ℓ2-Controller). Given γ ∈ R_{≻0} and τ ∈ N, a control policy (4) for the system (1) and (6), where F^{π_τ}_k follows (5), such that (i) the closed-loop control system given by (1) and (4) is globally asymptotically stable when w = (0, 0, . . .), and (ii) when x_0 = 0,

  ∥z∥²_{ℓ_2} ⩽ (γ² − ϵ) ∥w∥²_{ℓ_2}   (7)

holds for all w ∈ ℓ_2^d and some positive ϵ (independent of w), is referred to as a τ-periodic ℓ2-controller with ℓ2-gain bound γ. Moreover, the infimum value of γ ∈ R_{≻0} for which a τ-periodic ℓ2-controller exists with ℓ2-gain bound γ is called the infimal ℓ2-gain of (1) and (6), and is denoted by γ*_τ. □

Let us define

  J := ∥z∥²_{ℓ_2} − γ² ∥w∥²_{ℓ_2}   (8)

for all w ∈ ℓ_2^d. Based on (7), when w = (0, 0, . . .), the designed τ-periodic ℓ2-controller should result in J being equal to or less than zero for x_0 = 0. However, when w ≠ (0, 0, . . .), J should always be strictly less than zero. Before designing a τ-periodic ℓ2-controller with ℓ2-gain bound γ, one should evaluate the existence of such a controller for the given value of γ ∈ R_{≻0}, i.e., decide if γ ≻ γ*_τ. However, for a given γ ∈ R_{≻0}, a τ-periodic ℓ2-controller with ℓ2-gain bound γ exists if and only if, for x_0 = 0, the following minimax optimization problem results in a non-positive value, i.e., J* ⩽ 0, where

  J* := min_{R} max_{T} J.   (9)

This can be concluded from the arguments in Başar and Bernhard (2008). Therefore, the infimal ℓ2-gain of the closed-loop system with τ-periodic transmission is the infimum value of the set of γ ∈ R_{≻0} for which the minimax problem (9) has a non-positive value. Moreover, if for a given γ ∈ R_{≻0}, J* is non-positive, then the optimal control policy determined based on (9) is a τ-periodic ℓ2-controller in the sense of Definition 2. In the following two lemmas, we provide a τ-periodic ℓ2-controller with ℓ2-gain bound γ by solving the minimax problem (9). Lemma 1 considers the special case τ = 1 and Lemma 2 provides the results for general τ ∈ N.
Lemma 1 (1-Periodic ℓ2-Controller). Let Assumption 1 hold. Then

(i) there exists a γ̄ ∈ R_{≻0} such that for every γ ≻ γ̄, the Riccati equation

  M = Q + A^T M H^{−1} A,  where H = I + (B B^T − γ^{−2} D D^T) M,   (10)

has a positive definite solution M and γ²I − D^T M D ≻ 0. Moreover, the infimum value of γ for which the above holds coincides with the infimal ℓ2-gain γ*_1;

(ii) for any γ ≻ γ*_1, the control policy (11), with the feedback gain K given by (12), is a 1-periodic ℓ2-controller with ℓ2-gain bound γ; it yields the upper bound (13) on J, which is attained by the worst-case disturbance policy (14).

We can easily investigate the condition with an unknown initial condition by adding one extra time step to the time horizon and considering the initial condition as the disturbance of the previous time step (Başar & Bernhard, 2008).
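A numerical sketch of the feasibility test in part (i) is given below; it iterates the reconstructed Riccati equation (10) to a fixed point and checks the positive definiteness of M and of γ²I − DᵀMD. The system matrices are invented for illustration, and the simple fixed-point iteration is one possible solver, not necessarily the method implied by the lemma.

```python
import numpy as np

def riccati_gain_check(A, B, D, Q, gamma, iters=5000, tol=1e-10):
    """Fixed-point iteration of the Riccati equation (10),
        M = Q + A^T M H^{-1} A,   H = I + (B B^T - gamma^{-2} D D^T) M,
    followed by the checks M > 0 and gamma^2 I - D^T M D > 0."""
    n = A.shape[0]
    M = Q.copy()
    for _ in range(iters):
        H = np.eye(n) + (B @ B.T - gamma**-2 * D @ D.T) @ M
        M_next = Q + A.T @ M @ np.linalg.solve(H, A)
        if not np.all(np.isfinite(M_next)):
            return False, M  # iteration diverged: gamma is infeasible
        if np.max(np.abs(M_next - M)) < tol:
            M = M_next
            break
        M = M_next
    gap = gamma**2 * np.eye(D.shape[1]) - D.T @ M @ D
    ok = (np.all(np.linalg.eigvalsh((M + M.T) / 2) > 0)
          and np.all(np.linalg.eigvalsh((gap + gap.T) / 2) > 0))
    return ok, M

# Illustrative (not from the paper) second-order plant
A = np.array([[1.1, 0.2], [0.0, 0.9]])
B = np.array([[0.0], [1.0]])
D = np.array([[0.1], [0.1]])
Q = np.eye(2)
for gamma in (0.5, 2.0, 10.0):
    feasible, _ = riccati_gain_check(A, B, D, Q, gamma)
    print(f"gamma = {gamma}: feasible = {feasible}")
```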
Before considering the general case in Lemma 2, where τ ∈ N, let us introduce another time variable ι ∈ N_0, where ι = ⌊k/τ⌋, and define the following augmented control and disturbance inputs at every ι ∈ N_0:

  ū_ι := (u_{ιτ}, . . . , u_{(ι+1)τ−1}),  w̄_ι := (w_{ιτ}, . . . , w_{(ι+1)τ−1}).   (15)

Lemma 2 (τ-Periodic ℓ2-Controller for τ ∈ N). Let Assumption 1 hold. Then

(i) there exists a γ̄ ∈ R_{≻0} such that for every γ ≻ γ̄, (10), written in terms of the lifted matrices (A_τ, B_τ, D_τ) of the τ-step system associated with the augmented inputs, has a positive definite solution M_τ and γ²I − D_τ^T M_τ D_τ ≻ 0.
(ii) for any γ ≻ γ*_τ, the control policy in which K follows (12) and u*_k follows (16) for all k ∈ N_0, and in which Φ_τ := Y_0 for all τ ∈ N with Y_0 determined by a backward iteration over h ∈ N_0^{τ−1}, is a τ-periodic ℓ2-controller with ℓ2-gain bound γ.

Lemmas 1 and 2 do not only provide a τ-periodic ℓ2-controller for (1), (6), and a given τ ∈ N, but also introduce upper bounds for J in (13) and (21) that will be useful in the design of a NCS with event-triggered communication in Section 3, as we will see.
Remark 1. It is important to mention that Lemmas 1 and 2 still hold without any change if at every time-step k ∈ N_{ιτ}^{(ι+1)τ−1} the disturbance generator also has access to the control inputs from k up to (ι + 1)τ − 1, i.e., the information set available for the disturbance generator follows (24). Moreover, the disturbance input policies w*_k given in (14) and (19) are the worst-case disturbance scenarios when the disturbance generator has access to the information set (24) at all times.
Networked control system with event-triggered communication
The NCS we are interested in has the same plant G as in (1), and the information set of the disturbance generator also follows (2) (or (24)). However, data transmission to the controller follows a state-dependent mechanism, called an event-triggered transmission policy, which we can formulate as in (25), where H_k is the information set available for the scheduler at k ∈ N_0. Then, any appropriate control policy is defined as in (27), where F^μ_k is the information set available for the controller at k ∈ N_0 under an event-triggered scheduling policy defined in (25), and R^μ denotes the corresponding control policy. Similarly to the periodic control case, in practice, the event-triggered scheduling and control policies of interest (see, e.g., the proposed one in Section 3) will only depend on a few members of H_k and F^μ_k, respectively. Therefore, the controller does not need to store all the received state vectors (and thus does not need a large memory). We call an event-triggered scheduler together with its related controller an ETC, denoted by η = (μ, R^μ). Furthermore, we introduce the average transmission rate associated with an event-triggered scheduling policy μ and a disturbance sequence w ∈ ℓ^d_2 as f̄_η(w) = lim sup_{T→∞} (1/T) Σ^{T−1}_{t=0} μ_t(H_t), and the average inter-transmission time as Ω̄_η(w) = 1/f̄_η(w). Next, we define the concept of an event-triggered ℓ2-controller.
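Over a finite simulation horizon, these asymptotic quantities reduce to empirical averages of the binary scheduling decisions; a minimal sketch (our own helper names):

```python
import numpy as np

def empirical_rate(mu):
    """Finite-horizon version of the average transmission rate f_eta(w),
    for a record mu = (mu_0, mu_1, ...) of decisions in {0, 1}."""
    return float(np.mean(mu))

def empirical_intertransmission_time(mu):
    """Finite-horizon version of Omega_eta(w) = 1 / f_eta(w)."""
    return 1.0 / empirical_rate(mu)

# A scheduler that transmits every other step has rate 0.5 and
# average inter-transmission time 2:
assert empirical_intertransmission_time([1, 0, 1, 0, 1, 0]) == 2.0
```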
Definition 3 (Event-Triggered ℓ2-Controller; Yan et al., 2015). Given γ ∈ R_{≻0}, an ETC η = (μ, R^μ) for the system (1) and (6) satisfying (i) the closed-loop control system (1) and (27) is globally asymptotically stable when w = (0, 0, . . .), and (ii) under the assumption of zero initial condition, J ⩽ 0 for all w ∈ ℓ^d_2, where J follows (8), is referred to as an event-triggered ℓ2-controller with ℓ2-gain bound γ. Moreover, the infimum value of γ ∈ R_{≻0} for which (i) and (ii) hold for an ETC η designed for (1) and (6) is called the infimal ℓ2-gain of the event-triggered control loop and is denoted by γ*_η. □

Remark 2. One could define condition (ii) in Definition 2 exactly in the same way as in Definition 3. However, in this case, γ²I − D̄_τ^T M̄_τ D̄_τ ⩾ 0 would be the necessary condition for the existence of a τ-periodic ℓ2-controller for a given ℓ2-gain bound γ and τ ∈ N, while γ²I − D̄_τ^T M̄_τ D̄_τ ≻ 0 is the sufficient condition for the existence of the proposed τ-periodic ℓ2-control policies in (11) and (16). The current condition (ii) of Definition 2 is important to establish γ²I − D̄_τ^T M̄_τ D̄_τ ≻ 0 as both the necessary and the sufficient condition for the existence of a τ-periodic ℓ2-controller for a given ℓ2-gain bound γ and τ ∈ N. It is also important to mention that, based on Definitions 2 and 3, event-triggered ℓ2-controllers have to satisfy a weaker condition than τ-periodic ℓ2-controllers. However, since ϵ in Definition 2 is allowed to be arbitrarily small, this difference in definitions is negligible.
Problem statement
The τ-periodic ℓ2-controllers with ℓ2-gain bound γ determined in Lemmas 1 and 2 periodically update their state estimates based on the full-state measurements of the sensors. In this way, these controllers can guarantee a desired disturbance attenuation level γ for all disturbance inputs. For every τ-periodic ℓ2-controller given in Lemmas 1 and 2, we can propose an event-triggered ℓ2-controller counterpart η, which guarantees the same disturbance attenuation level γ for the system. However, based on the realization of the disturbance inputs, its scheduler can skip some of the periodic data transmissions needed by the τ-periodic ℓ2-controller, thereby requiring fewer transmissions and thus resulting in larger (or equal) values of Ω̄_η(w) in comparison to τ. This ETC is called ℓ2-consistent according to the following definition.
Definition 4 (ℓ2-Consistent ETC). Given τ ∈ N and γ ≻ γ*_τ, an event-triggered ℓ2-controller η = (μ, R^μ) with ℓ2-gain bound γ for (1) and (6) is called ℓ2-consistent (with respect to the τ-periodic ℓ2-controller) if Ω̄_η(w) ⩾ τ for all w ∈ ℓ^d_2. □
The goal of this work is to propose an ℓ2-consistent ETC for the NCS depicted in Fig. 1.
ℓ2-consistent event-triggered controller
We propose an ℓ2-consistent ETC in this section. For simplicity we start, in Section 3.1, with the case in which τ = 1, since the main ideas can already be conveyed for this case. In Section 3.2, we consider the general case in which τ ∈ N.
The case τ = 1

We know that the control policy (11) requires the state information at every time-step. However, in our desired ETC setting, the controller does not have the state information at all times and can, therefore, only use an estimate x̂_{k|k} of the state x_k at time k ∈ N_0. In particular, we select the controller associated with our desired ℓ2-consistent ETC policy as in (29), where K is given as in (12) and x̂_{k|k} is the state estimate in the controller. We propose three state estimators. Two of them are described by (30) at all k ∈ N for N ∈ {I, A}, with x̂_{0|0} = x_0. The choice N = I boils down to keeping the estimated state constant if data is not transmitted to the controller, and N = A boils down to updating the estimated state based on the system dynamics while ignoring the effects of both the control input and the disturbance. The third estimator, given in (31) at all k ∈ N with x̂_{0|0} = x_0, is another (possibly more reasonable for some special disturbance inputs) state estimator in the controller. In the following theorem, we propose an event-triggered scheduling policy which, together with (29) and (30) (or (31)), results in an ℓ2-consistent ETC in the sense of Definition 4.
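Before turning to the scheduling policy itself, the estimator updates can be made concrete in code. The sketch below is our own; in particular, the "AB" variant is only our reading of the third estimator (31), which is not reproduced here:

```python
import numpy as np

def state_estimate(x_hat_prev, x_k, transmitted, A, B=None, u_prev=None, mode="A"):
    """Update of the controller's estimate x_hat_{k|k}.

    mode "I" : hold the previous estimate (N = I in (30)).
    mode "A" : propagate the plant dynamics, ignoring control input and
               disturbance (N = A in (30)).
    mode "AB": hypothetical rendering of (31): propagate the dynamics
               including the applied control input (our assumption).
    """
    if transmitted:                       # mu_k = 1: the scheduler sent x_k
        return np.array(x_k, dtype=float)
    if mode == "I":
        return x_hat_prev
    if mode == "A":
        return A @ x_hat_prev
    if mode == "AB":
        return A @ x_hat_prev + B @ u_prev
    raise ValueError(f"unknown estimator mode: {mode}")
```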
1: Based on the event-triggered scheduling policy (32), a deviation of the actual disturbance inputs from the worst-case disturbance scenario given by (14) acts as a ''reward'' that allows the scheduler to skip data transmissions and let the control inputs deviate from the ones determined for the 1-periodic ℓ2-controller in (11). This reward can counteract the penalty incurred by skipping data transmissions (as then u_k ≠ u*_k). This is the main intuition behind the proposed ETC in Theorem 1. Moreover, as can easily be concluded, if the disturbance inputs follow w_k = w*_k for all k ∈ N_0, where {w*_k | k ∈ N_0} can be seen as a worst-case disturbance input, then the proposed event-triggered scheduling policy (32) always triggers data transmissions, i.e., μ_k(H_k) = 1 at all k ∈ N_0, unless û_k = u*_k, which is typically not the case.
2: We can show that for the 1-periodic ℓ2-controller determined in Lemma 1, and for all γ ≻ γ*_1, J* = x^T_0 Mx_0, where M is given in Lemma 1. As we select a smaller value for γ, M becomes larger (in the sense of the partial ordering of positive definite matrices). Furthermore, based on (10), we can conclude that MH^{−1} will also become larger. Consequently, Φ_1 becomes larger and Ψ_1 becomes smaller. Therefore, the scheduling law (32) is expected to trigger more transmissions for smaller values of γ and the same disturbance input sequence w.
3: In order to evaluate the event-triggered condition (32) at every time k ∈ N_0, the scheduler needs the values of the past disturbance inputs, which can be calculated by using the system dynamics (1), given that the scheduler receives x_k at every k ∈ N_0 and knows the control policy, from which the control inputs u_{k−1} can be replicated. Therefore, (ii) in Assumption 1 helps to calculate the values of the disturbance inputs, needed in the event-triggered scheduling policy, based on the state measurements, and there is no need to measure them independently.
General case τ ∈ N
For any γ ≻ γ*_τ, the τ-periodic ℓ2-controller (16) requires periodic state transmission every τ ∈ N time-steps. In this section, however, we propose an event-triggered ℓ2-controller with ℓ2-gain bound γ ≻ γ*_τ which can skip data transmissions at some of these time-steps. Let us introduce the augmented control policy associated with our desired ETC as in (33), where K̄_τ = B̄_0, in which B̄_0 is determined based on (23). Similarly to the previous section, we can either have (34) or (35) as the state estimator in the controller for all ι ∈ N, with x̂_{0|0} = x_0, depending on the characteristics of the disturbance input. In the following theorem, we propose an event-triggered scheduler which, together with (33) and (34) (or (35)), results in an ℓ2-consistent ETC based on Definition 4.
The ℓ2-consistency of the proposed ETC policies in Theorems 1 and 2 indicates that, for the same γ ∈ R_{≻0} as the disturbance attenuation level, where γ ≻ γ*_τ, in case the disturbance input does not follow the worst-case scenario given by (19), the event-triggered scheduler can skip some of the data transmissions required by the τ-periodic controller (16), resulting in a larger average inter-transmission time than τ while guaranteeing the same ℓ2-gain bound γ. Moreover, for the proposed ETC, the behaviour of the average inter-transmission time with respect to γ when γ ≻ γ*_τ is not necessarily increasing; it highly depends on the actual disturbance input of the system. This can be clearly seen in Fig. 3, corresponding to a numerical example.
Numerical example
Consider a scalar system where A = 1.1, B = 1, D = 1 are the parameters of the linear model (1), and Q = 1. Moreover, we take w_k = e^{−k/200} sin(k/25), k ∈ N_0, as the unknown disturbance input of the system. The infimal ℓ2-gains of the system for periodic control with inter-transmission time-steps τ ∈ {1, 2, 3, 4} are γ*_1 = 1.487, γ*_2 = 2.202, γ*_3 = 2.999, and γ*_4 = 3.871. According to Lemmas 1 and 2, for any τ ∈ N and γ ≻ γ*_τ, we can design a τ-periodic ℓ2-controller with ℓ2-gain bound γ. Then, based on Theorem 1 or 2, we can design its ℓ2-consistent ETC counterpart for this system. Based on Definition 4, the proposed ℓ2-consistent ETC results in the same attenuation level (ℓ2-gain bound) as the PTC (11) or (16), however with a smaller (or at most equal) average transmission rate. We will show that, for the given system and disturbance input, it is even possible to achieve smaller disturbance attenuation levels by following the proposed ETC in comparison to the PTC (11) or (16), while both have the same average transmission rate.
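For this scalar example, the Riccati equation (10) reduces to a quadratic equation in M, so γ*_1 can be located by bisecting on the feasibility condition of Lemma 1 (γ² − D²M ≻ 0). The following sketch (our own code) reproduces γ*_1 ≈ 1.487:

```python
import numpy as np

A, B, D, Q = 1.1, 1.0, 1.0, 1.0

def riccati_M(gamma):
    """Positive root of c*M^2 + (1 - Q*c - A^2)*M - Q = 0, the scalar form of
    M = Q + A*M*A / (1 + (B^2 - gamma^-2 D^2) M), cf. Eq. (10)."""
    c = B**2 - D**2 / gamma**2
    b = 1.0 - Q * c - A**2
    return (-b + np.sqrt(b**2 + 4.0 * c * Q)) / (2.0 * c)

def feasible(gamma):
    M = riccati_M(gamma)
    return M > 0 and gamma**2 - D**2 * M > 0   # gamma^2 I - D^T M D > 0

lo, hi = 1.01, 5.0           # gamma > 1 keeps the quadratic coefficient positive
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if feasible(mid) else (mid, hi)
print(round(hi, 3))          # -> 1.487
```

The values γ*_2, γ*_3, γ*_4 follow analogously from the lifted τ-step system, which we do not spell out here.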
Firstly, we consider τ = 1 and design an ETC based on Theorem 1. The controller follows (29), where the state estimate in the controller is determined based on (30) for N = A. Fig. 2(a) shows the state trajectory related to the ETC when τ = 1 and γ = 1.630 ≻ γ*_1 is its corresponding ℓ2-gain bound. This ETC results in Ω̄_η(w) = 2.033, where Ω̄_η(w) denotes the average inter-transmission time of the scheduler. However, if the scheduler triggers transmissions periodically with τ = 2, then we know that the infimal ℓ2-gain of the system with periodic control is γ*_2 = 2.202. The state trajectory of the periodic controller (16) for γ = γ*_2 + ϵ, where ϵ ≻ 0 is a small real number, and τ = 2 is shown in Fig. 2(a), which indicates the better disturbance attenuation of the ETC while both have almost the same average transmission rate for the given disturbance input w. Fig. 2(b) compares similar situations when the ETC is designed for τ = 2 and γ = 2.924 based on Theorem 2; the controller follows (33), where the state estimate in the controller is determined based on (34) for N = A². Again, for this system, the ℓ2-gain bound of the system with the ETC is significantly smaller than the minimum value that results from periodic control, while both have almost the same average inter-transmission time for the given disturbance input w. Fig. 3 is more generic and illustrates the ℓ2-consistency of the proposed ETC when τ = 1 and τ = 2. For the given disturbance input w, we find the average inter-transmission time of the system with the ℓ2-consistent ETC designed for different values of γ ≻ γ*_τ, a time horizon of 250, and zero initial condition. The solid line shows the trade-offs one can achieve by following a periodic time-triggered control strategy. We easily see the better trade-offs for the proposed ℓ2-consistent ETC in comparison to PTCs. In principle, based on the theory (Theorems 1 and 2), for every τ ∈ N and γ ∈ [γ*_τ, γ*_{τ+1}) the trade-offs for the proposed ETC (36) and (33) (or (32) and (29) when τ = 1) are guaranteed to lie below (or at most on) the staircase curve of the PTCs for any linear system (1) and disturbance input w.
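One can also replay the τ = 1 closed loop under the given disturbance and check that the empirical gain stays below the design value γ. The sketch below is our own and assumes the standard quadratic performance weighting x_k^T Q x_k + ∥u_k∥², which is an assumption on our part since (8) is not reproduced here:

```python
import numpy as np

A, B, D, Q, gamma = 1.1, 1.0, 1.0, 1.0, 1.630

# Scalar Riccati solution and gain (see the bisection sketch above)
c = B**2 - D**2 / gamma**2
b = 1.0 - Q * c - A**2
M = (-b + np.sqrt(b**2 + 4.0 * c * Q)) / (2.0 * c)
K = -B * M * A / (1.0 + c * M)                 # u_k = K x_k, cf. (11)-(12)

k = np.arange(2000)
w = np.exp(-k / 200.0) * np.sin(k / 25.0)      # the disturbance of the example

x, cost, supply = 0.0, 0.0, 0.0
for wk in w:
    u = K * x
    cost += Q * x**2 + u**2                    # assumed output weighting, cf. (8)
    supply += wk**2
    x = A * x + B * u + D * wk

print(np.sqrt(cost / supply))                  # empirical gain; below gamma = 1.630
```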
Conclusions
In this work, we investigated the design of event-triggered controllers (ETCs) for discrete-time linear systems, considering the ℓ2-gain as the performance criterion of the closed-loop system. Firstly, for every transmission time period, we determined a periodic ℓ2-controller for a given ℓ2-gain bound, following a game-theoretical approach. Then, we introduced the notion of ℓ2-consistency, which refers to any ETC that guarantees the same ℓ2-gain bound as the designed periodic ℓ2-controller, but with a larger or equal average inter-transmission time. Next, we proposed the design of an ℓ2-consistent ETC with some interesting features. When the disturbance input follows the worst-case scenario at every time, the scheduler triggers transmissions periodically in order to guarantee an ℓ2-gain bound for the system. However, when the disturbance input is not equal to the worst-case scenario, the ℓ2-gain bound of our designed ETC is still guaranteed and equal to that of the designed periodic ℓ2-controller, but with a (significantly) larger average inter-transmission time. Possible directions for future work include considering linear plants with partial state information, non-linear plants, and data bit rate constraints.
A.1. Proof of Lemma 1
Parts i and ii can be proved by the arguments in Theorem 3.8 of Başar and Bernhard (2008), in which the stabilizability of (A, B) and the observability of (Q^{1/2}, A) are used to guarantee the existence of a γ̄ ∈ R_{≻0} such that for all γ ≻ γ̄ the Riccati equation (10) has a positive definite solution M. They are also proved (by setting τ = 1) in a more general setting in Lemma 2. The only point that is not proved in Başar and Bernhard (2008) is the Lyapunov stability of the control loop when w = (0, 0, . . .), which we postpone to the end of the present proof.
By completing the squares first for w_k and then for u_k in J(x_k, x_{k+1}), we obtain the decomposition (A.1), where w*_k is the worst-case disturbance policy in (14) and u*_k = −B^T MH^{−1}Ax_k. By using the matrix inversion lemma (Henderson & Searle, 1981, equation (18)), we can show an equivalent expression for Φ^{−1}_1. Then, by summing all the values of J(x_k, x_{k+1}) over k ∈ N^ν_0 for an arbitrary ν ∈ N, and letting ν → ∞, we conclude that J ⩽ x^T_0 Mx_0. Since M is a positive definite matrix and x_0 = 0, we obtain J ⩽ 0, which proves part iii. Now we need to prove the global asymptotic stability of the control loop when w = (0, 0, . . .) and the control input follows (11). We take V(x_k) = x^T_k Mx_k as the Lyapunov function candidate, where M is a positive definite solution of (10). Considering ΔV_k := V(x_{k+1}) − V(x_k), then based on (A.1), for w = (0, 0, . . .) and u_k = u*_k, we have ΔV_k ⩽ 0 at every k ∈ N_0. Therefore, the control loop is Lyapunov stable. Moreover, based on the observability of (Q^{1/2}, A), it can be shown that the state of the control loop converges to zero as time goes to infinity when w = (0, 0, . . .) and u_k = u*_k at every k ∈ N_0; see Başar and Bernhard (2008, page 62). Thus, the system is globally asymptotically stable.
A.2. Proof of Lemma 2
(i) Necessary and sufficient conditions for the existence of a τ-periodic ℓ2-controller. According to Theorem 3.8 of Başar and Bernhard (2008), taking into account the stabilizability of (A, B) and the observability of (Q^{1/2}, A), there exists a γ̄_1 ∈ R_{≻0} such that for all γ ≻ γ̄_1 the Riccati equation (10) has a positive definite solution M. Moreover, it is clear that there exists γ̄_2 ≻ 0 such that for all γ ≻ γ̄_2, γ²I − D̄_τ^T M̄_τ D̄_τ ≻ 0 holds. Then we can take γ̄ := max{γ̄_1, γ̄_2}, which establishes the first assertion. Now, to prove the second assertion in statement i, we resort to an argument in Başar and Bernhard (2008), which indicates that the conditions needed to find a controller satisfying part ii of Definition 2 are the same as the conditions needed to have J* ⩽ 0, where J* is given in (9). Solving the minimax optimization problem in (9) is equivalent to finding an appropriate value function V(x_ℓ) such that the Isaacs equation holds for every ℓ = ιτ ∈ N_0 and every x_ℓ ∈ R^n (Başar & Olsder, 1999, Corollary 6.2). In this minimax game, the information structures of the two players are periodic, with given, generally unequal, time periods. As a result of Theorem 6.9 in Başar and Olsder (1999), the value function for this two-player zero-sum minimax game is V(x_ℓ) = x^T_ℓ Mx_ℓ, where M is the positive definite solution of (10). Therefore, we have to find the conditions under which the corresponding equality (A.3) always holds for every x_ℓ ∈ R^n. In order to solve the optimization problem in (A.3) for τ ∈ N, we first need to follow τ maximization steps and determine the worst-case disturbance inputs.
By substituting (1) into the above equation, we obtain the inner maximization in closed form, where a bounded w*_s exists if γ²I − D^T MD ≻ 0. Therefore, the optimal game value at time s is a function of x_s and u_s. Now, by an induction argument, let us assume that at an arbitrary optimization step h + 1 ∈ N^{τ−1}_1 the optimal game value is a quadratic form with known positive definite matrices Θ_{h+1} and Y_{h+1}, where r = ℓ + h and Û_{r+1} = [u^T_{r+1}, . . . , u^T_{ℓ+τ−1}]^T is the augmented control input. Then, by substituting (1) into the above equation, we obtain the backward recursions for w*, Θ_h, and Y_h in (A.7), with E_h = γ^{−2}DD^T V^{−1}_h. Moreover, Ĵ_h takes the same form as the one assumed for Ĵ_{h+1}, and therefore the quadratic form assumed for Ĵ_h is correct. Finally, at the optimization step ℓ, after determining w*_ℓ, the optimal game value follows Ĵ_0(x_ℓ, Û_ℓ).

A.3. Proof of Theorem 1

Based on the event-triggered policy (32), there is a state transmission to the controller at k = 0, therefore u_0 = u*_0. Moreover, we can split the summation in the above equation into two summations, where j ∈ N_0 represents the number of transmissions and s_j is the time at which the j-th transmission happens. Then, if u_k at all k ∈ N_0 follows (29) under the data transmission scheduling policy (32), we obtain the corresponding bound for û_i = Kx̂_{i|i} and ŵ*_i = S(Ax_i + Bû_i). Based on the scheduling policy (32), and the fact that at data transmission times û_{s_j} = u*_{s_j}, the required inequality holds at every ν ∈ N_0. Moreover, based on the event-triggered scheduling law (32), when w = (0, 0, . . .), we have ΔV(x_ν) ⩽ 0 at every ν ∈ N_0, which indicates the Lyapunov stability of the control loop for the proposed ETC when w = (0, 0, . . .). Then, similarly to the proof of Lemma 1, the observability of (Q^{1/2}, A) and the boundedness of the performance index (8), i.e., J ⩽ x^T_0 Mx_0, guarantee the convergence of the state to zero as time goes to infinity. Therefore, we can conclude the global asymptotic stability of the control loop.
Therefore, the proposed ETC is an ℓ2-consistent ETC according to Definitions 3 and 4.
A.4. Proof of Theorem 2
The proof procedure is similar to the one presented for Theorem 1. However, here, in order to satisfy J ⩽ 0 for all w ∈ ℓ^d_2, where J is given in (8), we need to consider (21), and to simplify the disturbance input as given in (19) when the control policy follows (33).
Fig. 1. The state-feedback ℓ2-controller with a resource-constrained communication network. G, K, S, and N refer to the plant, the controller, the scheduler, and the network, respectively.
Fig. 2. Illustration of the improved performance of the proposed ETC in comparison to the periodic time-triggered controller (PTC) in disturbance attenuation with the same average transmission rate (for the given disturbance input w), when (a) the ETC is designed for τ = 1 and the PTC for τ = 2, and (b) the ETC is designed for τ = 2 and the PTC for τ = 4.
Fig. 3. Trade-offs between the ℓ2-gain bound γ and the average inter-transmission time, for the proposed ℓ2-consistent ETC and for PTCs, with τ = 1 and τ = 2.
Bulk viscosity in kaon-condensed color-flavor locked quark matter
Color-flavor locked (CFL) quark matter at high densities is a color superconductor, which spontaneously breaks baryon number and chiral symmetry. Its low-energy thermodynamic and transport properties are therefore dominated by the H (superfluid) boson and the octet of pseudoscalar pseudo-Goldstone bosons, of which the neutral kaon is the lightest. We study the CFL-K^0 phase, in which the stress induced by the strange quark mass causes the kaons to condense, and there is an additional ultra-light "K^0" Goldstone boson arising from the spontaneous breaking of isospin. We compute the bulk viscosity of matter in the CFL-K^0 phase, which arises from the beta-equilibration processes K^0 <-> H+H and K^0+H <-> H. We find that the bulk viscosity varies as T^7, unlike the CFL phase where it is exponentially Boltzmann-suppressed by the kaon's energy gap. However, in the temperature range of relevance for r-mode damping in compact stars, the bulk viscosity in the CFL-K^0 phase turns out to be even smaller than in the uncondensed CFL phase, which already has a bulk viscosity much smaller than all other known color-superconducting quark phases.
small compared to the energy gaps of the fermions, and therefore the low-energy properties of the CFL phase can be described within an effective theory for the Goldstone bosons [26,27,28]. We shall make use of this theory in this paper. If m_s²/µ_q is large enough, it is expected that the lightest pseudo-Goldstone bosons, namely the neutral kaons, condense. The CFL phase with K^0 condensation is called the "CFL-K^0" phase. Goldstone bosons in CFL and their condensation have also been studied in a different approach, using a Nambu-Jona-Lasinio model [29,30,31,32,33,34].
Besides giving rise to masses for the meson octet, a nonzero strange quark mass induces a mismatch in the Fermi momenta of the quarks that form Cooper pairs in the CFL phase. In fact, the strange mass induces a mismatch in any possible spin-zero color-superconducting phase [35]. This means that at lower densities the particularly symmetric CFL phase may be replaced by a less symmetric pairing pattern. It is currently not known whether, going down in density, the CFL phase is superseded by nuclear matter or by a different, more exotic, color-superconducting phase. Candidate color-superconducting phases have Cooper pairs with nonzero angular momentum [36,37,38,39] or nonzero momentum [40,41,42]. In this paper, we shall only consider the CFL and CFL-K^0 phases.
The paper is organized as follows. In Sec. II we give a brief overview of the properties of the CFL-K^0 phase; in particular, we present the low-energy excitations at finite temperature. As an application, we discuss the resulting specific heat of the system in Sec. II E. The calculation of the bulk viscosity is presented in Sec. III. We define the bulk viscosity in Sec. III A. In Secs. III B and III C we collect the ingredients needed for the bulk viscosity, namely the kaon density and susceptibility and the rate of the processes K^0 ↔ H + H and K^0 + H ↔ H. We put these ingredients together in Sec. III D to obtain the result for the bulk viscosity, and give our conclusions in Sec. IV.
II. LOW-ENERGY MODES IN THE CFL-K^0 PHASE
In this section, we briefly summarize the theoretical description and physical properties of the Goldstone bosons in the CFL-K^0 phase. More details can be found in the references given in the text.
A. Chiral Lagrangian
We denote the meson nonet in the CFL phase, associated with chiral symmetry breaking, by the field given in Eq. (1), where θ is an element of the Lie algebra of U(3). The physics of the mesons is described by the Lagrangian (2) [26,27]. There is an effective chemical potential, given by the "gauge field" in Eq. (3), where Q = diag(2/3, −1/3, −1/3) and M̄ = diag(m_u, m_d, m_s) are the electric charge and quark mass matrices in flavor space, and µ_Q is the chemical potential associated with electric charge. Moreover, from matching calculations at asymptotically large densities we know the expressions collected in Eq. (4), in particular f_π² = (21 − 8 ln 2)/18 · µ_q²/(2π²), where ∆, the fermionic energy gap at zero temperature, enters the meson mass coefficient a. It turns out that the neutral and charged kaons are the lightest mesons. They carry the flavor quantum numbers K^0 ∼ s̄d, K̄^0 ∼ d̄s, K^+ ∼ s̄u, K^− ∼ ūs. In contrast to the usual mesons, however, they are composed of four quarks, with the structure q̄q̄qq as opposed to q̄q. The zero-temperature kaon masses and effective chemical potentials are deduced from the Lagrangian and are given in Eq. (5). We see that with m_d slightly larger than m_u, the neutral kaon is slightly lighter than the charged kaon. Moreover, electric neutrality disfavors the presence of charged kaons. Therefore, in most of the remainder of the paper we shall ignore the charged kaons. In the following, for notational convenience, we denote the neutral kaon chemical potential and mass simply by µ and m, respectively, Eq. (6). Condensation of the neutral kaons occurs if µ > m. Because of the large uncertainty in the quark masses and in the dimensionless quantity a, it is not clear whether this condition is fulfilled for densities present in the interior of a compact star. Using the high-density expressions in Eq. (4), and inserting a quark chemical potential µ_q ≃ 500 MeV and an energy gap ∆ ≃ 30 MeV, we estimate f_π ≃ 100 MeV and a ≃ 0.01. With m_d ≃ 7 MeV and m_u ≃ 4 MeV we thus obtain a kaon chemical potential µ ≃ 20 MeV and a kaon mass m ≃ 4 MeV. These values suggest that the kaons are condensed, and thus that the relevant phase to consider is the CFL-K^0 phase. It has been argued that for sufficiently large values of the parameter m_s²/µ_q the CFL-K^0 phase is modified to the so-called curCFL-K^0 phase, which is anisotropic and exhibits counter-propagating currents from the kaon condensate and the ungapped fermions [43,44,45]. Because of the presence of ungapped fermions, the transport properties of this phase can be expected to be very different from those in the CFL-K^0 phase, and similar to unpaired quark matter. In this paper, we do not consider the possibility of a kaon current but rather focus on the isotropic CFL-K^0 phase, where all fermions are gapped.
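As a quick arithmetic check of these numbers, our own short script below evaluates the weak-coupling expression for f_π quoted above for µ_q = 500 MeV:

```python
import numpy as np

mu_q = 500.0                                   # quark chemical potential in MeV
f_pi = np.sqrt((21 - 8 * np.log(2)) / 18 * mu_q**2 / (2 * np.pi**2))
print(f_pi)                                    # ~104 MeV, i.e. f_pi ~ 100 MeV
```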
As we shall see in Sec. III, in order to compute the bulk viscosity we need the nonzero-temperature behavior of the kaons, in particular their masses and excitation energies. The necessary results are provided in Ref. [46] and shall be briefly summarized in the next two subsections.
B. 2PI formalism for kaons
To obtain the Lagrangian for the neutral kaons in the presence of a kaon condensate, we start from the Lagrangian (2) and use the parametrization of Eq. (7), where T_6, T_7 are Gell-Mann matrices. We have omitted the other Goldstone modes, proportional to 1, T_1, . . . , T_5, T_8, and have introduced a vacuum expectation value φ for the kaon field (without loss of generality in the T_6 direction) and fluctuations ϕ_6, ϕ_7 with vanishing expectation value. The Lagrangian is now expanded up to fourth order in the fluctuations; see Ref. [46] for details and the explicit form of the Lagrangian. The tree-level potential is given in Eq. (8), where we have abbreviated the effective coupling constant α, Eq. (9). In principle, one can keep all orders in φ, but for simplicity we have expanded for small values of φ. This restricts our analysis to small condensates. In other words, our results are quantitatively reliable if µ is only slightly larger than m, and become unreliable for µ ≫ m. In addition to the tree-level potential, the Lagrangian contains terms quadratic in the fluctuations, from which we read off the inverse tree-level propagator, Eq. (10). Here and in the following we denote four-momenta by capital letters, K = (k_0, k), where the bosonic Matsubara frequencies are given by k_0 = −2inπT, with T the temperature. The Lagrangian also contains interaction terms cubic and quartic in the fluctuations. For small condensates we may neglect the cubic interactions. The form of the quartic interactions is needed for the vertex in the kaon self-energy. The nonzero-temperature behavior of systems with spontaneously broken symmetries can be conveniently treated within the two-particle irreducible (2PI) formalism [47,48,49]; for the application of this formalism in chiral models see for instance Refs. [50,51,52]. Naive thermal corrections would lead to unphysical results for a Bose-condensed system, in particular to negative energies in certain temperature regimes. Therefore, we apply the more elaborate 2PI scheme to compute the thermal masses self-consistently. We start from the effective potential given in Eq. (11).
Here the trace is taken over momentum space and over the two-dimensional space given by the two degrees of freedom of the kaon (corresponding to T_6 and T_7, or K^0 and K̄^0). The effective potential is a functional of the vacuum expectation value φ and the kaon propagator S. The term V_2[φ, S] is the sum over all 2PI diagrams. We employ the two-loop approximation for this infinite sum and only consider the "double-bubble" diagram, see Fig. 1. In this case V_2 does not explicitly depend on φ.

FIG. 1: Diagrammatic representation of the two-loop approximation for V_2 and the corresponding kaon self-energy. The lines represent the full kaon propagator S, to be determined self-consistently.
One can now determine the condensate and the thermal kaon mass self-consistently. This is done by solving the stationarity equations (14), where Σ = 2 δV_2/δS is the kaon self-energy, see Fig. 1. With the ansatz (13) for the full inverse propagator, the stationarity equations involve the temperature-dependent mass M, related to M_± in the propagator (13), the Bose distribution function f, and the momentum integral I defined in Eq. (15). The kaon excitation energy ǫ_k is a pole of the propagator (13). Its form is crucial for the thermodynamic and transport properties of the CFL-K^0 phase and will be discussed explicitly in Sec. II C, see Eq. (19). The second pole of the propagator corresponds to the K̄^0 excitation. This excitation has a (temperature-dependent) energy gap which is larger than that of the K^0 excitation by at least 2µ. We will neglect the K̄^0 mode in our analysis: µ is expected to be of the order of ∼ 10 MeV, so the K̄^0 would only play a role at temperatures in the tens of MeV range. This is close to the critical temperature of the CFL phase itself, and is not physically relevant because compact stars cool below such temperatures within minutes of their formation. Our results are therefore reliable at temperatures of order 1 MeV or lower. In some cases we will continue our expressions to higher temperatures: this represents a theoretical exercise in studying the properties of the pure K^0 condensate, not a physical prediction. For the calculations of the specific heat in Sec. II E, as well as for the density and susceptibility in Sec. III B, we need an explicit expression for the pressure P, which is the negative of the effective potential (11), P = −V_eff. In the given approximation we can write P as a function of φ² and M², Eq. (18). The pressure in the physical state is then given by inserting the values of φ and M at the stationary point.
C. Kaon excitation energies
For the calculation of transport properties we are interested in the kaon dispersion relations, which are the poles of the propagator (13). Dropping the K̄^0 excitation (see the remark below Eq. (17)), the only relevant kaon dispersion relation is ǫ_k, given in Eq. (19) [46], with M(T) determined from the stationarity equations (14). We have eliminated φ from the expression for ǫ_k by using Eq. (14a). The critical temperature of the second-order phase transition for kaon condensation is denoted by T_c, i.e., for temperatures below (above) T_c we have φ ≠ 0 (φ = 0). The value of T_c is estimated to be at least of the order of tens of MeV [46]. This shows that for all temperatures relevant for compact stars, the kaons will be condensed (provided the parameters m and µ are such that there is condensation at zero temperature). The self-consistent treatment ensures that M(T) > µ for all temperatures, and thus the energies given in Eq. (19) are real and positive, as they should be. Moreover, from Eq. (19) we see that ǫ_{k=0} = 0 for T < T_c. This shows that there is a massless Goldstone mode associated with kaon condensation. It is the presence of this gapless mode that causes the thermodynamic and transport properties of the CFL-K^0 phase to differ from those of the CFL phase. Strictly speaking, this Goldstone mode also has a small energy gap. This energy gap does not show up in our treatment within the effective theory given by the Lagrangian (2); it is rather induced by the weak interactions, and can be estimated to be in the keV range [53]. Most of the temperature scales we show in this paper do not include temperatures below this range. Therefore, we shall neglect this effect and speak of an exact Goldstone mode with the massless dispersion relation given by Eq. (19). It is instructive to expand the kaon dispersion below T_c for small momenta, Eq. (20). For small momenta, the dispersion is linear in k. However, for T → T_c, the coefficient in front of the linear term goes to zero, since M(T → T_c) → µ. Consequently, around T_c the quadratic part of the dispersion becomes important.
D. One-loop approximation for the superfluid mode H

Next we turn to the H boson, which is associated with the spontaneous breaking of U(1)_B. This Goldstone boson can be described by the effective Lagrangian given in Eq. (21) [19,26,54,55], where v_H = 1/√3. For notational convenience, we shall from now on abbreviate both v_H and v_π by v, Eq. (22). For the calculation of the bulk viscosity we need the one-loop self-energy of the H, given by the diagram in Fig. 2, with the vertex (squared) in Eq. (24) and the inverse tree-level propagator D_0^{−1}(K) = k_0² − v²k². We need the imaginary part of the retarded self-energy, Im Π(P) [19], which involves the function F(e_1 vk) ≡ F(P, K)|_{k_0 = e_1 vk}. The angular integration can be performed exactly. One finds that the self-energy assumes different forms depending on the sign of p_0 − vp, Eqs. (26) and (27). Denoting the one-loop H propagator by D(P) and neglecting the real part of Π, we obtain an approximate form of the imaginary part of the propagator, Eq. (30), which we will need in the calculation of the bulk viscosity in Sec. III.
E. Specific heat
The self-consistent formalism from Ref. [46], summarized above, provides us with the tools to compute thermodynamic quantities of the CFL-K^0 phase. As a first application, we discuss the calculation of the specific heat. This is of physical relevance, for instance, for the cooling behavior of compact stars; see for instance Refs. [1,56,57] for the specific heat of other quark matter phases. We will not need the result for the calculation of the bulk viscosity. However, the calculation shows in an exemplary way how to compute thermodynamic quantities in the 2PI formalism, and will be applied in Sec. III B to the calculation of the kaon susceptibility.
The specific heat at constant volume is defined as c_V = T ∂s/∂T, where s is the entropy density. It seems straightforward to take the second derivative of the pressure in Eq. (18) with respect to the temperature. However, the self-consistent treatment complicates this procedure. Since the pressure is a function of the self-consistent quantities φ and M, the implicit dependence of φ and M on the temperature, and the constraint of the stationarity equations, have to be taken into account. We present the details of the calculation in Appendix A. The result is given in Eq. (31), where I is the integral defined in Eq. (15) and ∂M²/∂T is given in Eq. (A8b). From this expression one obtains the discontinuity Δc_V of the specific heat at the critical point, Eq. (32). For all interesting parameters, the first term on the right-hand side of Eq. (32) is dominant and Δc_V is small, as we shall see from the numerical results. For small temperatures only small kaon momenta contribute. Therefore we can use the small-momentum kaon dispersion from Eq. (20) to approximate the specific heat at small temperatures. The linear part of the dispersion produces a contribution cubic in the temperature, Eq. (34), where we have approximated the thermal mass by its zero-temperature value. The dispersion (20) shows that at the critical temperature the linear term vanishes and the dispersion becomes quadratic in the momentum. In this case, the contribution to c_V goes like T^{3/2}, Eq. (35). The kaon contribution has to be compared to the H contribution. Approximating the H dispersion by the linear behavior vk, cf. the inverse tree-level propagator below Eq. (24), we find the corresponding low-temperature contribution c_V^H. By comparing this expression with Eq. (34), we conclude that the low-temperature contribution of the K^0 has the same T dependence as the H contribution, but that the K^0 contribution is always larger, by a numerical factor which depends on the zero-temperature kaon mass and chemical potential. We show the full numerical result for the kaon specific heat and its comparison with c_V^lin, c_V^quad, and c_V^H in the left panel of Fig. 3. We see that there is a significant temperature regime in which c_V^quad is a good approximation to the specific heat, i.e., c_V behaves as T^{3/2} as opposed to T³. The presence of this temperature regime depends on the parameters. Here we have chosen the kaon mass and chemical potential such that this regime is relatively large (µ only slightly larger than m). For larger values of µ, this regime is typically smaller or hardly visible (see the right panel of Fig. 3). Note also that our temperature scale extends to very small temperatures in the keV regime. In this regime we expect the small mass of the kaon, which is not present in our formalism, to play a role; see the remark below Eq. (19).

TABLE I: Collection of thermodynamic properties of neutral kaons in the CFL-K^0 and CFL phases. Here "high T" means in particular T > T_c, i.e., there is no CFL-K^0 phase at these temperatures. The zero-temperature kaon mass is denoted by m, and δm ≡ m − µ with the kaon chemical potential µ. In the CFL-K^0 phase we have δm < 0, and we show the low-T results to lowest order in δm/m, which corresponds to small condensates φ/f_π ≪ 1. In the CFL phase, δm > 0.
The discontinuity at the critical point is small and not visible in the plot. At large temperatures, the specific heat again goes like T³, however with a different prefactor than at low temperatures. In Table I we have listed the low-T and high-T behavior of the kaon specific heat, together with the density and susceptibility to be computed in Sec. III B. In the table we also show the behavior for the case of no kaon condensation, i.e., for m > µ.
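The two limiting power laws quoted above (c_V ∝ T³ for the linear part of the dispersion, c_V ∝ T^{3/2} near T_c) can be verified by direct numerical integration of the free boson gas energy density. The sketch below is our own code, in arbitrary units with the velocity set to v = 1/√3; it recovers the exponents 3 and 3/2:

```python
import numpy as np

def c_v(T, eps):
    """Specific heat c_V = du/dT of a free 3D boson gas with dispersion
    eps(k), where u = int d^3k/(2 pi)^3 eps(k) / (exp(eps/T) - 1)."""
    k = np.geomspace(1e-7, 1e3, 40000)
    e = eps(k)
    keep = e / T < 500.0                  # drop the Boltzmann-dead tail
    k, e = k[keep], e[keep]
    u = lambda t: np.trapz(k**2 * e / np.expm1(e / t), k) / (2.0 * np.pi**2)
    dT = 1e-4 * T
    return (u(T + dT) - u(T - dT)) / (2.0 * dT)

lin = lambda k: k / np.sqrt(3.0)          # linear dispersion, eps = v k
quad = lambda k: k**2 / 2.0               # quadratic dispersion (near T_c)

for eps in (lin, quad):
    print(np.log2(c_v(0.02, eps) / c_v(0.01, eps)))   # ~3.0 and ~1.5
```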
So far we have ignored the charged kaons. Due to the electric charge neutrality condition, their number density is suppressed at small temperatures [46]. However, in the isospin-symmetric case they give rise to an additional Goldstone mode [46,58,59], which in principle gives a large contribution to the specific heat. To illustrate this contribution, we ignore the neutrality constraint for simplicity and compute the specific heat for a fixed charged kaon chemical potential µ_{K^+}. For low momenta and exact isospin symmetry, µ = µ_{K^+}, m = m_{K^+}, the additional Goldstone mode has a quadratic dispersion, ǫ_k = v²k²/(2µ). Consequently, its contribution to the specific heat goes like T^{3/2}, just as the contribution from Eq. (35). We may use the formalism of Ref. [46] to compute the thermal masses in the two-component system of neutral and charged kaons, and derive the specific heat analogously to the above one-component system. The numerical result is shown in the right panel of Fig. 3. For this plot we have chosen the kaon masses and chemical potentials such that the isospin symmetry is almost exact. We see that there is a large contribution from the charged kaon mode, which disappears for sufficiently small temperatures. This is due to the small isospin-symmetry violation in the parameters, and hence a small charged kaon energy gap.
III. BULK VISCOSITY IN THE CFL-K^0 PHASE
We now turn to the main part of the paper, where we use the results of the previous sections to compute the bulk viscosity in the CFL-K^0 phase.
A. Definition of bulk viscosity
The definition of the bulk viscosity and the derivation of its expression in terms of the kaon rate can be found in Ref. [24]. Therefore, here we only briefly summarize the most important relations and the underlying physics. In general, a relativistic superfluid may have more than one bulk viscosity, related to stresses in the superfluid flow with respect to the normal flow [11,60,61]. Here we neglect this effect and compute the bulk viscosity related to the normal fluid.
We are interested in a system with volume V_0 which undergoes a volume oscillation with amplitude δV_0 ≪ V_0 and frequency ω, V(t) = V_0 + δV_0 cos(ωt). In the astrophysical setting, the oscillations are local volume oscillations with an (inverse) time scale typically of the order of the rotation frequency of the star, which can be as large as ω/(2π) ∼ 1 ms^{−1}. The periodic change in volume induces a change in density. This change in density, in turn, may induce a change in the chemical composition of the matter. In CFL-K^0 matter, there will be a change in the strangeness content, described by an induced nonzero δµ, i.e., the equilibrium kaon chemical potential µ is shifted to the nonequilibrium value µ + δµ. The response of the system is to create (or annihilate) kaons in order to re-equilibrate. The dominant processes considered here that change kaon number (and thus strangeness) are

K^0 ↔ H + H ,   K^0 + H ↔ H .   (39)

These processes have also been considered in Ref. [24] for the case of uncondensed kaons. If the kaons are condensed, there is also a cubic interaction, induced by attaching one leg of the quartic interaction to the condensate, which directly moves strangeness between the condensate and the thermal kaon gas. Here we neglect these processes, since they are proportional to the condensate. This is consistent with our 2PI treatment, which is only valid for small condensates, i.e., (µ − m)/m ≪ 1. The processes (39) induce, in response to the external oscillation V(t), an oscillation in the kaon chemical potential µ(t). If the external oscillation and the system's response are out of phase, there will be dissipation, resulting in a nonzero value of the bulk viscosity. The bulk viscosity is maximized if the external oscillation and the rate of kaon production (annihilation) are on the same time scale. In this sense, bulk viscosity is a resonance phenomenon. In fact, it is the exact analogue of an electric circuit with alternating voltage that responds with an induced alternating current [20].
The bulk viscosity ζ is defined through the average dissipated power per volume of the oscillating system. The imaginary part of the complex amplitude δP is given by Im δP = n Im δµ + n_q Im δµ_q, where the complex amplitudes δµ, δµ_q account for the oscillating change in the chemical potentials. Here, n_q and µ_q are the quark number density and chemical potential, while n and µ are the corresponding kaon quantities. Note that the effective potential (11) also depends on the quark chemical potential µ_q. In particular, the quark density is nonvanishing, and there is an induced oscillation also in µ_q. This effect, however, is small. In Appendix B we present the general derivation of the bulk viscosity, taking into account the quark density effect. Here we proceed by keeping only the kaon terms. In this case, there is a single differential equation for δµ, whose left-hand side contains the susceptibility ∂n/∂µ: it simply expresses that the kaon density is a function of the kaon chemical potential, so the time dependence of n can be written in terms of the time dependence of µ. The right-hand side expresses this change in terms of the volume change (first term) and the kaon rate (second term). The kaon rate Γ_{K^0} is defined as the change in kaon number per time and volume due to the processes (39); for the microscopic definition of Γ_{K^0} see Eq. (58) in Sec. III C. For sufficiently small δµ we can approximate Γ_{K^0} ≃ −λ δµ. This defines the factor λ to be positive: for δµ > 0 the system responds by annihilating kaons, hence Γ_{K^0} < 0; if instead δµ < 0, the system responds by creating kaons, and Γ_{K^0} > 0; in both cases we thus have λ > 0. Inserting these relations into the definition of the bulk viscosity yields
ζ = n² λ / (λ² + ω² χ²) ,   (46)

where we have denoted the kaon number susceptibility χ ≡ ∂n/∂µ. In the following we shall evaluate the bulk viscosity (46). To this end, we compute the equilibrium density n and susceptibility χ in Sec. III B and the kaon rate λ in Sec. III C, and put the results together in Sec. III D.
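Eq. (46) has the familiar relaxation form: at fixed ω it is maximized when the equilibration rate matches the oscillation, λ = χω, giving ζ_max = n²/(2χω). A toy evaluation (our own code and units):

```python
import numpy as np

def zeta(lam, n, chi, omega):
    """Bulk viscosity in the relaxation form of Eq. (46),
    zeta = n^2 lam / (lam^2 + chi^2 omega^2)."""
    return n**2 * lam / (lam**2 + chi**2 * omega**2)

n, chi, omega = 1.0, 1.0, 1.0                  # toy units
lams = np.geomspace(1e-3, 1e3, 7)
print(zeta(lams, n, chi, omega))               # rises, peaks, then falls with lam
print(zeta(chi * omega, n, chi, omega), n**2 / (2 * chi * omega))  # both 0.5
```

The two regimes ω ≪ λ/χ (ζ ≈ n²/λ) and ω ≫ λ/χ (ζ ≈ n²λ/(χ²ω²)) used in Sec. III D follow directly from this form.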
B. Kaon density and kaon susceptibility
The kaon density is given by the negative of the derivative of the effective potential (11) with respect to the chemical potential, n = −∂V_eff/∂µ. Here we have only taken the explicit derivative with respect to µ. Implicit dependencies through the self-consistent condensate φ and the kaon mass M drop out of the density when we evaluate n at the stationary point.
Inserting the tree-level potential (8) and the propagators (10) and (13) yields the kaon density in Eq. (49) (see also Ref. [46]), with the integral I defined in Eq. (15) and the kaon excitation energy ǫ_k from Eq. (19). At zero temperature, the integral over the Bose distribution function (as well as the integral I) vanishes, and the density is nonzero only because of a nonzero value of φ; in other words, all kaons are condensed. Note that the effective coupling α, see Eq. (9), depends on the kaon chemical potential. With the zero-temperature value of the condensate, φ² = (µ² − m²)/α, we easily obtain the low-temperature limit of the density, Eq. (50). The expansion of this limit for small condensates, and the opposite limit of high temperatures, are shown in Table I. The two limits are compared with the full numerical result in the left panel of Fig. 4. The derivation of the susceptibility χ is more complicated. The susceptibility is the second derivative of the pressure with respect to the chemical potential, and the dependence of φ and M on µ does not drop out. Hence we have to compute χ analogously to the specific heat, which is also a second derivative of the pressure. Details of the calculation are presented in Appendix C. The result is Eq. (51), where χ_0, χ_1, and χ̄_1 are given in Eqs. (C4) and (C5). The zero-temperature limit is obtained from χ_0 upon approximating the condensate by its zero-temperature value, φ² ≃ (µ² − m²)/α. The high-T behavior is dominated by the momentum integral in Eq. (51). Table I shows both limiting cases. The full numerical result for the susceptibility, and its comparison to the limiting cases, is shown in the right panel of Fig. 4. In contrast to the density and the specific heat, we observe a strong discontinuity at the critical point.

C. Kaon rate

The final ingredient to the bulk viscosity (46) is the rate Γ_{K^0} due to the processes (39). To this end, we need the imaginary part of the kaon self-energy Σ̄, given by the diagram in Fig. 5. This is simply the imaginary part of the one-loop H propagator (30), multiplied by the square of the effective coupling G_{ds} defined in Eq. (53), which involves the Fermi coupling G_F and the entries V_{ud}, V_{us} of the CKM matrix. (Microscopically, the K^0-H interaction can be understood as the weak process d + s̄ ↔ u + ū; for details see Ref. [24].) Consequently, we obtain the imaginary part of Σ̄. We can now compute the kaon production rate Γ_{K^0}. In the closed-time path formalism (for calculations in a similar context using this formalism see for instance Refs. [20,57,62,63]), the rate can be written as in Eq. (55), where the self-energies are given by Σ̄ and the propagators by Eqs. (56) and (57). In these propagators we have introduced chemical nonequilibrium via a nonzero δµ, which enters the kaon excitation energy ǫ'_p ≡ ǫ_p(µ → µ + δµ). In ǫ'_p the self-consistent mass M is determined according to the modified chemical potential. This is necessary since the value of the kaon mass M is governed by the strong interaction, and thus the mass adjusts itself instantaneously compared to the equilibration time of the weak processes (39). The energy ǫ'_p + δµ in the δ-functions of the propagators can be understood from the case without kaon condensation. In that case we have ǫ_p = E_p − µ, and δµ simply cancels, ǫ'_p + δµ = ǫ_p (ignoring the change in the self-consistent mass). In the case with condensation, this cancellation is only partial. Inserting Eqs. (56) and (57) into Eq. (55) yields the rate in Eq. (58). From this expression it is clear that δµ = 0 corresponds to chemical equilibrium, because in this case the forward and backward rates cancel each other and Γ_{K^0} = 0. For small δµ we may use the linearized approximation (59), which shows that the sign of the rate is determined by the sign of δµ: for positive δµ the system reacts by annihilating kaons, Γ_{K^0} < 0, while for negative δµ kaons are created, Γ_{K^0} > 0. With the approximation (59) we see that, to lowest order in δµ, the rate takes the form of Eq. (60), in agreement with Ref. [24]. We shall use Eq. (60) for the numerical evaluation of the rate. Before we turn to the results, let us first rewrite the rate in a more instructive form.
Inserting the H self-energy from Eqs. (26) and (27) into Eq. (58), we obtain the form given in Eq. (61). This expression is useful for discussing the separate processes that contribute to the rate. From the structure of the Bose distribution functions (interpreting f as an ingoing and 1 + f as an outgoing particle), we see that the kaon rate is composed of four contributions: the processes H + H → K^0 and H → K^0 + H (first and second term, respectively) and the inverse processes K^0 → H + H and H + K^0 → H (third and fourth term, respectively). As expected, the kaon-creating processes yield a positive contribution to Γ_{K^0}, while the kaon-annihilating processes yield a negative contribution. Furthermore, we conclude that in the CFL-K^0 phase (and for infinitesimal δµ) only the processes H ↔ K^0 + H are allowed. This can be seen from the dispersion relation in Eq. (19), which implies ǫ_p < vp for T < T_c. Then, due to the step functions in Eq. (61), the process H + H ↔ K^0 is excluded. This is, of course, a direct consequence of energy conservation. For small temperatures we can derive an analytic expression for the rate in the CFL-K^0 phase. The derivation is presented in Appendix D. The result, Eq. (62), scales as T^7, where Q(m, µ) is a complicated dimensionless function of the zero-temperature mass m and chemical potential µ, given in Eq. (D7). We can simplify this expression by defining δm ≡ m − µ and expanding for small values of |δm|/m, which corresponds to small kaon condensates. This yields Eq. (64). The full evaluation of the rate has to be done numerically; we have checked that the expression (62) is in very good agreement with the full result up to temperatures T ≲ 1 MeV. The additional approximation (64) is accurate to within 30% for values of |δm|/m ≲ 0.05.
In the left panel of Fig. 6 we show the numerical result for λ as a function of the temperature. From Eq. (53) we have G_{ds} ≃ −7.8 · 10^{−13} MeV^{−2}; moreover, we use f_H ≃ f_π ≃ 100 MeV. We compare the rate in the CFL-K^0 phase with the one in the CFL phase. To realize these two cases, we choose the kaon chemical potential slightly above and slightly below the kaon mass, respectively.
The main features of the two rates are as follows. At very small temperatures, the rate in the CFL-K^0 phase is larger than in the CFL phase; parametrically, the former behaves as T^7, while the latter is exponentially suppressed. At very large temperatures, the two rates are almost identical. In the intermediate temperature regime, however, which is the regime relevant for neutron stars, the rate in the CFL phase is much larger. The reason is a larger phase space for the weak process, as we explain in detail in the following.
The kaon width is largest for kaons of momentum p = p̄, such that the internal H particle in the kaon self-energy diagram of Fig. 5 is closest to being on shell: the H and K^0 with this momentum both have the same energy, ǫ_p = vp, and the denominator of Eq. (60) is then minimized. From the kaon excitation energies in Eq. (19) we conclude the values of p̄ given in Eq. (65). In the CFL-K^0 phase there will be much less phase space associated with the maximum in the kaon width, because it occurs at zero momentum, so one expects the total kaon rate to be lower. Whether the additional phase space in the CFL case is available and has a significant effect on the rate depends on the temperature. For sufficiently large temperatures, T ≫ p̄, where states well above p = p̄ are thermally populated, the effect of the peak at p = p̄ is negligible. Moreover, for large momenta the kaon dispersion is independent of whether there is a condensate or not; the dispersion then is simply linear, ǫ_p ≃ vp. This is the reason why the two curves in the left panel of Fig. 6 are, at large T, almost on top of each other (the small difference in the curves is simply due to the different values of µ, not because of any qualitative difference between the phases). In other words, besides the values for p̄ given in Eq. (65), ǫ_p = vp is also satisfied asymptotically for large p (in both the CFL-K^0 and CFL phases alike). This momentum regime becomes available at large temperatures, and thus the rates become almost identical. For smaller (but not too small) temperatures, the peak in the integrand in the CFL phase is responsible for the large kaon rate compared to the CFL-K^0 phase. For sufficiently small temperatures, T ≪ p̄, states around p = p̄ are no longer populated. Therefore, the rate in the CFL-K^0 phase, which receives its largest contribution from momenta close to zero, becomes larger than the rate in the CFL phase, which becomes exponentially suppressed.
D. Results for the bulk viscosity
We can now put together the results from the previous subsections to compute the bulk viscosity. We insert the numerical results for the density n (see Eq. (49) and the left panel of Fig. 4), for the susceptibility χ (see Eq. (51) and the right panel of Fig. 4), and for the kaon rate λ (see Eq. (60) and the left panel of Fig. 6) into the expression for the bulk viscosity (46). We first analyze the features of the bulk viscosity below the critical temperature and compare the result with the bulk viscosity in the CFL phase. We use δm = m − µ as a parameter to distinguish between the CFL phase (δm > 0) and the CFL-K^0 phase (δm < 0). Then, we analyze the behavior of the bulk viscosity at the critical point. This is of academic rather than physical interest, since the critical temperature is likely to exceed temperatures reached in a compact star. Finally, we discuss the dependence on the parameter δm and compare the results with the bulk viscosity in other quark matter phases.
CFL-K^0 vs. CFL bulk viscosity
We first use the parameters of the rates shown in the left panel of Fig. 6 to compute the corresponding bulk viscosities of the CFL and CFL-K^0 phases. The result is shown in the right panel of Fig. 6. We have chosen an oscillation frequency ω/(2π) = 1 ms^{−1}, which is typical for a compact star. Let us first discuss the gross features of the result, valid for both the CFL and CFL-K^0 phases. As explained in Sec. III A, if the prefactor n²/χ² is held constant, the bulk viscosity becomes maximal if the frequency ω and the rate of the processes (39) are on the same time scale. More precisely, we have to compare ω with the effective rate λ/χ which appears in the bulk viscosity. As a very rough estimate, we read off from Fig. 4 that the susceptibility is of the order of χ ∼ 10^4 MeV² (using f_π ≃ 100 MeV). This means that the kaon rate has to be of the order of λ ∼ 10^{−13} MeV³ to render the effective rate λ/χ of the order of a frequency in the ms^{−1} regime. We see from the left panel of Fig. 6 that this is the case for large temperatures, T ≳ 20 MeV. And indeed, we observe that the bulk viscosity has a maximum in this temperature regime. Far below these temperatures we have ω ≫ λ/χ, and the bulk viscosity can be approximated by ζ ≃ λn²/(χ²ω²). We now discuss the CFL-K^0 and CFL results separately in this low-temperature regime.
In the CFL-K^0 phase, the entire temperature dependence of the low-temperature bulk viscosity is given by the rate λ, because the density n and the susceptibility χ tend to constant values at low T. This can also be seen by comparing the two curves in the left and right panels of Fig. 6. With the low-temperature expression (64) for the rate and the values of n and χ from Table I, we obtain the bulk viscosity in the CFL-K^0 phase at small temperatures; to lowest order in |δm|/m, corresponding to small condensates, the result is Eq. (66). In the CFL phase without kaon condensation, the rate behaves very differently at small temperatures: it is exponentially suppressed, λ_CFL ∝ exp(−δm/T). The prefactor n²/χ² contributes an additional factor of T², so that ζ_CFL = T²λ_CFL/ω². In Fig. 6 we confirm that the bulk viscosity at low T in the CFL phase has a different T dependence than the kaon rate.
Critical behavior of the bulk viscosity
Next we turn to the behavior of the bulk viscosity at the critical temperature where kaon condensation disappears, and there is a transition from the CFL-K 0 phase to CFL. This is probably not relevant to astrophysics, for two reasons. Firstly, one would have to fine tune the kaon mass and chemical potential to bring this critical temperature down to typical compact star temperatures. Secondly, the critical temperature for kaon condensation is of the same order as the critical temperature for the underlying CFL condensate itself [46], so there may really be a transition to unpaired quark matter. However, from the theoretical point of view it might be interesting to compare the critical behavior of the bulk viscosity in the present context with the critical behavior in related systems. For example, the bulk viscosity in a hot quark-gluon plasma has recently been studied [64,65,66], and simple models not unlike the one we use for kaons in this paper have been employed to study the behavior at the critical point [67], see also [68].
With the help of Eq. (46) we can make qualitative predictions of the behavior of the bulk viscosity at the critical point. For small values of the frequency, ω ≪ λ/χ, the viscosity behaves like ζ ≃ n²/λ. Both the density n and the rate λ are continuous at the phase transition, hence the bulk viscosity is continuous too. On the other hand, for large frequencies, ω ≫ λ/χ, the bulk viscosity behaves like ζ ≃ n²λ/(χ²ω²). From Fig. 4 we know that the susceptibility χ is discontinuous at T_c (and large for temperatures below and close to T_c). Consequently, we expect the bulk viscosity to be discontinuous too (and small slightly below T_c). We show the numerical result for the bulk viscosity around the critical point in Fig. 7. Both the case of an almost continuous behavior and the case of a strongly discontinuous behavior are shown (for the latter we have chosen a frequency ω/(2π) = 3 ms⁻¹, which is larger than any known compact star rotation rate).
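Both limits follow at once if Eq. (46) has the standard relaxation-type (Lorentzian) structure. We do not reproduce the paper's Eq. (46) here, but the following form is an assumption consistent with both limits quoted above and makes the role of the discontinuous χ explicit:

\[
\zeta \;=\; \frac{n^2}{\chi^2}\,\frac{\lambda}{\omega^2 + (\lambda/\chi)^2}
\;\longrightarrow\;
\begin{cases}
\dfrac{n^2}{\lambda} & \text{for } \omega \ll \lambda/\chi\,,\\[2ex]
\dfrac{n^2\lambda}{\chi^2\,\omega^2} & \text{for } \omega \gg \lambda/\chi\,.
\end{cases}
\]

In the low-frequency limit χ cancels, so the jump of χ at T_c cannot show up there; only the high-frequency limit retains the factor 1/χ², which is why the discontinuity appears at large ω.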
Parameter dependence and comparison to other quark phases
Finally, in Fig. 8, we present the bulk viscosity for several values of the kaon chemical potential µ and compare the result with the bulk viscosity of unpaired quark matter [17]. As discussed in Sec. II A, the values of µ and the kaon mass m are poorly known at densities relevant for compact stars. We therefore vary the relevant parameters over ranges of values that are plausible for conditions in a compact star. We fix the kaon mass at m = 10 MeV and study six values of the effective kaon chemical potential µ; for three of them δm = m − µ is positive (corresponding to the CFL phase) and for the other three δm is negative (corresponding to the CFL-K 0 phase). We have restricted our plot to temperatures appropriate to compact stars, i.e., we have not shown temperatures larger than 15 MeV.
In the CFL phase, the energy gap for the kaon is δm, so the thermal population of kaons, and hence the kaon-decay contribution to the bulk viscosity, will be very sensitive to the value of δm, and will drop rapidly as exp(−δm/T) for T ≪ δm. In the CFL-K 0 phase, on the other hand, there is always a massless Goldstone kaon, so the bulk viscosity due to kaon decay should be less sensitive to δm, and should not drop exponentially at low temperatures. These expectations are borne out in Fig. 8. Although the CFL-K 0 phase, thanks to its Goldstone mode, has a higher bulk viscosity at very low temperatures, we see that in the range 10 keV ≲ T ≲ 10 MeV the CFL phase has a larger kaon-decay bulk viscosity than the CFL-K 0 phase. This is because of the phase space available at the K 0 ↔ H "resonance" (see the end of Sec. III C). There is another contribution to the bulk viscosity, from H ↔ H + H, which starts to become comparable to the contribution from kaon decay at T ≲ 1 MeV [22].
The bulk viscosity of CFL/CFL-K 0 matter is only comparable to that of unpaired quark matter at relatively high temperatures, of order 10 MeV. At lower temperatures, T ≲ 5 MeV, the bulk viscosity of CFL/CFL-K 0 quark matter is several orders of magnitude smaller. This is the case not only in comparison to the unpaired phase but to all other color-superconducting phases with ungapped fermionic modes; these have bulk viscosities comparable to that of the unpaired phase, and even larger at high temperatures [20]. The only other color-superconducting phase besides the CFL/CFL-K 0 phase in which all fermions may be gapped is the color-spin locked phase [36,39,69,70]. For a discussion of its bulk viscosity see Ref. [21].
IV. SUMMARY AND CONCLUSIONS
We have computed the bulk viscosity of kaon-condensed CFL quark matter (CFL-K 0 phase). Kaon condensation significantly alters the low-energy properties, and therefore the thermodynamic and transport properties, of color-flavor locked quark matter. In particular, the CFL-K 0 phase has a massless bosonic excitation associated with kaon condensation which is absent in the pure CFL phase. In both CFL-K 0 and CFL phases, there is also a massless Goldstone mode H associated with superfluidity. For most of the thermodynamic properties, the effect of the additional Goldstone mode is important, but rather easy to predict. We have shown this for the specific heat, which acquires a contribution from the kaon mode that, as expected, has the same temperature dependence as the contribution of the superfluid mode. The prefactor of the former, however, is typically larger than that of the latter, so that the kaon contribution in fact dominates the specific heat at low temperatures.
The effect of the additional Goldstone mode on the bulk viscosity is more complicated. We have used the results of our earlier work [46] which provides a self-consistent description of the CFL-K 0 phase for arbitrary temperatures. Using the resulting thermal kaon mass and excitation energy, we have computed the density and the susceptibility of kaons, both of which are needed for the bulk viscosity. Moreover, we have computed the rate of the processes K 0 ↔ H + H and K 0 + H ↔ H, where we denote by K 0 both the neutral kaon in the CFL phase and the massless Goldstone mode arising upon kaon condensation in the CFL-K 0 phase. These weak processes serve to re-establish chemical (flavor) equilibrium in response to an external volume oscillation, hence giving rise to a nonvanishing bulk viscosity.
At very high temperatures, T ≳ 10 MeV, the difference in the kaon excitations in the CFL and CFL-K 0 phases is negligible. Consequently, in this case the kaon production (and annihilation) rate is almost identical for the two phases. At smaller (but not too small) temperatures, 10 keV ≲ T ≲ 10 MeV, the masslessness of the Goldstone mode in the CFL-K 0 phase suppresses this rate because of the smaller phase space available for the weak process. Since the timescale of the rates in both phases is smaller than the typical oscillation (and rotation) frequency in a compact star, this effect decreases the bulk viscosity of the CFL-K 0 phase compared to the CFL phase. Another effect enters through the different susceptibilities. In the condensed system, the susceptibility at low temperatures is much larger than that of the uncondensed system. This effect works in the same direction, further decreasing the bulk viscosity compared to the uncondensed system. For even smaller temperatures, T ≲ 10 keV, the phase space actually is larger in the CFL-K 0 phase and consequently the bulk viscosity is larger too. It is interesting to note that for the neutrino emissivity the effect of kaon condensation is quite different: neutrino emissivity in the CFL-K 0 phase is larger than in the CFL phase for all temperatures T ≲ 10 MeV [71].
We now have a fairly complete understanding of bulk viscosity in color-flavor-locked phases of quark matter. The suppression of the bulk viscosity due to the absence of ungapped fermionic excitations was predicted in Ref. [16]. Subsequent more careful calculations took into account the contributions of the superfluid [22] and kaonic [24] Goldstone modes. With the result of the present paper we have shown that the conclusion already drawn in Ref. [16] is, for temperatures T ≲ 1 MeV, not changed by the contribution of the Goldstone modes: color-flavor locked quark matter, even in the presence of kaon condensation, has a much lower bulk viscosity than all other known phases of dense quark matter and than nuclear matter. Only at large temperatures, and thus in very young neutron stars, can the contribution of the Goldstone modes render the bulk viscosity comparable to that of unpaired quark matter.
We finally mention that besides the bulk (and shear) viscosity, other properties of color-superconducting quark matter also deserve attention. Its equation of state may be used to put constraints on the mass-radius relation of hybrid stars with a quark matter core and a hadronic mantle. These calculations rely on simple models whose parameters are poorly known in the strong-coupling region of interest. While NJL model calculations, mainly due to their relatively large predicted strange quark mass, tend to find no stable hybrid star with a CFL core [72,73], other parametrizations of the equation of state allow for hybrid stars with masses compatible with the observations [74]. Other observables that may distinguish between certain phases of color-superconducting quark matter or between quark matter and nuclear matter are for instance the cooling curve of the star or glitches (sudden spin-ups). The corresponding transport properties of color superconductors have already been computed in the literature, see for instance Refs. [57,63,75,76] and Ref. [77] for neutrino emissivity and shear modulus, respectively. It is an interesting and promising task for the future to extend these calculations and compare them with more and better astrophysical data in order to understand matter inside a compact star, and, ultimately, map out the phase diagram of cold and dense quark matter.
APPENDIX A: SPECIFIC HEAT IN THE 2PI FORMALISM
Here we derive the expression for the specific heat in the 2PI formalism. We have to take into account that the condensate φ as well as the full propagator S are implicit functions of the temperature T (remember that S contains the self-consistent thermal mass M(T)). With the definition (31) we thus have to compute the total temperature derivative of the pressure under the constraint that the stationarity conditions hold at the stationary point; we refer to this constraint as Eq. (A2) and to the resulting expression as Eq. (A3). From the constraint (A2) we obtain the implicit temperature derivatives, Eq. (A4). Inverting the matrix explicitly and inserting the result into Eq. (A3) shows that the expressions in parentheses vanish separately, and we are left with the explicit temperature derivative of the pressure. We thus have to apply the differential operator D to the explicit derivative of P with respect to T. From Eq. (18) we find Eq. (A6). At the stationary point, the second term in this expression vanishes. This leaves the first term as the result for the entropy density. In order to compute the second derivative, however, we have to keep the second term. By applying the differential operator D to this expression, the specific heat can be evaluated in a purely numerical way. For a further analytical evaluation we proceed as follows. In terms of the self-consistent quantities φ and M we can write c_V as Eq. (A7) (since P only depends on the squares φ² and M², we may take the derivatives with respect to the squares). The second derivatives of the pressure are straightforwardly obtained from Eq. (A6). The derivatives of φ² and M² with respect to the temperature depend on the stationarity equations. Below the critical temperature, one obtains the derivatives from Eqs. (14a) and (14b). Above the critical temperature, Eq. (14a) is automatically fulfilled and the only remaining equation is Eq. (14b) with φ set to zero. Consequently, the derivatives assume different functional forms below and above the critical temperature. Now we can insert these expressions and the second derivatives of the pressure obtained from Eq. (A6) into Eq. (A7) and evaluate the result at the stationary point, i.e., we use Eqs. (14). With Eq. (A8b) one then obtains the results below and above the critical temperature, and hence the discontinuity at T = T_c, Eq. (A10).
APPENDIX B: BULK VISCOSITY WITH QUARK NUMBER EFFECTS
In this appendix we derive the bulk viscosity taking into account the oscillations in the quark chemical potential as given in Eq. (43b). Instead of the simplified single differential equation (44), one now has to solve a pair of coupled differential equations for the two densities. We insert the form of the volume oscillation (38) and Eqs. (43) into these differential equations and find the solution

\[
\mathrm{Im}\,\delta\mu_q \;=\; \frac{\delta V_0}{V_0}\,
\frac{\omega\,\lambda\left(\dfrac{\partial n_q}{\partial\mu_q}\,n-\dfrac{\partial n}{\partial\mu_q}\,n_q\right)\dfrac{\partial n_q}{\partial\mu}}
{(\det J)^2\,\omega^2+\left(\dfrac{\partial n_q}{\partial\mu_q}\,\lambda\right)^2}\,,
\]

where J denotes the Jacobian of the two-valued function [n_q(μ_q, μ), n(μ_q, μ)], Eq. (B3). This yields the general form of the bulk viscosity, Eq. (B4), where the derivatives of the chemical potentials with respect to the densities are obtained by inverting the Jacobian (B3). We now assume that the quark chemical potential enters the densities only through the kaon chemical potential, see Eq. (5). This means that we neglect the dependence on μ_q through f_π and a in Eq. (4). Defining a small dimensionless quantity η that measures this residual dependence, Eq. (B4) simplifies accordingly. Neglecting the term proportional to η yields the result (46) that we use in the main part of the paper.
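As an illustration of the matrix inversion involved in Eq. (B3), here is a minimal symbolic sketch (the variable names are ours, not the paper's) that inverts the Jacobian of (n_q, n) with respect to (μ_q, μ):

```python
# Illustrative sketch: invert the 2x2 Jacobian J of the map
# (mu_q, mu) -> (n_q, n), to get the derivatives of the chemical
# potentials with respect to the densities (cf. Eq. (B3)).
import sympy as sp

a, b, c, d = sp.symbols('dnq_dmuq dnq_dmu dn_dmuq dn_dmu')

J = sp.Matrix([[a, b],
               [c, d]])        # J_ij = d(n_q, n)_i / d(mu_q, mu)_j

Jinv = J.inv()                 # entries are d(mu_q, mu)_i / d(n_q, n)_j

# det J appears in every entry of the inverse, just as it appears in
# the denominator of the bulk-viscosity formula above:
print(sp.simplify(J.det()))    # dnq_dmuq*dn_dmu - dnq_dmu*dn_dmuq
print(sp.simplify(Jinv))
```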
APPENDIX C: NUMBER SUSCEPTIBILITY IN THE 2PI FORMALISM
In this appendix we derive an expression for the kaon number susceptibility χ. Since χ is given by the second derivative of the pressure (18) with respect to the chemical potential, we can use the same formalism as in Appendix A, where we computed the second derivative with respect to the temperature. Consequently, with the same arguments as in Appendix A (cf. Eq. (A7)), χ can be written in terms of the explicit second derivatives of the pressure and the derivatives of φ² and M² with respect to μ, Eq. (C1). We determine the derivatives of φ² and M² with respect to μ from the stationarity equations (14); implicit differentiation yields, for T < T_c, the combination

\[
\frac{\alpha\left(\partial_\mu I + 4\mu\,\partial_M I\right) + I\,\partial_\mu\alpha}{1 + 2\alpha\,\partial_M I}\,,
\]

where we have abbreviated ∂_μ ≡ ∂/∂μ, ∂_M ≡ ∂/∂M². Note that the effective coupling constant α depends on the kaon chemical potential μ, see Eq. (9). Next, we compute the second derivatives of the pressure appearing in Eq. (C1) with the help of Eq. (18). Putting everything together we obtain the susceptibility for T < T_c, which separates into a thermal part and a zero-temperature part.
The Role of Atmospheric Blocking in Regulating Arctic Warming
Using ERA5 reanalysis we find positive trends in poleward transport of moisture and heat during 1979–2018 over the winter Barents Sea sector and summer East Siberian Sea sector. The increase in blocking occurrence (blocking days) can explain these trends. Blocking occurrence over the Barents Sea sector significantly increased in the last 40 winters, inducing increasingly stronger poleward transport of moisture and heat. The high linear correlation between poleward energy transports and temperature over the Barents Sea sector suggests that poleward energy transports dominate the regional warming trend there. Meanwhile, in summer, more frequently occurring blocking over the Beaufort Sea sector causes a positive trend of poleward moist and heat transport over the East Siberian Sea sector. The high linear correlation between the blocking occurrence and temperature suggests that the increasing shortwave radiation and subsidence within the more frequently occurring blocking contribute to the regional warming trend.
• Blocking anticyclones contribute to the increasing poleward energy transport during the last 40 years
• In winter, this increasing poleward energy transport leads to a regional warming trend over the Barents Sea
• In summer, the regional warming trend is dominated by the subsidence and solar radiation inside the more frequently occurring blocks
There are many blocking detection algorithms available in the literature, such as the TM index (Tibaldi & Molteni, 1990), the pv-θ index (Tyrlis et al., 2015, 2019, 2020), and the Absolute Geopotential Height reversal method (AGH; Davini et al., 2012). The TM index has been utilized to identify Ural blocking, believed to contribute to the accelerated sea ice loss over the Barents Sea (Gong & Luo, 2017; Luo et al., 2016). This index requires a positive meridional height gradient and a significant westerly flow within the 40°–60°N and the 60°–80°N latitude bands, respectively. Blocking detection with the TM index also requires defining a longitude band, which imposes a spatial limitation on Arctic blocking detection. Tyrlis et al. (2021) studied high-latitude blocks with both the pv-θ and the AGH indexes and showed that although they agree on the detection of midlatitude blocks, they show a large discrepancy in the frequency of high-latitude blocks. To avoid the problems of the above-mentioned blocking detection methods, the blocking track algorithm based on potential vorticity (Schwierz et al., 2004) is used here. This algorithm tracks and quantifies all blocks globally and works well also over the high Arctic (Wernli & Papritz, 2018). It follows the entire life cycle of blocks, quantifying their intensity, occurrence and persistence. Using this blocking track algorithm, we attempt to study whether the amplified Arctic warming is attributable to the increasing Arctic blocking occurrence.
Methods
As atmospheric blocks occur, a negative potential vorticity anomaly (APV) is always located in the immediate vicinity of the tropopause. A major negative APV is therefore an indicator of a strong anticyclonic circulation, providing the dynamical basis for the blocking track algorithm used here (Schwierz et al., 2004; Sprenger et al., 2017). In this algorithm, the vertically averaged APV between 500 and 150 hPa is used as the indicator of blocks, defined by the closed contour of APV at the value of −1.3 PVU (1 PVU equals 10⁻⁶ m² s⁻¹ K kg⁻¹). In other words, areas with a negative APV not larger than −1.3 PVU are considered blocks; more negative APV indicates stronger blocks. If the blocks at nearby time steps overlap by more than 70%, they are assumed to belong to the same blocking life cycle (Figure S1a in Supporting Information S1). Here, we only consider blocks with a lifetime of at least 5 days. Increasing/decreasing this minimum lifetime decreases/increases the magnitude of the blocking frequency but does not change its geographical distribution. This blocking track algorithm is applied to the European Centre for Medium-Range Weather Forecasts latest reanalysis, ERA5 (Hersbach et al., 2020), with a 6-hourly temporal resolution and a 0.75° horizontal resolution. We explore all blocking life cycles from 1979 to 2018, applying a 1-day running average before applying the blocking track algorithm.
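To make the linking criteria concrete, the following minimal Python sketch (our own illustration; the published tracker of Schwierz et al. handles multiple coexisting blobs, splitting and merging) links 6-hourly APV footprints into life cycles using the 70% overlap rule and the 5-day minimum lifetime:

```python
# Illustrative, simplified blocking tracker: one candidate footprint per
# 6-h time step is assumed; `masks` is a list of 2-D boolean arrays
# marking grid cells where the 500-150 hPa mean APV <= -1.3 PVU.
# Measuring the 70% overlap relative to the smaller footprint is our
# assumption; the original algorithm may define it differently.
import numpy as np

def track_blocks(masks, min_steps=20, min_overlap=0.7):
    """Link footprints into life cycles; min_steps=20 is 5 days at 6 h."""
    tracks = []        # finished life cycles, as lists of time indices
    current = None     # the life cycle currently being extended
    prev = None
    for t, mask in enumerate(masks):
        if prev is not None and mask.any() and prev.any():
            common = np.logical_and(mask, prev).sum()
            frac = common / min(mask.sum(), prev.sum())
        else:
            frac = 0.0
        if frac >= min_overlap:
            current.append(t)                   # same life cycle continues
        else:
            if current is not None and len(current) >= min_steps:
                tracks.append(current)          # close a long-lived block
            current = [t] if mask.any() else None
        prev = mask
    if current is not None and len(current) >= min_steps:
        tracks.append(current)
    return tracks
```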
Considering the winter (DJF) and summer (JJA) seasons separately, a record of the locations of each identified block is kept throughout its life cycle, as in the code sketches shown here. Figure S1a in Supporting Information S1 shows schematic examples of the 6-hourly footprints of two individual blocks. The occurrence and the number of individual blocks are accumulated at each grid point; occurrence is defined as the total number of blocking days at each grid point. For instance, in Figure S1a in Supporting Information S1, the blocking occurrences at grid points A, B, and C are 1.25, 2, and 1.75 days, respectively, as the paths of the two blocks partially overlap. Blocking number is defined as the number of individual blocks that pass over each grid point. Hence, in Figure S1b in Supporting Information S1, the blocking numbers at grid points D, E, F, and G are 3, 4, 2, and 1, respectively. The average APV at each grid point when blocking occurs is taken as the blocking intensity, while the accumulated blocking occurrence divided by the accumulated blocking number is taken as the mean blocking persistence.
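The three per-grid-point diagnostics can then be accumulated from the tracked life cycles. Below is a minimal sketch (variable names are ours) that implements the definitions above: each 6-hourly footprint contributes 0.25 days of occurrence, and each life cycle contributes at most one count to the blocking number per grid point.

```python
# Illustrative accumulation of occurrence, number and persistence.
# `tracks` is a list of life cycles; each life cycle is a list of 2-D
# boolean footprints at 6-hourly resolution.
import numpy as np

def blocking_stats(tracks, grid_shape, dt_days=0.25):
    occurrence = np.zeros(grid_shape)   # total blocking days per cell
    number = np.zeros(grid_shape)       # distinct blocks passing per cell
    for cycle in tracks:
        visited = np.zeros(grid_shape, dtype=bool)
        for footprint in cycle:
            occurrence += footprint * dt_days   # each 6-h step = 0.25 d
            visited |= footprint
        number += visited                       # one count per block
    # mean persistence: accumulated days divided by number of blocks
    persistence = np.where(number > 0, occurrence / np.maximum(number, 1), 0.0)
    return occurrence, number, persistence
```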
Monthly mean poleward moisture transport (PMT) and poleward heat transport (PHT) are evaluated from the ERA5 reanalysis data set as the vertical integrals of the northward water vapor and heat fluxes from the surface of the Earth to the top of the atmosphere, respectively:

PMT = (1/g) ∫ q v dp,    PHT = (1/g) ∫ c_p T v dp,

where v is the northward velocity, T is the air temperature, q is the specific humidity, c_p is the specific heat of air at constant pressure, and g is the gravitational acceleration. All other variables used in this paper, such as temperature, geopotential height, and the surface energy-budget terms, are also derived from the ERA5 reanalysis data set.
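For readers who want to reproduce these diagnostics from pressure-level data, here is a minimal sketch (our own; the value of c_p, the trapezoidal rule, and pressure-level input are simplifying assumptions relative to ERA5's native model-level integrals):

```python
# Sketch: vertical integrals PMT = (1/g) int q v dp and
# PHT = (1/g) int c_p T v dp on pressure levels.
import numpy as np

G = 9.80665   # gravitational acceleration, m s^-2
CP = 1004.0   # specific heat of dry air at constant pressure, J kg^-1 K^-1

def vint(field, p):
    """Trapezoidal pressure integral along axis 0; levels ordered
    top-of-atmosphere to surface, p in Pa."""
    mid = 0.5 * (field[1:] + field[:-1])
    dp = np.diff(p).reshape((-1,) + (1,) * (field.ndim - 1))
    return (mid * dp).sum(axis=0)

def poleward_transports(v, T, q, p):
    """v (m/s), T (K), q (kg/kg): arrays of shape (nlev, nlat, nlon).
    Returns PMT in kg m^-1 s^-1 and PHT in W m^-1."""
    pmt = vint(q * v, p) / G
    pht = vint(CP * T * v, p) / G
    return pmt, pht
```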
Winter Poleward Moisture and Heat Transport
We identify positive trends of PMT and PHT during the recent 40 winters over the Barents Sea sector, at 3 kg m⁻¹ s⁻¹ decade⁻¹ and 1.2 × 10⁸ W m⁻¹ decade⁻¹, respectively, while a negative trend of PMT is located over western Siberia (Figure 1a). Similar monthly trends of PMT have also been found by Rinke et al. (2019). South of 85°N, a positive trend of PHT is located slightly to the west of its PMT counterpart, but they overlap north of 85°N (Figure 1). All trends are significant at the 95% level, and we consider the region with a significant positive PMT trend a sensitive region (highlighted by green dots in Figure 1a). No significant trend of zonal moisture or heat transport was found over this sensitive region. The mean sea ice concentration (SIC), temperature at 850 hPa (T850), PHT and PMT in the sensitive region have large interannual variations (Figure 2b) but display significant linear trends (Figure S2a in Supporting Information S1). Mean T850, PHT and PMT have positive trends, while SIC has a negative trend (Figure S2a in Supporting Information S1). Not surprisingly, temperature has a high linear correlation with PHT and PMT: 0.78 and 0.85, respectively. Linear regression indicates that positive PMT is correlated with blocking over the Barents-Kara Sea sector (Figure S3a in Supporting Information S1). As this blocking occurs, warm and moist Atlantic air is transported into the high Arctic (positive Tadvct in Figure S3d in Supporting Information S1), leading to a positive temperature anomaly over the Barents Sea sector (Figure S3c in Supporting Information S1) and also exerting positive anomalies of turbulent heat and longwave radiation fluxes at the surface (Figure S4 in Supporting Information S1). Consequently, it contributes to sea ice decrease over the Barents Sea sector (Figure S3b in Supporting Information S1). Many previous studies have reached similar conclusions (Binder et al., 2017; Gong & Luo, 2017; Kim et al., 2019; Luo et al., 2016; Sedlar et al., 2011), but few elaborate on the cause of this positive PMT trend over the Barents Sea sector.
Since blocking can statistically explain the positive anomalies of PMT and PHT over the Barents Sea sector, we further explore whether trends in blocking occurrence can also explain the positive trends of PMT and PHT. The trend of the 2D moisture/heat transport (vector fields in Figures 1a and 1b) suggests a positive trend of anticyclonic circulation over the Barents Sea sector, to some extent supporting this hypothesis. To test this hypothesis, we quantify Arctic blocking occurrence with the method described in Section 2. During the last 40 winters, blocking occurrence has increased at a rate of ∼3 days decade⁻¹ (Figure 1c), at exactly the location of the positive GH anomaly (Figure S3a in Supporting Information S1). Meanwhile, in the vicinity of these more frequently occurring blocks, T850 has increased by >1 K decade⁻¹ (Figure 1e), while SIC has decreased by >20% decade⁻¹ (Figure 1d). The spatial structures of the trends in SIC, T850, and temperature advection (Tadvct) in Figure 1 are all similar to their corresponding regression-based anomalies in Figure S3 in Supporting Information S1, further supporting the hypothesis.
To further understand the cause of the increased blocking occurrence, we quantify whether this positive trend comes from an increase in blocking number or in blocking persistence. As shown in Figures 2a and 2b, a significant positive trend in blocking persistence is found exactly in the region where the positive trend of blocking occurrence occurs, which implies that the increasing blocking occurrence is due to longer blocking durations rather than to an increase in the number of events. The blocking intensity has also significantly strengthened to the east of Svalbard in the last 40 yr (Figure 2c). Even though a positive trend in blocking number also occurs near the northwestern Scandinavian coast and the eastern and northeastern coasts of Greenland (Figure 2a), this does not contribute to any significant positive trend of blocking occurrence over the Barents Sea sector.
The anticyclonic flow around the blocks leads to equatorward flow to the east of the block. As the blocks occur more frequently over the Barents Sea sector, negative trends of PMT, PHT (not significant) and temperature advection are found to the east of the blocks, indicating more frequent cold air outbreaks over Siberia (Figures 1a and 1b). In conjunction with more positive PMT and PHT events occurring over the Barents Sea sector, the climate pattern of a warm Barents Sea and cold Siberia has become more frequent during the past 40 yr.
Summer Poleward Moisture and Heat Transport
In summer, significant positive trends in both PMT and PHT are found over the East Siberian Sea sector, with significant negative trends over the Barents Sea sector (Figure 3). We take the region with a significant positive PMT trend as a sensitive region. Similar to the winter PMT/PHT over the Barents Sea sector, the positive mean PMT and PHT in the sensitive region can also be statistically explained by an Arctic blocking (Figure S5a in Supporting Information S1). This Arctic blocking transports warm and moist air from lower latitudes into the Arctic, inducing warming anomalies (Figure S5c in Supporting Information S1) and sea ice melt (Figure S5b in Supporting Information S1), presumably due to positive anomalies in longwave radiation (Figure S4b in Supporting Information S1) and turbulent heat flux (Figures S6d and S6e in Supporting Information S1), despite negative shortwave radiation anomalies (Figure S6c in Supporting Information S1). Similar results are also shown by Tjernström et al. (2015b) and You et al. (2020, 2021a). However, unlike its winter counterpart, where the positive temperature anomaly is located between the positive and negative GH anomalies (Figures S3a and S3c in Supporting Information S1), the summer positive temperature anomaly is located almost entirely in the area with a positive GH anomaly (Figure S5b in Supporting Information S1), while the net surface energy budget is greatly enhanced within the anticyclonic anomaly and not in the area with warm advection (Figure S6a in Supporting Information S1). Although T850 is positively correlated with PMT and PHT (correlation coefficients 0.39 and 0.45, respectively), the correlation is weaker than that in winter (Figure S7 in Supporting Information S1). This implies that, in summer, subsidence and solar irradiance in blocks (Figure S6c in Supporting Information S1) are more important for the local warming than the poleward energy transport, in agreement with Kay et al. (2008) and Papritz (2020). The preliminary results show that the regional warming over Baffin Bay is dominated by increased solar irradiance, while the regional warming over the Beaufort Sea sector is dominated by enhanced subsidence. The exact contributions from subsidence and net solar irradiance will be quantified in future case studies.
The positive trends in PMT and PHT during summer can also be statistically explained by increasing blocking occurrence (Figure 3c). Blocking occurrence is increasing by ∼1 day decade⁻¹ over the Beaufort Sea sector. It is decreasing over the East Siberian Sea sector (∼−0.8 days decade⁻¹), probably suggesting that low-pressure systems (cyclones) are occurring more often over the East Siberian Sea sector (Akperov et al., 2019). Between the cyclone and the block there exists a strong geopotential height gradient, which induces strong poleward moisture and heat transport. As both the cyclones and the blocks occur more frequently, both PMT and PHT show positive trends between these two increasingly prevalent systems: 7 kg m⁻¹ s⁻¹ decade⁻¹ and 1.2 × 10⁸ W m⁻¹ decade⁻¹, respectively. This hypothesis is also supported by the similarity in Tadvct between Figure 3f and Figure S5d in Supporting Information S1, as well as the similarity in SIC between Figure 3d and Figure S5d in Supporting Information S1. The increasing blocking occurrence over the Beaufort Sea sector is mostly contributed by an increasing blocking number (∼4 decade⁻¹), while the blocks have also intensified over the last 40 yr (Figure 4).
The linear correlation coefficient of 0.74 between T850 and blocking occurrence over the Beaufort Sea sector is larger than that between T850 and PMT (0.39) in the sensitive region (Figure S8a in Supporting Information S1). T850 shares the same magnitude of linear trend as blocking occurrence over the last 40 summers (Figure S8b in Supporting Information S1), which implies that in summer the blocks contribute more to Arctic warming than the warm advection. This is the case not only over the Beaufort Sea sector; there is always a positive trend of blocking occurrence in regions with a positive trend in T850. Over Baffin Bay, for example (Figure S9 in Supporting Information S1), the linear correlation coefficient (0.85) is even higher than that over the Beaufort Sea sector.
Discussion
Low-pressure systems over Iceland often cooperate with the blocking over the Barents Sea to transport moisture and heat into the high Arctic (Figure S3a in Supporting Information S1). Tracing the development of the blocking over the Barents Sea sector, we find that it often originates from a low-pressure system near Iceland about 6 days before the PMT peak (Figure S10 in Supporting Information S1). Six days later, as the PMT reaches its maximum, the positive NAO develops into a Rossby wave train (Figure S10 in Supporting Information S1), as also discussed by Luo et al. (2016). Low-pressure systems have been found to contribute substantially to establishing Arctic blocks by transporting low-PV air cross-isentropically from the lower troposphere into the upper troposphere (Murto et al., 2022; Steinfeld & Pfahl, 2019; Wernli & Papritz, 2018). Therefore, we hypothesize that more predominant low-pressure systems lead to the increasing blocking occurrence. We attempted to quantify the occurrence of the low-pressure systems with a methodology similar to the blocking detection algorithm, but found no significant trend over Iceland in winter.
Due to the amplified Arctic warming, the poleward temperature gradient decreases at high latitudes, weakening the westerly winds, and therefore weather patterns like blocks may move eastward more slowly (Overland et al., 2015; Yao et al., 2017). This could explain why increasing blocking persistence is the main contributor to the more frequently occurring blocks over the Barents Sea sector. Luo et al. (2017) carried out idealized numerical experiments under strong and weak mean flow to test this hypothesis. These more persistent blocks would pump more heat and moisture into the Arctic, warming the Arctic more intensely and in turn further weakening the mean flow, as synthesized in Figure 5a. Some studies have also argued that storms contribute to the poleward moisture transport (Rinke et al., 2017). However, without the poleward steering of blocks, transient moisture transport by a storm may not contribute much to the year-to-year trend of PMT/PHT during the last 40 yr, as the positive PMT/PHT caused by the east side of the storm may be offset by the negative PMT/PHT induced on the west side of the storm. The other point is that a block, as a larger-scale pattern, can continuously bring even warmer air masses into the high Arctic for days. In some cases when a storm and a block occur together, they can transport much more warm air mass into the Arctic (Binder et al., 2017; Murto et al., 2022).
In summer, the development of Beaufort blocking also statistically originates from the low-pressure system over the Bering Sea (Figure S11 in Supporting Information S1), in agreement with case studies in Wernli and Papritz (2018). Unlike its winter counterpart, however, the more frequently occurring blocks over the summer Beaufort Sea sector result from an increased blocking number, which cannot be explained by weakened westerlies. Also unlike the winter Barents Sea sector, the main causes of summer Arctic warming are the enhanced subsidence and surface solar irradiance in the more frequently occurring blocks. Since the ice-albedo feedback can only explain ∼30% of surface Arctic warming in summer (Graversen & Wang, 2009), contributors related to the blocks, such as the subsidence and solar radiation within the blocking anticyclones, could potentially be responsible for the rest (∼70%) of the Arctic warming, as synthesized in Figure 5b.
Conclusions
Using ERA5 reanalysis, the amplified Arctic warming is attributed to the increasing occurrence of Arctic blocking. According to previous studies, poleward moisture and heat transport contribute to Arctic warming, and positive PMT and heat transport are statistically related to blocks. In this research, positive trends of PMT and heat transport during 1979–2018 are identified over the winter Barents Sea sector (3 kg m⁻¹ s⁻¹ decade⁻¹; 1.2 × 10⁸ W m⁻¹ decade⁻¹) and the summer East Siberian Sea sector (7 kg m⁻¹ s⁻¹ decade⁻¹; 1.2 × 10⁸ W m⁻¹ decade⁻¹). We have attributed these positive trends to the increased blocking occurrence as quantified with a blocking track algorithm. Unlike other blocking detection methods, this algorithm captures Arctic blocks by tracking each block separately through its entire life cycle. Based on this algorithm, we evaluated the trends of blocking occurrence, blocking persistence, and blocking number at each grid point.
We find that Barents Sea blocks occur more often and are more intense during the last 40 winters, at rates of ∼3 days decade⁻¹ and −0.25 PVU decade⁻¹, respectively, and induce increasingly stronger poleward moisture/heat transport over the Barents Sea sector, leading to surface warming and sea ice melt. This increasing blocking occurrence is mainly due to longer blocking persistence (by ∼8 hr decade⁻¹). Meanwhile, more blocks over the Beaufort Sea sector (∼1 day decade⁻¹) cause a positive trend of poleward moisture and heat transport over the East Siberian Sea sector (7 kg m⁻¹ s⁻¹ decade⁻¹ and 1.2 × 10⁸ W m⁻¹ decade⁻¹, respectively), inducing surface warming and sea ice melt. However, in summer, enhanced solar irradiance and subsidence within the more frequently occurring blocks contribute more to the surface warming than the warm advection that occurs to the west of the blocks. Over the Beaufort Sea sector, the blocking occurrence has a high linear correlation with temperature during the last 40 summers. The increased summer blocking occurrence is mainly contributed by increased blocking numbers (∼4 decade⁻¹). Other regions, like Baffin Bay and Far East Russia, also display a larger occurrence of blocks, with a corresponding temperature increase. However, the underlying reason for the increased summer Arctic blocking number is still unclear and deserves more attention from atmospheric scientists.
Figure 1. The linear trend of (a) poleward moisture transport (PMT) and 2D moisture transport (vectors), (b) poleward heat transport (PHT) and 2D heat transport (vectors), (c) blocking occurrence, (d) sea ice concentration (SIC), (e) 850 hPa temperature (T850), and (f) temperature advection (Tadvct) during 1979–2018 winters. The stippling, including vectors in Figures 1a and 1b, indicates statistical significance at the p < 0.05 level from a Student's t test. The westernmost and easternmost limits of the Barents Sea sector are highlighted by cyan dashed lines.
Figure 2. The linear trend of (a) blocking number, (b) blocking persistence and (c) blocking intensity during 1979–2018 winters. The stippling indicates statistical significance at the p < 0.1 level from a Student's t test. The westernmost and easternmost limits of the Barents Sea sector are highlighted by cyan dashed lines. Note that a negative trend in blocking intensity means intensified blocks, since blocking intensity is negative by definition.
Figure 4. The linear trend of (a) blocking number and (b) blocking intensity during 1979–2018 summers. The stippling indicates statistical significance at the p < 0.1 level from a Student's t test. The westernmost and easternmost limits of the Beaufort Sea sector are highlighted by magenta dashed lines.
Figure 5. Conceptual figure for the winter (a) and summer (b) Arctic warming mechanisms.
Forced current sheet structure, formation and evolution: application to magnetic reconnection in the magnetosphere
By means of a simulation model, the earlier predicted nonlinear kinetic structure, a Forced Kinetic Current Sheet (FKCS), with extremely anisotropic ion distributions, is shown to appear as a result of a fast nonlinear process of transition from a previously existing equilibrium. This occurs under the triggering action of a weak MHD disturbance that is applied at the boundary of the simulation box. In the FKCS, current is carried by initially cold ions which are brought into the CS by convection from both sides and accelerated inside the CS. The process then appears to be spontaneously self-sustained, as an MHD disturbance of a rarefaction wave type propagates over the background plasma outside the CS. As at the Alfvénic discontinuity in MHD, transformation of electromagnetic energy into the energy of plasma flows occurs at the FKCS. But unlike the MHD case, "free" energy is produced here: dissipation should occur later, through particle interaction with turbulent waves generated by the unstable ion distribution formed by the FKCS action. In this way, an effect of magnetic field "annihilation" appears, required for fast magnetic reconnection. Application of the theory to observations at the magnetopause and in the magnetotail is considered.
Introduction
In recent studies, much evidence has appeared as to the existence and evolution of thin current sheets (CS) in space plasmas. Impressive evidence of this kind relating to the Earth's magnetosphere was presented at the STAMMS conference (Orléans, France, May 2003).
Theoretically, it is generally accepted (e.g. Syrovatskii, 1971; Parker, 1979) that slow, quasi-static evolution of magnetized space plasma objects often leads to the formation of CS of relatively small thickness. Later development of the configuration is believed to involve transformation of stored magnetic energy into energy of plasma flows, and dissipation. That actually means the formation of specific "dissipative structures" (Nicolis and Prigogine, 1977; Haken, 1983) and is a manifestation of self-organization in open nonlinear systems. The relevant nonlinear dynamics is still an open problem.
The small transverse scale of a CS arising during quasi-static evolution often means that only a kinetic description is appropriate. On the other hand, we can then treat the CS locally as a one-dimensional structure.
In the best documented case of the Earth's magnetosphere, the processes of CS formation and thinning are substantial constituents of its global dynamics. This involves a phase of quasi-static evolution that is followed by a catastrophe of equilibrium (Kropotkin and Sitnov, 1997; Kropotkin, 2000; Kropotkin et al., 2002a, b). In the global equilibrium state, i.e. on the quasi-static evolution stage, stability of the system is presumably dependent on the current density in the near-Earth portion of the magnetotail CS, where a transition takes place from the dipole-like to the tail-like field line structure (the transition region, TR). A typical feature of the TR is intense CS thinning during substorm disturbances (Sergeev et al., 1993; Baker et al., 1999). Before the substorm activation (onset) the process may be treated as slow and quasi-static (Kropotkin and Lui, 1995; Schindler, 1998), but during the activation it should become much faster. We propose that it is then induced by a fast, explosion-like nonlinear process of current filamentation in the tearing mode that occurs in a somewhat more distant portion of the magnetotail (Kropotkin, 2000; Kropotkin et al., 2002a, b).
We argue that outside the nonlinear tearing instability site, the induced process of CS thinning eventually results in the formation of a specific kinetic structure, namely an anisotropic forced kinetic CS (FKCS); inside that structure, the magnetic energy stored in the magnetotail is transformed into the energy of ions accelerated inside such a CS. That structure provides merging of magnetic field lines, and thus facilitates the global effect of magnetic reconnection.
In recent papers by these authors and their co-workers (Kropotkin and Domrin, 1995, 1996, 1997; Kropotkin et al., 1997; Sitnov et al., 2000; Sitnov and Sharma, 2000) a theory of a stationary FKCS was constructed. Substantial earlier work, started by the studies of Speiser and continued in papers by Eastwood, Hill, Frankfort and Pellat, Burkhart et al., Holland and Chen, and Pritchett and Coroniti, provided guidelines for those recent studies.
The theory of the FKCS as a regular nonlinear plasma structure is based on the existence of a specific quasi-adiabatic invariant, corresponding to ion oscillations about the central plane during their motion on Speiser orbits, and on the relation of that invariant to the magnetic moment, the adiabatic invariant of the ion motion outside the CS. The CS is presented as a self-consistent structure. A profile of the equilibrium sheet was determined, along with its dependence on features of the ion distribution function.
A solution exists if outside the CS the distribution function is highly anisotropic: there is a pair of counter-streaming ion flows along B with a relative velocity equal to 2V_A, where V_A = B₀/√(4πmn_c) is the Alfvén speed. The characteristic scale of the structure is determined by the Larmor radius ρ_c = √(2T_c/m)/Ω₀ and the inertial length λ_c of ions outside the CS (the ion flows outside the CS have shifted Maxwellian distributions with temperature T_c; Ω₀ = eB₀/mc).
In the real configuration, ions convecting in the z direction towards the CS in the crossed E_y and B_x fields have a near-zero x component of the bulk velocity. At the same time, those ions that run away from the CS, having been accelerated by the electric field inside the CS, have a bulk velocity equal to 2V_A in that direction. The corresponding outward energy flux may be evaluated, and it coincides with the inward Poynting flux value. That energy transformation is just a manifestation of magnetic merging in the CS.
A substantial question remained as to whether the FKCS can really arise as a result of thin CS evolution. In Sect. 2 we analyse the process of FKCS formation (Domrin and Kropotkin, 2002, 2003). To this end we have worked out a special particle code. In Sect. 3 we present arguments to substantiate the adopted model of the fast initial disturbance, for the particular case of substorm activation; for a later stage of time-dependent evolution, we present the results of a theory of spontaneous magnetic field merging self-sustained by the FKCS action. Section 4 is devoted to the general situation of a thin CS as a planar discontinuity in collisionless plasma, with counterstreaming flows allowed; the FKCS is a particular case of such a situation. More general cases are discussed, along with an analysis of corresponding observational evidence.
In the concluding Sect. 5, the results are discussed in the more general context of the fast magnetic reconnection problem.
2 Numerical simulation: the problem formulation, the solution method, and the results

We have worked out a one-dimensional model for numerical simulation of the CS evolution, in order to reveal the way in which a stationary FKCS type structure arises (Domrin and Kropotkin, 2002, 2003). Dynamics of the hot plasma ions belonging to the initial plasma sheet and of the surrounding cold plasma are described by kinetic equations. A solution for ions is found by means of a particle code; electrons form a massless cold background. Self-consistent electromagnetic fields are determined by a solution of the Maxwell equations.
In the initial stationary state the system consists of a hot plasma with ion temperature T, forming a one-dimensional Harris-type CS, a uniform, relatively cold plasma background (with parameters n_c, T_c ≪ T), and the magnetic field B = (B_t(z), 0, B_n). We set B_n ≪ B₀, with B₀ being the tangential field B_t at z ≫ L₀, where L₀ is the characteristic scale of the Harris sheet. The plasma in the CS is in equilibrium over the z direction owing to the tangential field component B_t. The thickness of the simulation box (SB), Z₀, is much greater than L₀.
Evolution of the system is started by an MHD disturbance: a fast magnetosonic wave incident on the CS in the normal direction. Far from the CS the incident wave is set weak, and it propagates with the Alfvén velocity V_A. The wave magnetic field is directed along the x-axis while the electric field is in the y direction.
At the SB boundary Z₀, the electric field of the incident wave, E_i(Z₀, t), is given: E_i = 0 at t < 0, E_i = const ≠ 0 at t > 0. As a result of the wave interaction with the CS, a reflected wave appears, propagating outwards. At the Z₀ boundary, the electromagnetic field is a sum of the fields of the incident and the reflected waves. Setting the boundary condition we assume that the reflected wave is weak as well; this fixes its velocity outside the SB, namely again the Alfvén speed, and the amplitude ratio of the wave electric and magnetic fields. The magnetic field B_t at the second boundary of the SB, z = 0, is set equal to zero, due to symmetry. For the particle motion, conventional boundary conditions were adopted. The boundary condition involves both the inward convective ion flux and the outward freely escaping flux.
The translational symmetry over the x, y directions is effectively used. Then the field equations may be reduced to an ordinary differential equation for the y component of the vector potential. The full system of equations consists of the equations of motion for separate particles, of a "material" equation coupling the current density and the electric field values, and of the Maxwell equations. Solutions of the equations are sought at particular instants (over a sequence of time steps with duration Δt). At every time step, kΔt, the number of particles convected into the SB is identified, and their distribution is generated. To calculate the smoothed current density j₁ at the spatial grid points, a linear weighting scheme is used. Then j₁ is inserted into a finite-difference equation for the vector potential. The equation is supplemented with conditions at both SB boundaries. The solution of the resulting set of algebraic equations is found by means of the Thomas algorithm. To solve the equations of motion of a particle, values of the magnetic and electric fields are recalculated at the particle position, by interpolation from the grid points. Then, solving the equations of motion, the particle position in phase space is determined at the next time moment, (k+1)Δt.
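For reference, a generic Thomas-algorithm routine for a tridiagonal system of this kind is sketched below (our own minimal illustration; the boundary treatment of the actual code, which must incorporate the incident/reflected-wave conditions, will differ):

```python
# Generic Thomas algorithm for a tridiagonal system
# a_i x_{i-1} + b_i x_i + c_i x_{i+1} = d_i, of the kind obtained from
# the finite-difference equation for the vector potential A_y.
import numpy as np

def thomas(a, b, c, d):
    """a, b, c, d: 1-D arrays of equal length n (a[0], c[-1] unused)."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                 # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```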
The initial Harris sheet with the plasma (protons and electrons) density profile n(z)/n(0) = cosh⁻²(z/L₀) is in equilibrium with the magnetic field B_t(z)/B₀ = tanh(z/L₀); here n(0) = B₀²/(8πT). The nonzero magnetic field component normal to the sheet is set to B_n = 0.2B₀. The CS thickness is L₀ = ρ/0.0587, with ρ = (2T/m)^{1/2}/Ω₀. The sheet is embedded in a uniform plasma with a density which we have set equal to the maximum density of hot ions, n_c = n(0) = N₀; the background plasma temperature is set to T_c = 0.1T. The time step is taken equal to 0.1 Ω₀⁻¹. The SB size was 3L₀. The initial state of the system was modeled with 15 000 macroparticles in the Harris sheet and 45 000 macroparticles of cold background plasma. The code allows one to take into account the escape of accelerated particles from the SB, and the influx of new ones across the Z₀ boundary, because of convection. Thus, the total particle number in the SB is not fixed in the course of evolution.
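A minimal sketch of this initialization in normalized units (our own illustration; the grid resolution, the random seed and the rejection-sampling loader are assumptions, not details of the original code):

```python
# Sketch of the initial state in normalized units: lengths in L0,
# fields in B0, densities in n(0).
import numpy as np

rng = np.random.default_rng(0)

L0, Zbox = 1.0, 3.0               # Harris scale and box size (in L0)
Bn = 0.2                          # normal field component, in B0
z = np.linspace(0.0, Zbox, 601)   # grid (resolution is our choice)

n_hot = np.cosh(z / L0) ** -2     # Harris density profile n(z)/n(0)
Bt = np.tanh(z / L0)              # tangential field B_t(z)/B0
n_cold = np.ones_like(z)          # uniform background, n_c = n(0)

def sample_positions(n_particles):
    """Load hot-ion macroparticles from the cosh^-2 profile by
    rejection sampling (15,000 macroparticles, as in the simulation)."""
    out = []
    while len(out) < n_particles:
        zc = rng.uniform(0.0, Zbox)
        if rng.uniform(0.0, 1.0) < np.cosh(zc / L0) ** -2:
            out.append(zc)
    return np.array(out)

z_hot = sample_positions(15000)
```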
In Fig. 1 the dependence on z is shown of the total (dimensionless) density N(z)/N₀, of the magnetic field intensity B_t(z)/B₀, and of the electric field intensity E(z)/E₀ (E₀ = V_A B_n/c), for the time moments t = 0.1 Ω₀⁻¹, 5 Ω₀⁻¹, 10 Ω₀⁻¹, 15 Ω₀⁻¹, and 20 Ω₀⁻¹, correspondingly. It is seen how a fast magnetosonic wave propagates in the uniform plasma from the boundary z = Z₀, in the form of a weak collisionless shock. According to theory, the peak-to-peak distance, equal to the wavelength of the oscillatory shock front, should be 2πλ_c, where λ_c is the ion inertial length. In the simulation, this is valid with good accuracy. In theory also, the wave propagates in the background plasma with the Alfvén velocity V_A. An estimate based on the simulation is in very good agreement with theory as well. The plasma density and magnetic field near the other boundary, z = 0, stay undisturbed, and the electric field is nearly zero.
During the time interval 30 Ω₀⁻¹ < t < 60 Ω₀⁻¹ the wave propagates from the boundary of the Harris sheet to its central plane. Later on, a reflected wave may be seen. In the vicinity of z = 0, the electric field is then already nonzero. A sheet with enhanced plasma density and a sharp change in the magnetic field starts to form at t ∼ 80 Ω₀⁻¹.
At even later times, t ∼ 200 Ω₀⁻¹, a thin embedded sheet has formed in the vicinity of z = 0. A structure is also seen which resembles the field profile in the initial Harris sheet. It stays there because of the hot plasma diamagnetism.
It is seen in Fig. 2, for t > 600 Ω₀⁻¹, that an almost uniform nonzero electric field has been established, and its magnitude changes only slightly. While there is still much hot plasma in the magnetic trap, it is now involved in the electric drift in the x direction. At the same time, at t ∼ 650 Ω₀⁻¹, a balance is established between the convective ion flux into the SB and the flux of accelerated ions outgoing along the field lines.
The established thin central CS is the same FKCS as predicted by the theory; see Sect. 3. This is clear from comparison of the B(z) profile with a theoretical profile calculated for the same value of the parameter v_{Tc}/V_A = 0.33, taken from Sitnov et al. (2000), see Fig. 3. The sheet thickness estimates taken at the B₀/2 level differ for large t ∼ 1000 Ω₀⁻¹ by several per cent only. In addition, according to the FKCS theory, we have E/B_n = V_A/c; in the simulation we obtain E/B_n = 0.97 V_A/c. Energy spectra have been calculated and compared with the theoretical spectrum. There is a good similarity in the spectrum profile for low energies (particles convecting slowly towards the CS), and for high energies, ∼40 keV, corresponding to ions accelerated inside the FKCS.

The disturbance propagates along the field lines towards the Earth. Propagating in a plasma with a CS, such a disturbance gets deformed to a great extent. Near the CS, at distances small compared to its scale along the Sun-Earth line, L_x, the disturbance is reduced to a pair of fast magnetosonic waves incident from both sides on the initial CS. Even if they are weak outside the CS, as they interact with the CS, in the region with strongly depressed field B inside the CS, the waves become nonlinear. Outside the CS reflected waves appear.
The FKCS is eventually formed.
In the initial disturbance, behind the weak shock fronts, plasma moves only in the direction perpendicular to the CS (convection). At the nonlinear stage, ions are accelerated in the E field inside the CS, are rotated in the B_n field, and are ejected from the CS along the magnetic field lines. In this way, during the CS evolution, the plasma becomes involved in motion along the x-axis. This is just the way the well-known nightside particle injection may appear during a substorm.
The above scenario of the global time-dependent structure involving the FKCS may be analyzed in more detail for later stages. Based on numerical simulation, a thin CS located in a background cold plasma is known to be able, in a short finite time, to attain the specific features of the FKCS. A solution has been constructed (Domrin and Kropotkin, 2004) that joins several time-dependent zones with different plasma properties; see the schematic in Fig. 4. The MHD extension of the gas-dynamic rarefaction wave (zone III) joins the zone of undisturbed cold isotropic plasma at rest, existing at a large distance from the CS (zone IV), with the zone of convective isotropic plasma flow (zone II), where the convection velocity is determined by the above FKCS solution. The latter zone is linked with the expanding zone of ion anisotropy (zone I), where a pair of counterstreaming ion flows exists such that the net flux of matter is equal to zero. The double-stream zone, as it expands, is however always left behind the rear edge of the rarefaction wave running from the FKCS; so the presence of this zone does not affect the linkage of the two other zones, II and III.
In the rarefaction wave, the field B_t decreases from B₀ down to B₁ ≈ B₀ − B_n. It may thus be argued that a time-dependent solution has been found which describes spontaneous "annihilation" of the magnetic field in a plasma with a current sheet; it occurs following the transformation of the CS into the FKCS.
Planar discontinuities in collisionless plasma
Observations carried out aboard a number of spacecraft while crossing the magnetopause have revealed a complex structure of the CS forming the magnetopause, manifesting itself in the magnetic field variation during the crossing. A complex and varying structure of the ion distribution function has also been found; its most significant feature is a strong deviation of that function from the thermodynamically equilibrium one. Outside the CS this is expressed in a multi-flow character of the motion (Fuselier et al., 1991).
Such a deviation is in general quite natural to expect for a relatively thin CS structure, in conditions typical for the manifestation of collisionless kinetic effects. But in such conditions, for current-carrying discontinuities in a magnetized plasma with a nonzero normal magnetic field component, the classification existing in MHD (Landau and Lifshitz, 1974) should be modified. Along with the MHD rotational discontinuity, a structure of the FKCS type is now also possible. It produces a transformation of the electromagnetic energy into the energy of a double-stream motion. In this way, a source of "free" energy appears for plasma turbulence development, with later dissipation owing to the relaxation of the ion distribution on the turbulent waves. So a structure which much more resembles a rotational discontinuity than a shock nevertheless appears to be capable of dissipating the electromagnetic field energy. By means of that structure, magnetic field "annihilation" can take place in collisionless space plasma.
Besides the limiting symmetric case corresponding to the FKCS, where the net bulk plasma flux across the discontinuity is absent, "intermediate" situations with multi-flow motion are also possible, with nonzero bulk flux but with transformation of electromagnetic energy into energy of counterstreaming ion flows still taking place.
Kinetic discontinuities with current, with the characteristic multi-flow ion motion on both sides of the CS, have evidently been observed at the dayside magnetopause. The observational data for a particular case (Avanov et al., 2001) indicate that there is a nonzero bulk ion flux across the discontinuity, but smaller than for the MHD rotational discontinuity, while outside the CS the ion distribution function has a characteristic counterstreaming feature; this corresponds to the mentioned "intermediate" situation.
Discussion and conclusion
The possibilities of kinetic simulations are in general restricted, especially in the range of time and spatial scales: only a relatively small part of the magnetospheric system may be simulated, and only on a relatively short time interval. In this situation, the choice of boundary conditions is crucial. In our approach, we point out the critical role played by a thin CS. On the one hand, a large amount of work, both in theory and in simulation, has been done (see, e.g., Birn et al., 2003, and references therein) indicating that in reconfiguration processes occurring in systems that globally are well described by ideal MHD, a thin CS should inevitably arise. At that point, MHD is no longer valid: a singularity appears in the current density distribution. On the other hand, those singularities are one-dimensional, in the sense that they appear as current sheets, with the spatial scale in one direction, across the CS, being much smaller than in the other two. This greatly simplifies the problem for the kinetic approach that becomes necessary at that stage: spatial and temporal limitations arise quite naturally. Only kinetic processes occurring in such a "prepared" CS system are studied. They are considered to be started by a fast MHD disturbance appearing in the CS vicinity, in this case taking the form of one-dimensional plane waves. This means a specific formulation of boundary and initial conditions, and thus distinguishes our study from the earlier work (see, e.g., Pritchett and Coroniti, 2002, and references therein). The appearance of the FKCS in our simulation means that specific kinetic effects come into play, involving nonadiabatic ion motions, the appearance of strong anisotropy, etc. This is totally different from the existing general situation: "The present simulations essentially exhibit fluid behavior. While it is reassuring that a collisionless kinetic plasma can behave like a fluid in its cross-field dynamics, the parallel dynamics for both waves and ballistic particle transport are not treated realistically" (Pritchett and Coroniti, 2002).
There is another problem with kinetic simulation of a dynamical process; it concerns the electrons. There are two aspects of that problem. First is the formal aspect of dealing with electrons in a hybrid code: they are treated as a massless cold background compensating the ion charge. Second is the underlying physical reasoning. If we speak of a stationary equilibrium configuration, then the electron distribution may be taken in any form which satisfies the requirement of self-consistency in the collisionless Vlasov formulation. However, if we turn to the problem of time evolution, then we can do nothing more today than follow it in the presented hybrid-code manner, implicitly assuming that electrons, rather cold initially, cannot gain too much heating during the process of FKCS formation. The 1-D hybrid code is of course unable to reproduce any fast, generally 2-D or 3-D processes, with electric fields involved, which could form wave turbulence producing electron (and ion) heating. What would be needed for the electron current to become comparable to the ion one, and thus substantially change the solution? It is easily shown that the electron temperature would then have to increase to the temperature of the plasma sheet hot ions which maintain pressure balance with the lobe magnetic field pressure. We simply assume that this does not occur on the rather short time scales involved. True, this should be verified theoretically, and that is one of the goals of future studies. Note also that a reliable solution of this problem in simulation requires a realistic particle-in-cell description of both species, i.e. one with the real electron-to-ion mass ratio, which is still unattainable in such simulations.
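For readers unfamiliar with the hybrid approach, the following minimal sketch (an illustration of the standard closure, not the code actually used in the paper) shows how a massless cold electron background enters such a model: with m_e set to zero and p_e taken as negligible, the electron momentum equation reduces to a generalized Ohm's law E = −u_i × B + (∇×B) × B / (μ0 e n), evaluated here on a 1-D grid where all quantities vary only across the sheet.

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability [H/m]
QE = 1.602e-19       # elementary charge [C]

def electric_field(z, B, u_i, n):
    """Massless-electron Ohm's law on a 1-D grid (quantities vary along z only).

    z   : (N,) grid coordinate [m]
    B   : (N, 3) magnetic field [T]
    u_i : (N, 3) ion bulk velocity [m/s]
    n   : (N,) plasma number density [m^-3]
    """
    # curl B when d/dx = d/dy = 0: (curl B)_x = -dBy/dz, (curl B)_y = dBx/dz
    curlB = np.zeros_like(B)
    curlB[:, 0] = -np.gradient(B[:, 1], z)
    curlB[:, 1] = np.gradient(B[:, 0], z)
    J = curlB / MU0                              # Ampere's law, displacement current neglected
    hall = np.cross(J, B) / (QE * n[:, None])    # Hall term; p_e ~ 0 for a cold background
    return -np.cross(u_i, B) + hall              # E = -u_i x B + J x B / (e n)
```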
With the structure of the FKCS established in the discussed way, it may be viewed as a "sink" for the convecting plasma flows directed from both sides towards the central plane z=0. Indeed, as we have shown in Sect. 4, the flows of accelerated ions escaping the sheet along the field lines, into which those convective flows are converted, have no influence on the pattern of the latter, or on the corresponding electric field E_y.
On the other hand, while the FKCS provides a mechanism responsible for spontaneous "annihilation" of the magnetic field, the magnetic field energy disappearing per unit time, being transformed into the energy of ion flows, is given by the value of the Poynting vector, Eq. (2). As an example, let us estimate the rate of energy transformation associated with a substorm activation in the geomagnetic tail. Taking B_t = 30 nT, B_n = 1 nT, and N_0 = 1 cm^-3 in hydrogen plasma, we obtain P = 0.02 erg s^-1 cm^-2. Taking into account that the activation process is localized, and assuming that the localization scales over x and y in the CS plane in the tail are equal to 3 R_E, the rate of energy transformation is 10^17 erg/s. This estimate is not in contradiction with the available observation-based estimates of energy dissipation during substorms.
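These figures can be checked in a few lines. Since the exact form of Eq. (2) is not reproduced here, the sketch below assumes the natural scaling P ≈ (B_t²/4π)·v_A(B_n) in Gaussian units, i.e. the lobe magnetic energy density carried in at the inflow speed set by the normal-component Alfvén velocity; it recovers both quoted numbers to within rounding.

```python
import math

G = 1e-5                       # 1 nT expressed in gauss
B_t, B_n = 30 * G, 1 * G       # tangential and normal field components [G]
rho = 1.0 * 1.6726e-24         # N0 = 1 cm^-3 hydrogen plasma [g/cm^3]

v_An = B_n / math.sqrt(4 * math.pi * rho)   # Alfven speed based on B_n [cm/s]
P = B_t**2 / (4 * math.pi) * v_An           # assumed Poynting flux [erg s^-1 cm^-2]

R_E = 6.371e8                               # Earth radius [cm]
rate = P * (3 * R_E)**2                     # 3 R_E x 3 R_E activation patch

print(f"P ~ {P:.3f} erg/s/cm^2, total rate ~ {rate:.1e} erg/s")
# -> P ~ 0.016 erg/s/cm^2 (~0.02) and ~6e16 erg/s (~1e17), as quoted in the text
```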
In general, the energy transformation takes place simultaneously over the whole CS surface, and this fact allows the transformed energy flux to be arbitrarily large. In that respect, the situation is similar to that of the Petschek (1964) MHD model, where the transformation occurs over a pair of slow shocks. However, as is the case there, the necessary magnetic reconnection in the strict sense, i.e. as reconnection of magnetic field lines, does not occur on the CS itself. Such a reconnection can occur in a two-dimensional model, and not in the one-dimensional one to which we are limited. In the Petschek MHD model and in its time-dependent extension (Pudovkin and Semenov, 1985), the two-dimensional feature is expressed in the presence of the magnetic field neutral line, and of a diffusion region in its vicinity. The field-line reconnection occurs right there, and it is provided by the finite conductivity. In collisionless plasma, on small scales comparable to the FKCS thickness, such a concept is not adequate. Presumably, in that case, the required dissipation is provided by Landau damping during the tearing instability.
An important difference in the overall pattern is a consequence. In the MHD case, in the region of outer flow, the whole pattern of slow dependence on (x, z, t) for the velocity field and the magnetic field may be determined, once the (in general, time-varying) rate of reconnection on the neutral line is postulated. To the contrary, in the collisionless situation, the reconnection rate at the neutral line is itself regulated by the large-scale dynamics of the system, being matched to it (see Kuznetsova et al., 2001, and references therein). It is precisely the characteristics of that large-scale dynamics that are revealed in this paper: the above rate is determined by the rate of energy transformation, Eq. (2).
Note here another difference from the MHD model (Pudovkin and Semenov, 1985), where an essentially two-dimensional pattern of the outer-zone flow appears from the very beginning. In the collisionless situation, a fast nonlinear process of FKCS formation and the onset of magnetic field annihilation may first take place at the sites of initially occurring extreme thinning of the equilibrium CS. Only later will that essentially one-dimensional process govern the rate of magnetic reconnection in the vicinity of the neutral line, as indicated by the cited studies involving numerical simulation of reconnection (Kuznetsova et al., 2001).
Fig. 1. Profiles of plasma density, the magnetic field tangential component, and the electric field, for initial moments of the CS evolution.
Fig. 3. Comparison of the magnetic field profile with the theoretical profile of the FKCS.
Fig. 4. Variations of density, magnetic field, and velocity in the plasma flows on both sides of the FKCS.
Ability to Improve Learning Practices through the Lesson Study Model in the Field Experience Program at IAIN Pontianak
In general, this study aims to improve the ability to carry out Learning Practices in the Field Experience program through the Lesson Study model. The method is qualitative, with work procedures following the action research model of Stephen Kemmis and Robin McTaggart, a spiral cycle carried out at the Department of Islamic Education, Faculty of Tarbiyah and Teacher Training, State Islamic Institute (IAIN) Pontianak. Data were collected by inventory, observation sheets, focus group discussions and documentation. Content validity was analyzed by expert judgment, while data analysis was carried out descriptively. The results of the study showed that the lesson study model implemented in the learning practices activities could improve the teaching ability of prospective PAI teachers at IAIN Pontianak. The results of this study recommend that the field experience learning practices be carried out with the Lesson Study approach.
Introduction
The Field Experience Program (learning practices) is an intra-curricular activity that must be taken by almost all students in higher education, including education students. The aim of learning practices activities is to provide space for students (practitioners) to gain factual learning experiences in the field where they will later work professionally. It is recognized that learning experiences based on authentic experiences such as learning practices are very important. Many terms are used to refer to this Field Experience Program. Among them are "field experience" (Freeman, 2010); (Liakopoulou, 2012); (Eisenhardt, Besnoy, & Steele, 2012); and (Hixon & So, 2009); "teaching practice" (Sally E. Arnett, Beth Winfrey Freeburg, 2008); and "apprenticeship" (Kennedy, 1999); (Liu, 2005); (Korpan, 2014).
Learning practices activities for education students are organized in the form of limited learning (micro teaching), guided training, and independent training directed at the formation of teaching skills, scheduled systematically under the guidance of tutor teachers and supervisors.
The State Islamic Institute (IAIN) Pontianak, as one of the teacher-training institutions, also organizes performance-based learning practices activities. Experience in the field as a member of the team implementing learning practices activities in the FTIK PAI IAIN Pontianak department revealed several things that needed to be improved. Among them, the knowledge and skills for carrying out learning possessed by students during the learning practices process are strongly colored by the abilities of the tutor teachers and supervisors. It was also found that a learning community among fellow learning practices students and supervising teachers was not formed during learning practices activities in the training schools.
The main principle of Lesson Study is to improve the quality of learning in stages by learning from one's own experience and the experience of peers. Lesson Study originates from Japan, under the name 'jugyokenkyu', and is understood as a model of educator professional development through the study of collaborative and sustainable learning based on the principles of collegiality to build learning communities (Directorate General of Higher Education, Ministry of National Education, Book 2: Guide to Implementation of Lesson Study, Program for Lesson Study for Strengthening LPTK).
The general formulation of the research problem is: "How is lesson study implemented in learning practices activities for prospective teachers of FTIK PAI IAIN Pontianak in the academic year 2015/2016?" In particular, the question to be answered is: what is the process of implementing learning practices that apply the Lesson Study model? The Lesson Study model is actually not a new program. Lesson Study is an adaptation of a program to improve the quality of learning carried out in Japan. With the concept of forming a school learning community and implementing Lesson Study, a collapsed school has risen and revived (Ashintya Widhiartha, Dwi Sudarmanto, Nining Ratnaningsih: 2008). The main principle of Lesson Study is to improve the quality of learning in stages by learning from one's own experience and the experiences of others in conducting learning activities.
Methods
This research is action research following the work procedures of the action research model (McTaggart, 1991), which takes the form of a spiral cycle. Each cycle consists of four steps: planning, action, observation, and reflection. There are several action studies on learning practices activities, including those conducted by José Federman Muñoz Giraldo (Giraldo, Federman, Quintero Corzo, & Munévar Molina, 2002); Daniel B. Robinson and William Walters (2016); (Lattimer, 2012); as well as (Albakri, Abdullah, & Jusoh, 2017). The four stages of the first cycle are presented in detail below.
Planning stage. During planning, the researchers, together with the learning practices committee, PAI department leaders, and several supervisors, discussed and conducted Lesson Study socialization for prospective tutor teachers and supervisors, distributed learning practices participants, formed an implementation team, and developed a draft of the learning practices Lesson Study model manual. The socialization activities presented a Lesson Study expert, Dr. Agung Hartoyo, M.Pd (FKIP-UNTAN lecturer), as a resource person. The socialization was carried out with the assistance of the IAIN Pontianak Quality Assurance Agency in the form of Lesson Study workshops, held on July 19-20, 2014 at Beringin, Pontianak. The number of participants in this activity was 16 people, namely 8 tutor teachers and 8 lecturers at IAIN Pontianak. The product of this socialization was the learning practices Lesson Study model manual.
Before the students went down to the learning practices locations, the learning practices committee held a general debriefing for students, ending with a symbolic handover to the training schools. On the last day of the debriefing activity (16 July 2016), a socialization of learning practices implementing Lesson Study was delivered to the students. The socialization was delivered directly by the researcher, accompanied by a supervisor and a tutor teacher from the pilot school/madrasah.
After the debriefing and socialization, each supervisor brought the students under their guidance to the location where the learning practices were held.
Forming an implementation team. The team consists of a chairperson, a secretary and members. This team is tasked with planning the program and schedule of student learning practice, followed by developing a draft of the learning practices Lesson Study model manual. Implementation stage. At this stage, students carry out learning activities with the material set by the tutor teacher. The steps taken in the implementation of Learning Practices follow the Field Experience Program Lesson Study model adapted from the Cerbin and Kopp version (Cerbin & Kopp, 2004), modified specifically for IAIN Pontianak students as presented in Figure 1. During learning practice activities, students practice in accordance with their respective agreed roles: some act as teachers and others as observers. When they carry out RPP planning discussions and reflection discussions, some act as moderators, some as note-takers, and others as members of the discussion, according to the guidebook provided.
Observation stage. The phenomena observed were: the process of discussing the draft RPP, the process of implementing learning practice activities, and the process of the reflection discussion. In making observations, the researchers were assisted by students, tutor teachers and supervisors. Observation activities were carried out by noting and recording all events based on observation guidelines prepared in advance.
Reflection stage. In the last stage of this cycle, students, tutor teachers and supervisors reflect on the activities that have been carried out. At this reflection stage, data obtained from observations and recorded notes are analyzed qualitatively. The question that needs to be answered in this reflection is: are learning practices implementing lesson study able to improve students' teaching skills? In addition, the shortcomings to be corrected in the second cycle are identified, with the same stages as in the first cycle.
Research Instrument. Three types of instruments were used. The first instrument was used to collect data about the process of implementing lesson study; it consists of 12 questions to collect data about the planning process, 11 questions to capture data about the implementation of Learning Practices, and 10 questions to collect data about the reflection process in lesson study activities.
The second instrument is used to assess the RPP and the implementation of learning. This instrument is used by supervisors and tutor teachers to assess the performance of the students' Learning Practices. There are 9 questions to assess the RPP and eight questions to assess the Learning Practices. The third instrument consists of 23 questions used to capture the opinions of students, tutor teachers and supervisors about learning practices implementing lesson study.
Data analysis. Qualitative analysis was done by summarizing the minutes of the discussions, the lesson plan scores, the Learning Practices scores, and the notes from the reflection stage.
Research results
This learning practices activity was carried out in three schools: MTsN I, SMPN 8, and SD Al Azhar. Fourteen students participated, distributed across the three schools as 4, 5 and 5 people respectively. Each student was accompanied by a supervisor and one tutor teacher. This learning practices activity was carried out in the form of lesson study, so each activity began with planning, followed by implementation, observation and reflection.
The learning material was determined by the tutor teacher in accordance with the applicable curriculum; students practiced compiling lesson plans (RPP), preparing learning resources and conducting the learning process in the classroom assigned by the tutor teacher. Before using an RPP, they first discussed it with fellow students at the same school.
The tutor (pamong) teachers, in addition to assigning classes and teaching materials for each student under their guidance, also evaluated the quality of the lesson plans and the learning carried out, and provided feedback.
The supervisors, in addition to helping the students under their guidance develop instructional materials from adequate reading sources, also conducted external observations when students carried out learning activities, assessed the lesson plans and the learning process, and provided feedback.
There are three questions whose answers were sought in this study. The first concerns the process of implementing learning practices of this kind, the second the impact on the students, and the third the opinions of stakeholders.
Making Lesson Plan
Before being used, each RPP was discussed in advance among students at the same school. Suggestions for improvement frequently concerned: the formulation of subject identity, basic competencies, indicators, learning objectives, learning scenarios, and evaluation.
In addition to suggestions, several things are worth noting. Among them: lesson plans were not always distributed to all students beforehand, not all participants gave suggestions, some students were not happy with the suggestions and comments they received, and some students were still writing their lesson plans while following the discussion.
Learning Practice
Unlike learning practices in general, in learning practices implementing Lesson Study, when a student conducts the learning process he or she is observed not only by the tutor teacher but also by fellow practitioners. These observers are tasked with following the learning process that takes place, in accordance with the observation checklist prepared by the learning practices team.
A recapitulation of the observers' notes about the implementation of the learning process shows the following. Some practicing students did not carry out all the learning steps that had been determined. Some practitioners reported that the choice of learning strategies was inappropriate. Some also lacked mastery of the teaching materials. Learning media were not well prepared. There were also students who had not used the language of instruction properly and correctly in class. A number of students had not been able to manage the class optimally. And, importantly, there were some student behaviors that were not appropriate for educators.
Reflection Activity
At the end of the cycle, a reflection was carried out, attended by all students, tutor teachers, supervisors, and the researchers. During the reflection activity, participants looked back on what had been done during that period. Most of the participants reflected that the implementation of the learning practices implementing lesson study had been carried out well. However, several things needed to be improved.
Among them were the practicing students' mastery of the material, and their obedience in following the rules of Learning Practices and of the observer role as an educator. The results of the first-cycle reflection were used as the basis for improvement in the second cycle.
Impact on Student Practice
In accordance with the purpose of the study, to improve the ability to carry out teaching activities, the impact on students is indicated by the learning practice scores given by the tutor teachers and supervisors. Tables 1, 2, and 3 respectively present a recap of scores per school location.
Presentation per school location was chosen because the three schools have different characteristics: MTsN is a religious school, the junior high school (SMPN 8) is a public school, and SD Al Azhar, although a general school, is also religiously affiliated. It is therefore better to present them separately.
The combination of the three tables, presented in Table 4 using the agreed range of scores, shows an average score of 27.52. This means that learning practices implementing lesson study produce prospective teachers who are sufficiently skilled in carrying out learning activities. The average scores by school location are 30.12 (MTsN I), 28.06 (Al Azhar Elementary School) and 23.50 (SMPN 8).
By paying attention to the suggestions from the reflection of the first cycle, the results achieved by the students in the second cycle increased significantly. The recapitulation of the second cycle is presented in Table 5.
The average score is 45.76. Achievements in each school location are approximately the same, 45.54-45.76, in the highly skilled category.
Stakeholder Opinion
Towards the end of the second-cycle reflection meeting, the researchers asked all participants to fill in a questionnaire aimed at capturing data about the learning practices participants' responses to the implementation of the lesson study model.
The questionnaire data show that learning practices participants gave varied responses, both positive (strongly agree or agree) and negative, towards the implementation of lesson study-based learning practices.
All stakeholders agreed that learning practices implementing lesson study can improve the quality of the students' learning implementation, both in their lesson plans and in the implementation of classroom learning.
Discussion
The results of this study indicate that Lesson Study can indeed improve the ability of students to carry out Learning Practices. The results are in line with research conducted by Mulyatun (Mulyatun, 2017), Siti Malikhah (Towaf, 2016), (Anwar & Rahmawati, 2014), and (Kostas, Galini, & Maria, 2014). Mulyatun's study, conducted on IAIN Walisongo Chemistry Tadris S1 students, showed that learning practices with Lesson Study could improve the competence of prospective teachers among those students. The research conducted by Siti Malikhah on the teaching practices of Social Sciences students showed the same results. Likewise, the research conducted by Rahmad Bustanul Anwar and Dwi Rahmawati and by Kostas et al. showed an increase in the micro-teaching ability of students after using lesson study in their learning.
The results of this study are also supported by the results of research conducted by Fitri Budi Suryani et al. on students of English Language Education, Semarang State University, which showed: Despite their lacks in teaching practice which are still found in their third teaching practice, the better areas the student teachers perform indicate that they have implemented the steps in microteaching lesson study better than those in their previous teaching practices.
After the teaching practice, together with their group members, they evaluated their teaching practice and discussed further improvements for their next teaching (Fitri Budi Suryani et al., 2017). The research supporting this study indicates that, although shortcomings in Learning Practices were still found at the end of the research cycle, the performance of Learning Practices was better among students taught with Lesson Study than among those not treated with Lesson Study.
The results of the calculation of the application of lesson study in improving the ability to carry out learning practice, conducted over two cycles of 4-5 stages each, showed a very skilled score. This is based on the calculation of observation scores, which were then converted to predetermined categories. The essence of Lesson Study is its ability to form a learning community. Students who join PPL with the Lesson Study model are able to form learning communities that consistently carry out continuous improvements at the individual and group levels and in the more general system. These results are in line with the research of Ciptianingsari Ayu Vitantri (Vitantri & Asriningsih, 2016), which states that lesson study-based Learning Practices are one way to improve the quality of prospective teachers in Indonesia. By providing learning experiences for prospective teachers based on lesson study, it is expected that prospective teachers who have completed their studies will have good competency in science and in teaching (Fitri Budi Suryani, et al., 2016; e.g. Brown et al., 2005; Joshua & Fleming, 2002). The increase in the Learning Practices scores of IAIN Pontianak PAI students through the lesson study model illustrates that students, tutor teachers and supervisors, as learning practices participants, have been able to implement the steps of the lesson study model in accordance with the manual provided. In other words, if the Lesson Study model is applied to Learning Practices consistently, the results of Learning Practices conducted by IAIN Pontianak PAI students will increase. This result is in line with research conducted by Penny Lamb, which stated: "The surveys revealed that 100 per cent of the PSTs felt Lesson Study contributed in a positive way to their professional and pedagogical knowledge development" (Lamb, 2015). The increase in the success of Learning Practices by IAIN Pontianak PAI students conducted through lesson study can be explained by the fact that Learning Practices were carried out through joint observation activities supported by multi-directional reflection. In this way, each student is open to accepting his or her own shortcomings and tries to improve his or her abilities.
The increase in scores in the Learning Practices of the lesson study model illustrates that students, tutor teachers and supervisors, as learning practices participants, have been able to implement the steps of the lesson study model in accordance with the manual provided. This is exactly what Marit Ulvik & Kari Smith (2011) say: "practicums that focus educatively will help prospective teachers to understand the scope of the teacher's role, develop the capacity to learn from future experiences and to achieve the main goals of learning". If the conventional learning practices model is still to be maintained, then, as Dewey suggests, it is very important to have qualified tutors as mentors (Tuli, 2009).
Another result of the study was the positive response delivered by the parties involved in learning practices activities carried out through the lesson study model. Participants' responses to lesson study-based learning practices activities varied, with positive responses (strongly agree or agree) and negative ones (disagree and strongly disagree) to the proposition that lesson study-based learning practices can improve the quality of learning activities and other matters relating to the students' learning practice activities. This finding is in line with a study by Frederik Voetmann Christiansen (Christiansen, Klinke, & Nielsen, 2007), who stated: "By changing the focus from the individual teacher to the group of teachers, lesson studies have the potential to contribute to the shared knowledge base of teachers and lead to less vulnerable educational systems". In other words, by changing the focus from individual teachers to groups of teachers, Lesson Study has the potential to improve sharing among teachers so that shortcomings in the education system can be covered.
The finding that the lesson study model can improve the positive response of PAI IAIN Pontianak students is also in line with the findings of research conducted by Tracy C. Rock & Cathy Wilson and by Mary T. McMahon and Ellen Hines. Rock & Wilson (2005) stated that "Participants also indicated that they experienced increased confidence in approaching instruction as a result of engaging in the lesson study experience" (that is, the teachers who participated in lesson study research also showed more confidence in implementing the learning approach). In other words, the lesson study model turned out to be able to grow the confidence of prospective teachers.
The increasingly positive attitude in Learning Practices through the implementation of the lesson study model is in line with the findings of Mary T. McMahon (McMahon & Hines, 2008), who stated: "After the preservice teachers' participation in the lesson study experience, their responses to this question showed greater consideration of collaboration as an option for improving teaching and learning. They indicated that they would be more likely to seek the advice of a colleague about handling the situation or even requesting peer observation in the classroom. These responses suggest that the preservice teachers valued the collaboration from their lesson study experience."
The results of this study also revealed evidence that the response of learning practices participants to lesson study activities was positive, in the sense that it could improve the quality of the students' learning activities. This is in line with the study conducted by C. Hart et al. (2011) in mathematics learning, which revealed good ability among teachers in developing students' ability to respond to teacher questions; with (Towaf, 2016), who found that LS is a way to improve the field experience practice of English students; and with (Hollingsworth & Oliver, 2005), which revealed that LS was able to bridge the gap between teachers' understanding of material and pedagogical problems and their practice in the classroom.
The results of this study are also in line with the thoughts of Tracy C. Rock & Cathy Wilson (Rock & Wilson, 2005), who explain that LS can build teacher professionalism in learning, which is very likely to inspire students to achieve good learning outcomes.
Conclusions and Suggestions
In general, it can be concluded that learning practices implementing lesson study can improve the skills and teaching abilities of prospective PAI teachers at IAIN Pontianak. Because it was still new, before the study the students, tutor teachers and supervisors were first introduced to lesson study. Another impact of this research is the growing learning community in the location schools. Although at the beginning the community included only those who participated, it is possible that this community will expand to all school members.
Another finding was the presence of positive responses from stakeholders, not only because of the novelty but also because it fosters their motivation to progress together. Since only 13 students were involved in this study, it is suggested that further research involve more participants; thus, the ecological validity of this research can be improved. One of the drawbacks of this study was that no control group was provided. It is recommended that further research develop experimental designs able to compare quantitatively between students in lesson study Learning Practices and conventional students. Such research is needed to improve the internal validity of the results of this study.
Influence of different organic sources of nutrients on growth and flowering behaviour of pomegranate (Punica granatum L.) cv. Bhagwa
Rakesh Kumar Jat* Department of Fruit Science, College of Horticulture, Sardarkrushinagar Dantiwada Agricultural University, Jagudan, Mehsana-384460 (Gujarat), India Pankajkumar C. Joshi Department of Horticulture, C. P. College of Agriculture, Sardarkrushinagar Dantiwada Agricultural University, Sardarkrushinagar, Banaskantha-385506 (Gujarat), India Piyush Verma Department of Horticulture, C. P. College of Agriculture, Sardarkrushinagar Dantiwada Agricultural University, Sardarkrushinagar, Banaskantha-385506 (Gujarat), India Mohan Lal Jat Department of Horticulture, College of Agriculture, Chaudhary Charan Singh Haryana Agricultural University, Hisar-125004 (Haryana), India Vishal R. Wankhade Department of Horticulture, C. P. College of Agriculture, Sardarkrushinagar Dantiwada Agricultural University, Sardarkrushinagar, Banaskantha-385506 (Gujarat), India
(Georgia, Armenia, Azerbaijan, Turkey and Iran) regions, while porphyrocarpa is found in Central Asia. Pomegranate has emerged as an important fruit crop owing to the year-round demand for its fruits, besides its hardiness and ability to withstand adverse soil and climate conditions. It is cultivated on a commercial scale across vast regions of the Indian subcontinent, Iran, the Caucasus and the Mediterranean. India is one of the leading fruit producers in the world: fruit crops are cultivated in India over 6506 thousand ha, with a total production of 97358 thousand metric tonnes and a productivity of 14.96 t/ha (Anonymous, 2018). In India, pomegranate is cultivated over 233.93 thousand hectares, with a production of 2844.52 thousand metric tonnes (Anonymous, 2018). Due to its hardiness, drought tolerance, high productivity and profitability to growers, it is valued for replacing subsistence farming and alleviating poverty, and the acreage under this fruit crop has increased substantially. However, intensive cultivation involving the indiscriminate use of chemical pesticides along with improper nutrient management is deleterious to plant health and to the environment. Due to these practices, the plants also become susceptible to several biotic and abiotic stresses. Such practices are common among pomegranate (P. granatum) growers. Considering the health of plant, soil and environment, a more rational approach using different organic sources of nutrients should be employed to restore depleted soil fertility and enhance the pool of nutrients available to the plants, which could benefit the crop in terms of vegetative growth. Against this background, the present study was planned with the objective of studying the effect of organic sources of nutrients and biofertilizers on the growth and flowering behaviour of P. granatum.
MATERIALS AND METHODS
The field experiment was conducted during Mrig bahar (June-January) of 2017-18 and 2018-19 at the College Farm, College of Horticulture, Sardarkrushinagar Dantiwada Agricultural University, Jagudan, District Mehsana, Gujarat, on two-year-old pomegranate cv. Bhagwa plants, uniform in size and growth, planted at a spacing of 2.5 m x 2.5 m. Jagudan is geographically situated at 23°53' North latitude and 74°43' East longitude, at an altitude of 90.6 metres above mean sea level. The climate of the study area is semi-arid, with extremely cold winters and hot, dry, windy summers. Generally, the monsoon commences in the middle of June and retreats by the middle of September. Most of the precipitation is received from the South-West monsoon, concentrated in the months of July and August. The experiment was laid out in a randomized block design comprising 22 treatments with 3 replications, with 2 plants per replication. The treatments were: T1 - recommended dose of FYM and NPK applied through chemical fertilizers (control); T2 - T1 + Trichoderma viride @ 5 g and Paecilomyces lilacinus @ 5 ml per plant; T3 - 100 % RDN through FYM; T4 - 100 % RDN through vermicompost; T5 - 100 % RDN through poultry manure; T6 - 100 % RDN through neem cake; T7 - 50 % RDN through FYM + 50 % RDN through vermicompost; T8 - 50 % RDN through FYM + 50 % RDN through poultry manure; T9 - 50 % RDN through FYM + 50 % RDN through neem cake; the remaining treatments (T10 to T22) combined these organic sources with the biofertilizers and biopesticides described below. Application of RDN through the different organic manures was given on the basis of plant age as per treatment, computed from the inherent availability of nitrogen for 2017-18 and 2018-19. The recommended dose of manure and chemical fertilizers was applied in the T1 and T2 treatments according to the age of the pomegranate plants, because the recommended dose varies from year to year, as recommended by the National Research Centre on Pomegranate (Table 1; Sharma et al., 2011). For both these treatments, a full dose of FYM and half the dose of N, P and K were applied at the desired leaf-fall stage on 20 June, and the rest of the N, P and K were applied through chemical fertilizers 60 days after the first split, in each year of the experiment. The farm yard manure (FYM), vermicompost, poultry manure and neem cake used in the present experiment were analyzed for N, P and K content (%) by standard methods (Jackson, 1973) before application in the field; the results are given in Table 2. As per treatment, 50 per cent of the RDN was applied in the form of FYM, vermicompost, poultry manure or neem cake at the desired leaf-fall stage on 20 June, and the remaining dose 60 days after the first split, in each year of the experiment. Application of RDN through the different organic sources of nutrients was given on the basis of plant age in treatments T3 to T22 as per treatment. For the application of organic manures, a ring 20 cm deep and 15 cm wide was made around the plant canopy; the manures were uniformly mixed into the ring, which was then leveled. The biofertilizers (Azotobacter culture, phosphate solubilizing bacteria and potash mobilizing bacteria) and biopesticides (Trichoderma viride and Paecilomyces lilacinus) were procured from the Laboratory of the Department of Plant Pathology, Navsari Agricultural University, Navsari, Gujarat, India, and mixed thoroughly with the different organic manures as per treatment before application.
The full dose of biofertilizers and biopesticides was applied at the desired leaf-fall stage on 20 June in each year of the experiment, as per treatment. The plants were moderately pruned during the first fortnight of June in each year of the experimental period. Other cultural practices, such as weeding and plant protection, were carried out as and when required. Irrigation was applied immediately after treatment application. Plants were irrigated daily with a drip irrigation system as per water requirement, except during the rainy season. Primary growth in terms of plant height, plant spread (E-W and N-S) and stem girth was measured before and two months after treatment application, and the incremental primary plant height, plant spread (E-W and N-S) and stem girth were calculated. The flowering behaviour data were recorded in terms of days taken for the commencement of flowering after treatment application, the numbers of hermaphrodite and incomplete flowers (male and intermediate flowers) up to two months after treatment application, and the ratio of hermaphrodite to incomplete flowers. Fruit set, fruit drop and days taken for marketable picking were also calculated. The experiment was conducted in a randomized block design with each treatment replicated thrice. The statistical analysis of the data was carried out as per the method described by Cochran and Cox (1963), and treatment effects were tested at the 5 per cent level of significance.
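For readers who wish to reproduce this kind of analysis, the sketch below shows a randomized-block-design ANOVA of the type described (treatment effects tested against replication blocks at the 5% level). It is an illustration, not the authors' script; the file name and column names are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format table with columns 'treatment' (T1..T22),
# 'block' (replication 1..3) and a response such as 'plant_height'.
df = pd.read_csv("growth_data.csv")

# Randomized block design: response ~ treatment + block, both categorical.
model = ols("plant_height ~ C(treatment) + C(block)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # F-test for treatments at alpha = 0.05
```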
Incremental primary growth parameters
The data presented in Table 3 showed non-significant differences in plant height, plant spread (E-W and N-S) and stem girth before treatment application, which indicates the homogeneity of these growth parameters in the experimental plot. The incremental primary plant height, plant spread (E-W and N-S) and stem girth obtained after two months of treatment application, as influenced by different organic sources of nutrients, are presented in Table 4. It is explicit from the data presented in Table 4 that the incremental growth parameters differed significantly among the treatments, with the maximum values recorded under 100 % RDN through vermicompost + 50 ml PSB + 25 ml KMB + 5 g Trichoderma viride + 5 ml Paecilomyces lilacinus.
Flowering behaviour
It is clear from the data presented in Table 5 that the application of different organic sources of nutrients did not have any significant influence on the days taken for the commencement of flowering after treatment application or on the ratio of hermaphrodite to incomplete flowers up to two months after treatment application. The data on the numbers of hermaphrodite and incomplete flowers up to two months after treatment application, as influenced by different organic sources of nutrients, are also presented in Table 5. The significantly maximum numbers of hermaphrodite flowers (85.17) and incomplete flowers (96.50) up to two months after treatment application were recorded with the treatment 100 % RDN through poultry manure + 50 ml PSB + 25 ml KMB + 5 g Trichoderma viride + 5 ml Paecilomyces lilacinus. Treatment T5 was at par with treatment T6 for the number of incomplete flowers up to two months after treatment application. The minimum numbers of hermaphrodite flowers (66.67) and incomplete flowers (78.67) were recorded with the treatment 75 % RDN through FYM + 50 ml Azotobacter culture + 50 ml PSB + 25 ml KMB + 5 g Trichoderma viride + 5 ml Paecilomyces lilacinus. The highest numbers of hermaphrodite and incomplete flowers in pomegranate might be due to the maximum availability of phosphorus supplied from poultry manure along with PSB, which is involved in stimulating and enhancing bud development and the blooming of more flowers. This could be attributed to the capability of soil microorganisms to produce growth regulators such as auxins, cytokinins and gibberellins, which have a positive influence on the flowering process and nutrient uptake; a similar effect was recorded by Hassan et al. (2015), who noted that brinjal plants grown under an organic regime (FYM @ 25 t/ha, poultry manure @ 5 t/ha and mushroom waste @ 10 t/ha) took the fewest days to harvest compared with plants grown under an inorganic regime.
Conclusion
On the basis of the pooled data, it can be concluded that the application of 100 % RDN through vermicompost + 50 ml PSB + 25 ml KMB + 5 g Trichoderma viride + 5 ml Paecilomyces lilacinus was significantly (at the 5% level) better for obtaining maximum growth parameters, i.e. incremental plant height, plant spread (E-W and N-S) and stem girth of Punica granatum. The maximum numbers of hermaphrodite and incomplete flowers up to two months after treatment application, as well as fruit set, along with minimum fruit drop and days taken for marketable picking, were recorded with the application of 100 % RDN through poultry manure + 50 ml PSB + 25 ml KMB + 5 g Trichoderma viride + 5 ml Paecilomyces lilacinus in pomegranate cv. Bhagwa. Thus, the organic sources vermicompost and poultry manure, together with biofertilizers and biopesticides, were found very effective for enhancing the vegetative growth and flowering behaviour of pomegranate.
Thermodynamic properties of liquid Sc-Al alloys: model calculations and experimental data
Thermodynamic information available for the binary scandium-aluminium system has been obtained earlier by e.m.f. and calorimetry methods and also by semi-empirical approaches described in the literature. In this work these data were reconsidered for all intermetallides in the Sc-Al system. To calculate the excess thermodynamic functions of liquid Sc-Al alloys, a special computer program package has been used. The enthalpies of mixing of liquid scandium and aluminium were found over the whole composition range of the phase diagram at temperatures of 1873, 1973 and 2073 K.
Introduction
The thermodynamic properties of the scandium-aluminium system were investigated by different authors [1-5]. The thermochemistry of solid Sc-Al alloys has been studied mainly in the papers [2-4]. In a number of cases different researchers present data with significant disagreements. The liquid alloy properties systematized in [5] also show discrepancies between different measurement methods.
The circumstances mentioned above caused the need for a critical estimation of the thermochemical characteristics of solid scandium-aluminium alloys. Besides that, modern software packages allow one to calculate (on the basis of the accepted models) the properties of liquid solutions in this binary system.
The aim of the present work is the estimation and mutual coordination of the experimental data for all the intermetallic compounds (IMC) in the Sc-Al system and also the investigation of the liquid solution thermodynamics over the whole composition range on the basis of the ISIP model. Here ISIP stands for "an ideal solution of interaction products".
The software package "Terra", which includes the ISIP algorithm and is used in this study, is the result of the further development of the known program complex "Astra" [6]. This program contains the thermodynamic properties of a few thousand substances in its database (DB).
In order to calculate the thermochemical properties of any system in a wide temperature range, the program uses a fixed set of initial data concerning the components of which the system consists.
Different methods can be used to determine the calculation parameters for the thermodynamic modelling (TM) program: both expert estimation of experimental data and semi-empirical approaches like the Miedema model, which is described in detail, for example, in the monograph [6].
In this study, the thermodynamic properties of pure metallic scandium were taken from [7]. Thermodynamic functions of aluminium were used directly from "Terra" package DB.
Comparing the TM results for the ISIP model [8] with the data calculated using the simple ideal solution approximation, one can find the excess thermodynamic functions of the liquid alloy (enthalpy, entropy and Gibbs energy).
Results and discussion
Four intermetallic compounds are found in the Sc-Al system: ScAl3, ScAl2, ScAl and Sc2Al [3]. Measured integral heats of formation of the IMC in the Sc-Al system, taken from different experimental works, are shown in Table 1. One can see that the data of the study [2] (e.m.f.) are in good agreement with the data of the work [3] (calorimetry) for the IMC ScAl3 and ScAl2. At the same time, the differences in the data for the other intermetallics are significant. The data of [4] (HCl solution calorimetry) also differ considerably for the ScAl3 and ScAl2 compounds. Nevertheless, these data were taken into account.
Many attempts to calculate integral enthalpies and other IMC properties using semi-empirical approaches are known. One of the most successful of them is the Miedema model. Using this model to calculate the concentration dependence of ΔfH^0 requires knowledge of a number of adjustable parameters (the electronegativity parameter, etc.).
The Miedema approach to the calculation of standard formation enthalpies was successfully applied to R-Me alloys in the study [9]. Here R is a rare earth metal or an actinide; Me is a p-metal from the group Al, Ga, In, Tl, Sn, Pb, Sb, Bi. Model parameters found on the basis of the known experimental data allowed these data to be described with a confidence interval of approximately ±13 kJ/(mol·at.) for any point. According to [9], this prediction accuracy is considerably better than the imprecision of the experimental ΔfH^0 determination. Nevertheless, taking into account the large systematic differences between the results of different scientific groups, the model prediction may be very useful for estimating unknown values or for analyzing strongly differing experimental data.
The assessed values of the enthalpies of formation of the intermetallides have been calculated by the following procedure. Initially we found the squared deviations of the experimental points from the theoretical curve (calculated using the adapted Miedema model [9]). Then we assigned a statistical weight to each experimental value, inversely proportional to the squared deviation mentioned. The weights for each intermetallide were, of course, normalized so that their sum equals 1. The assessed value of the enthalpy was then calculated by averaging the experimental values with the corresponding weights.
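A minimal sketch of this weighting scheme follows (the variable names are ours, and the numbers in the example call are illustrative, not the assessed data):

```python
import numpy as np

def assessed_enthalpy(h_exp, h_model):
    """Weighted average of experimental Delta_f H values for one intermetallide.

    h_exp   : experimental enthalpies of formation [kJ/(mol at.)]
    h_model : adapted-Miedema-model prediction at the same composition
    """
    d2 = (np.asarray(h_exp, dtype=float) - h_model) ** 2
    w = 1.0 / np.maximum(d2, 1e-12)   # weights ~ 1 / (squared deviation)
    w /= w.sum()                      # normalize so the weights sum to 1
    return float(np.dot(w, h_exp))

# e.g. three hypothetical literature values against a model value of -47:
print(assessed_enthalpy([-45.0, -52.0, -48.0], -47.0))
```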
It can be shown that the assessed values of the heats of formation are close to those calculated by the Miedema model adapted for the given alloy group [9].
The other thermodynamic characteristics required to obtain the properties of Sc-Al liquid alloys were calculated using semi-empirical equations described in the monograph [6].
Besides the enthalpies of formation, further data were estimated for each Sc-Al intermetallide, including the heat capacity, which can be defined in the specified temperature range as the polynomial Cp(T) = a + by + cy^2 + dy^3 + e×10^5 T^-2, where y = T×10^-3.
Also estimated was the heat capacity at temperatures above the melting point. To describe the real composition and thermodynamic properties of liquid Sc-Al melts, we used the ISIP model mentioned above [6]. The modelling was carried out in an inert atmosphere (Ar) at a pressure of 0.1 MPa over the whole composition range of the phase diagram, in the temperature interval 1873-2073 K.
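As a brief aside before the modelling setup, the heat-capacity polynomial quoted above can be evaluated directly; the coefficient values in the example call below are placeholders, not the assessed data.

```python
def cp(T, a, b, c, d, e):
    """Cp(T) = a + b*y + c*y**2 + d*y**3 + e*1e5/T**2, with y = T*1e-3 (T in kelvin)."""
    y = T * 1e-3
    return a + b * y + c * y**2 + d * y**3 + e * 1e5 / T**2

print(cp(1000.0, a=25.0, b=5.0, c=0.0, d=0.0, e=-1.0))  # illustrative coefficients
```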
The thermodynamic functions of the following substances were taken into account: gaseous Al, Al2, Sc, Ar and condensed Sc, Al, ScAl3, ScAl2, ScAl, Sc2Al. The composition of the "reference" ideal solution included only Sc and Al. According to the ISIP model, the compositions of the associates present in the solution coincide with the compositions of the real intermetallic compounds in the system. Therefore, the "real" solution, besides Sc and Al, included the associates ScAl3, ScAl2, ScAl and Sc2Al.
The results of the molar composition calculation for liquid Sc-Al alloys at 1873 K are presented in Figure 1. It can be seen that the maximum concentrations of the ScmAln associates approximately correspond to the compositions of the respective intermetallides. The excess thermodynamic functions (see Table 2) can easily be found as the difference between the enthalpy (entropy, Gibbs energy) calculated with the ISIP model and the same function determined using the ideal solution model.
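In code, this bookkeeping amounts to subtracting the ideal-solution mixing functions from the ISIP-model values on the same composition and temperature grid; a minimal sketch with our variable names follows (for the enthalpy the ideal mixing term is zero, so the excess enthalpy equals the ISIP mixing enthalpy).

```python
import numpy as np

R = 8.314e-3   # gas constant [kJ/(mol K)]

def ideal_mixing(x, T):
    """Ideal-solution mixing functions of a binary melt at mole fraction x of Sc."""
    s_id = -R * (x * np.log(x) + (1.0 - x) * np.log(1.0 - x))  # [kJ/(mol K)]
    h_id = np.zeros_like(s_id)                                 # ideal mixing enthalpy is zero
    return h_id, s_id, h_id - T * s_id                         # H, S, G of mixing

x = np.linspace(0.05, 0.95, 19)
H_id, S_id, G_id = ideal_mixing(x, 1873.0)
# Excess functions: H_ex = H_isip - H_id, S_ex = S_isip - S_id, G_ex = G_isip - G_id,
# where the *_isip arrays would come from the "Terra" ISIP run on the same grid.
```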
Integral enthalpies of mixing (ΔHmix) for liquid Sc-Al alloys are shown in Figure 2. One can see that there is a significant difference between the calculated and experimental values; the deviation is about 5-12 kJ/(mol·at.) for compositions with 10-40 at.% Sc. According to [5], the data of the paper
Exploring Injury Prevention Strategies for Futsal Players: A Systematic Review
Futsal carries a high risk of injury for players. This systematic review aimed to assess the existing literature on injury prevention strategies for futsal players. The literature was searched using PubMed, Web of Science, and Scopus databases from inception to 20 March 2024. Relevant articles were searched using the terms “futsal” AND “injury” AND “prevention”. Fourteen studies were included in the review. The review identified several injury prevention strategies with potential benefits for futsal players. Structured warm-up routines were shown to improve balance and eccentric strength and to reduce total, acute, and lower limb injuries. Proprioceptive training methods were suggested to improve joint stability and landing mechanics, which may reduce the risk of injury. Furthermore, multicomponent methods that include components such as core stability and flexibility have shown potential for reducing injury rates in futsal players. Finally, by reducing fatigue and improving movement control, strength training procedures designed to correct muscular imbalances may improve performance, which may ultimately minimize the risk of injury. This systematic review demonstrates the potential benefits of different injury prevention strategies for futsal players. The combination of several strategies, such as proprioceptive training, multicomponent programs, warm-up routines, and strength training specifically designed to address muscular imbalances, appears promising.
Introduction
Futsal, a fast-paced and dynamic variant of football played indoors on a hard surface, has become very popular worldwide [1]. Its unique characteristics [2,3], including smaller playing areas, an emphasis on tight ball control, frequent rapid changes of direction, and the potential for collisions on a hard surface, contribute to a physically demanding sport that exposes players to a high risk of injury [4]. Studies have reported high injury rates in futsal, comparable to outdoor football, with ankle sprains, muscle strains, and knee injuries being particularly common [5,6].
These injuries have a significant negative impact, not only on individual players but also on team success and long-term health [7]. When a player is sidelined due to injury, his performance is obviously compromised, potentially creating gaps in the team's strategy and overall success [8]. This can be particularly detrimental in a fast-paced sport such as futsal, where individual skill and coordinated team movement are paramount [9]. Furthermore, recurrent injuries can lead to long-term health problems for players, potentially forcing them into early retirement or reducing their quality of life even after their playing days are over [10].
Several studies have been conducted on the prevalence of injuries in futsal [11][12][13].For example, Junge and Dvorak examined player injuries over three consecutive World Cups [14].Their study used a well-established injury reporting system in which team physicians reported all injuries on standardized forms after each match.The 93% response rate verified the completeness of the data [14].The study found an alarming injury rate, with 165 injuries reported from only 127 matches.This equates to an injury rate of 195.6 per 1000 player hours, or 130.4 per 1000 matches.Notably, the majority of injuries (70%) occurred in the lower extremities, with contact with another player being the most common cause.The most common diagnoses were lower-leg contusion (11%), ankle sprain (10%), and groin strain (8%) [14].
Although injury prevention strategies have been studied in football [15], the literature lacks an in-depth analysis of futsal. This is concerning given the high injury rates recorded in futsal and its unique playing environment compared to outdoor football. Systematic research focused solely on futsal injury prevention strategies is essential to determine the most effective methods for protecting players and guiding coaches.
Effective injury prevention methods are critical for players of all skill levels. By implementing specific training programs, futsal players can reduce their risk of injury and maintain peak performance throughout the season [16,17]. Consequently, prioritizing injury prevention through well-designed training programs and proper recovery techniques is essential to optimize team performance, protect players' physical well-being, and ensure the long-term sustainability of their futsal careers [18].
Therefore, the aim of this systematic review was to assess the existing literature on injury prevention strategies in futsal players. This will allow the identification of injury prevention programs (e.g., warm-up routines, strength training programs) and intrinsic factors related to injury that have been described in the literature. It will also allow the establishment of some recommendations for coaches and physical fitness trainers that could help to reduce the overall number of injuries in futsal players.
Literature Search and Article Selection
This systematic review was carried out according to the recommendations and criteria of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [19]. The study protocol was registered in PROSPERO under the code CRD42024526674.
A systematic search of the Web of Science, PubMed, and Scopus databases was performed up to 20 March 2024. Relevant articles were identified using the terms "futsal" AND "injury" AND "prevention". Two authors (JO and TS) independently screened the articles in two stages: first the titles and abstracts, and then the full text of each article, to assess eligibility for inclusion. Disagreements between the two reviewers were resolved by discussion; when necessary, a third reviewer (JM) was consulted to reach a consensus.
Inclusion and Exclusion Criteria
The eligibility criteria were structured according to the PI(E)COS framework (P: population; I(E): intervention or independent variable (or exposure); C: comparison, control, or comparator; O: outcomes; S: study design). The population (P) consisted of futsal players of any age or gender. The intervention (I) included studies that used injury prevention training programs or prevention protocols as an intervention method to prevent injuries in futsal players; training programs could include any type of training, such as strength training, proprioceptive training, and multicomponent training (balance, core stability, and functional strength and mobility). Comparators (C) were defined as studies with a control group for comparison. The outcomes (O) analyzed were those that in some way assessed the prevalence of injury in futsal players before and after the exercise intervention program. Finally, the included study designs (S) ranged from cross-sectional, longitudinal, experimental, exploratory, and descriptive studies to randomized controlled trials and crossover designs. Lecture summaries and review articles were excluded.
Exclusion criteria were (1) protocol studies (i.e., those that only provide a detailed description of the study hypothesis, rationale, and methodology, but not the study results) and (2) grey literature, websites, and Google Scholar results.
Data Extraction
Two authors (JO and TS) independently extracted the characteristics and results of the interventions in each included publication, according to the PRISMA statement [19]. From each study, the first author, year of publication, information on the characteristics of the participants (sample size, sex, mean age, and standard deviation), duration of the intervention, frequency of the training program, the instrument used to assess the injury, and the main outcomes were extracted.
Risk of Bias and Quality Evaluation
The Downs and Black Quality Assessment Checklist was used to assess the quality of each article [20]. The original version has 27 items with a maximum score of 32 points. Adjustments were made to the original version according to the focus of the included studies and previously modified versions. For example, items 4, 8, 9, 14, 15, 17, 19, and 22 to 26 were excluded if not applicable to the study design (i.e., cross-sectional study), and the last item was scored as "yes" (1 point) or "no" (0 points) instead of five points as described by others [21,22], resulting in a maximum score of 17 points. Quality was classified as follows: (i) low, if the score was ≤50%; (ii) good, if the score was between 51% and 75%; and (iii) excellent, if the score was >75% [23]. Agreement between two independent reviewers was calculated using Cohen's Kappa coefficient and interpreted as follows: (i) no agreement, if K < 0; (ii) poor agreement, if 0 < K < 0.19; (iii) fair agreement, if 0.20 < K < 0.39; (iv) moderate agreement, if 0.40 < K < 0.59; (v) substantial agreement, if 0.60 < K < 0.79; and (vi) near perfect agreement, if 0.80 < K < 1.00 [24].
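As a minimal sketch of the two classification rules just described, the following Python helpers map a modified Downs and Black score to a quality label and a Cohen's Kappa value to an agreement band. The function names are illustrative, and the small gaps in the published bands (e.g., between 0.19 and 0.20) are closed with half-open intervals.

```python
def quality_label(score: float, max_score: float = 17.0) -> str:
    """Classify study quality from a modified Downs and Black score.

    Thresholds follow the text: low <= 50%, good 51-75%, excellent > 75%.
    """
    pct = 100.0 * score / max_score
    if pct <= 50.0:
        return "low"
    if pct <= 75.0:
        return "good"
    return "excellent"


def kappa_label(k: float) -> str:
    """Map Cohen's Kappa to the agreement bands cited above [24]."""
    if k < 0:
        return "no agreement"
    if k < 0.20:
        return "poor"
    if k < 0.40:
        return "fair"
    if k < 0.60:
        return "moderate"
    if k < 0.80:
        return "substantial"
    return "near perfect"


print(quality_label(15))   # 88.2% -> "excellent"
print(kappa_label(0.97))   # -> "near perfect"
```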
Search and Selection of Publications
The search of the PubMed, Web of Science, and Scopus databases yielded 115 records. Sixty-two duplicates and fourteen review articles were removed. The remaining 39 full-text articles were read and assessed for eligibility, and 25 studies that did not meet the eligibility criteria were excluded. Thus, 14 articles [25-38] that met the criteria and objectives of this systematic review were included, as shown in Figure 1, which depicts the PRISMA flowchart for identification, screening, and eligibility.
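The flow counts above can be verified with a few lines of arithmetic; this sketch is only a sanity check of the reported numbers.

```python
# PRISMA flow as reported: identification -> screening -> eligibility.
identified = 115
duplicates_removed = 62
reviews_removed = 14
full_text_assessed = identified - duplicates_removed - reviews_removed
excluded_at_full_text = 25
included = full_text_assessed - excluded_at_full_text

assert full_text_assessed == 39  # matches the 39 full-text articles
assert included == 14            # matches the 14 included studies
```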
Quality and Risk of Bias of Individual Studies
The articles included in the final review stage had an average score of 17.29 ± 1.98 points (92.86%, rated as good quality). The main reason why some articles did not achieve a higher score was the lack of information on the statistical power calculation. The average quality score of the 14 articles was 90.98 ± 10.41% (74-100%) based on the scoring using our modified Downs and Black Quality Assessment Checklist. The most common quality issues were failure to report subject characteristics, confounding variables, and true probability values (e.g., 0.035 instead of <0.05). Agreement between evaluators was almost perfect (K = 0.97, p < 0.001, 95% confidence interval: 0.93-1.01).
Characteristics of the Included Studies
Table 1 presents key details from the studies included in the systematic review, focusing on the characteristics of each study, including the name of the first author, the program/protocol selected, its duration and frequency, the total sample, and the age group. The sample of all studies consisted of players ranging in age from U13 to senior players. Interventions ranged in frequency from 1 to 3 times per week and in duration from 5 min to an entire season, depending on the program/protocol used. The main injury prevention practices included warm-up protocols and strength, neuromuscular, flexibility, stability-oriented, and Nordic hamstring exercises. Workload management was based on an RPE scale before and after sessions, and other assessments using other equipment/methods were also applied.
Data Organization
For the purposes of this review, the studies were grouped into four broad areas according to the type of physical exercise used in the intervention program. These areas included (i) warm-up protocols (7 studies), (ii) proprioceptive training (3 studies), (iii) multicomponent programs (2 studies), and (iv) strength training (2 studies). The statistically significant main results and conclusions of the studies are presented in Table 2.
Summary of the main results and conclusions (Table 2, recovered entries):
- Pilates intervention [25]: ↑ flexibility immediately post-intervention, with a non-significant decrease after 15 days; ↓ risk of injury triggered by the decrease in muscular length.
- Gómez et al. [26]: The HIIT + NC group and the HIIT group showed a significant improvement in intermittent work performance after the intervention (p = 0.04 and p = 0.01, respectively). ↑ intermittent work performance in both the HIIT and HIIT + NC groups.
- Hamoongard et al. [27]: A significant improvement was noted in the IG compared to the CG for dynamic knee valgus at IC (p = 0.02). ↑ landing mechanics in players with knee ligament dominance defects.
- Jebavy et al. [28]: The IG significantly improved the intra-abdominal pressure test (p = 0.004), trunk flexion (p = 0.036), and side plank (p = 0.002) in post-test results. ↑ activation of DSS functions, which should be prioritized over traditional strength exercises in injury prevention training programs; the use of the DSS might prevent injury and overloading in elite futsal players.
- Klich et al. [29]: A decrease in FFD under the rearfoot (p ≤ 0.001) and forefoot (p ≤ 0.001) on the right and left sides; an increase in plantar PPT in all regions of the foot (p ≤ 0.001). Fascial taping can be an effective method for normalizing the FFD and reducing the PPT; the findings provide useful information regarding the prevention of, and physical therapy for, lower extremity injuries in soccer and futsal.
- Lopes et al. [30]: The IG showed higher training exposure and lower BMI and BW. Performing the FIFA 11+ for 10 weeks did not improve static and dynamic balance or proprioception in amateur futsal players.
- Lopes et al. [31]: In the long term, significant gains were obtained, after adjustment for baseline differences, in eccentric strength for both lower limbs and in the H:Q ratios for the dominant limb. ↑ long-term benefits in eccentric strength and in conventional and functional H:Q knee ratios; ↓ injuries in amateur futsal players.
- ↓ overall, acute, and lower limb injuries in amateur futsal players during the season.
- Lorente et al. [33]: The incidence of injuries was significantly lower (p < 0.05) among players with fewer warning signs (RPE of 6); in months with a higher training volume, warning signs were effective in reducing the number of injuries sustained. ↓ risk of injury when the coach was able to adjust training loads based on the players' "warning signs".
- Machado et al. [34]: A significant (p < 0.01) time × muscle group interaction was observed, with significant reductions (p < 0.01) in KF and KE performance for all parameters measured and a higher percentage decrease for KF than KE; significant reductions (p < 0.01) in the H:Q ratio were observed for work, average power, and peak power, but not for peak torque. The high-speed isokinetic fatigue protocol induced a performance decrement in both KF and KE, with KF showing superior reductions; the H:Q ratio calculated from work, average power, and peak power decreased, in contrast with the H:Q derived from peak torque, and peak torque measures exhibited less performance decrement than the other assessments.
- ↑ proprioceptive precision; ↑ precision and ↓ training load effects. The effects of the program may persist after it ends, although it may not sufficiently improve proprioceptive acuity and maximum vertical jump.
- The FIFA 11+ can be used as an effective conditioning means for improving the physical fitness and technical performance of youth futsal players, potentially enhancing performance and technical skills and reducing injury risk when completed as a warm-up routine.
- A futsal-specific warm-up can lower the incidence of contact injuries in amateur players.
- With high adherence, the occurrence of all injuries, including LE injuries, may decrease.
Warm-Up Protocols
Several studies included in this review investigated the usefulness of warm-up protocols in preventing injuries in futsal players. FIFA 11+ is a well-known program that was investigated [30-32,37,38]. Research on the effects of FIFA 11+ on balance and proprioception produced mixed results. While some studies reported no significant change in static or dynamic balance or proprioception after 10 weeks of the program [30], others suggested long-term improvements in eccentric strength and balance [31]. However, several studies have shown that the FIFA 11+ program has a positive effect on injury reduction. Two studies found that players who performed the FIFA 11+ had a significantly lower risk of total, acute, and lower limb injuries over the course of the season compared to a control group [32,38]. In addition, the program can be a useful conditioning technique for improving the physical fitness and technical skills of young futsal players [37]. Futsal-specific warm-up routines are also promising. One study examined a multi-station exercise program as the final component of the warm-up. Compared to a control group, the effects of training load were reduced, but proprioceptive accuracy was increased [36]. This study also suggests that strict adherence to a systematic warm-up program may be essential to maximize injury reduction effects [36].
Proprioceptive Training
Proprioceptive training may have a positive effect on injury prevention in futsal. One study found that landing mechanics improved after a specific proprioceptive training intervention [27]. The authors noted that improved awareness of joint position during the landing after jumping could potentially reduce the risk of lower extremity injury. Although not directly related to exercise interventions, studies of approaches to improve proprioception are also relevant. Low-dye taping is one such technique that has been shown to provide sensory feedback while also improving body awareness [29]. Although the study itself does not include training activities, the likely mechanism of improved proprioception warrants its inclusion here, as it is consistent with the general purpose of this category. A study with a broader focus on injury prevention, such as workload management, also included a proprioceptive training component, albeit a minimal one. For example, a multicomponent program that included proprioceptive and neuromuscular training (only 2% and 3% of the total program duration, respectively) was examined and found to reduce overall injuries [35]. This suggests that even a small amount of proprioceptive training may be useful when combined with other injury-prevention interventions.
Multicomponent Programs
Combining a variety of exercise strategies, as in multicomponent training programs, has been investigated as an option for injury prevention in futsal players. Elements commonly included in such programs are strength training, flexibility exercises, balance training, and core stability work. A study using the Pilates method with two different protocols found that flexibility improved immediately after the intervention [25]. Although the flexibility gains were not significantly reduced after 15 days, it can be suggested that the Pilates method could be beneficial for improving range of motion and potentially reducing the risk of injury. Another article compared a stability-focused intervention with a traditional strength program and found significant improvements in core strength and trunk control in the stability group [28]. Increased core stability such as this may help players maintain appropriate posture and movement patterns during games, thereby reducing the risk of injury.
Strength Training
(1) Strength training programs in general
One study looked at the effects of an HIIT program combined with NC exercise [26]. Although the authors did not find significant increases in isometric strength, the program resulted in significant improvements in intermittent work performance, a key aspect of futsal. Interestingly, there was a tendency toward greater strength gains when longer training durations or different program designs were applied. Thus, the authors suggested that strength training programs, such as HIIT-only or HIIT combined with Nordic hamstring exercises, could improve futsal performance and potentially reduce the risk of injury.
(2) Strength training based on isokinetic assessment
Although an isokinetic dynamometer assessment is not directly a strength training program, isokinetic assessments can be valuable tools for designing targeted programs. One study used an isokinetic dynamometer to assess the performance of the knee flexor (KF) and extensor (KE) muscles in futsal players, applying a high-speed fatigue protocol [34]. The main findings suggested that this procedure resulted in a significant decrease in the performance of both the KF and KE muscles, with the KF showing the greater decrease. In addition, the hamstring-to-quadriceps (H:Q) ratio, which measures muscle balance, decreased for all parameters except peak torque. The authors emphasized the importance of muscle balance when designing strength training programs. In this sense, isokinetic testing can provide useful information and help guide specific training programs to address this aspect, potentially reducing the risk of injury.
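A conventional H:Q ratio is simply the knee flexor (hamstring) value divided by the knee extensor (quadriceps) value for a given parameter. The sketch below, with invented pre- and post-fatigue values, illustrates how the ratio can fall for work while staying nearly unchanged for peak torque, in line with the pattern reported above; the numbers are hypothetical.

```python
def hq_ratio(hamstring: float, quadriceps: float) -> float:
    """Conventional hamstring-to-quadriceps ratio for one isokinetic
    parameter (peak torque, work, average power, or peak power)."""
    return hamstring / quadriceps

# Hypothetical (hamstring, quadriceps) values before and after fatigue:
pre = {"peak_torque": (95.0, 180.0), "work": (110.0, 190.0)}
post = {"peak_torque": (88.0, 170.0), "work": (78.0, 150.0)}

for param in pre:
    print(f"{param}: H:Q pre = {hq_ratio(*pre[param]):.2f}, "
          f"post = {hq_ratio(*post[param]):.2f}")
```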
Discussion
The purpose of this study was to review the existing literature on injury prevention strategies for futsal players. The discussion is organized into the same four broad areas used in Section 3: (i) warm-up protocols; (ii) proprioceptive training; (iii) multicomponent programs; and (iv) strength training.
Warm-Up Protocols
With regard to warm-up, the results provided strong evidence for the value of the FIFA 11+ program in reducing acute, total, and lower limb injuries in futsal players [32,37,38]. This program appears to be well adapted to the needs of futsal despite being primarily designed for soccer. Research has suggested that the program provides sustained improvements in eccentric strength and balance, which are critical for maintaining stability and control during intense movements on the hard court [31]. However, one study did not find significant improvements in static and dynamic balance or proprioception following a 10-week FIFA 11+ intervention [30]. In contrast, another study reported significant improvements in quadriceps and hamstring strength, jumping performance, agility, and balance (fewer falls) following the program in youth futsal players [37]. These conflicting findings may be due to differences in sample characteristics (e.g., age, experience level), fidelity of program implementation, or the outcome measures used. On the other hand, one study examined a multi-station program, reported improvements in proprioceptive accuracy after the intervention, and concluded that the effects of the program may persist after completion, although it may not sufficiently improve proprioceptive acuity and maximum vertical jump [36]. This suggests that such programs may provide additional benefits beyond those observed with the FIFA 11+. It is important to consider these potential limitations when evaluating the usefulness of multi-station protocols for reducing injury risk. Future research could compare the effects of different warm-up protocols, including multi-station programs alongside the FIFA 11+, to determine the most effective strategies for improving both physical performance and injury prevention in futsal players. Future studies should also investigate potential modifications to the FIFA 11+ program to maximize its usefulness for futsal players. For example, the addition of drills or exercises unique to futsal that replicate the rapid directional changes and close ball handling of the activity could increase its benefits in injury prevention. It would also be beneficial to investigate the long-term implementation and adherence rates of the FIFA 11+ program within futsal training regimes. In addition to structured warm-up routines, monitoring training load during training sessions may be another valuable strategy for injury prevention. One study investigated the use of the RPE scale before and after training sessions and found that players with lower perceived exertion levels reported fewer injuries [33]. This suggests that coaches can use tools such as the RPE scale alongside warm-up routines to manage training intensity and potentially reduce the risk of overtraining injuries. However, it is important to note that the RPE scale relies on subjective perception, and its results could be influenced by factors such as player honesty and fatigue levels. Future research could explore methods to improve the accuracy and objectivity of RPE-based training load monitoring. Understanding the feasibility and sustainability of these methods in real-world settings will facilitate the development of practical recommendations for coaches and teams.
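The reviewed study tracked pre- and post-session RPE "warning signs" rather than a specific load formula; a commonly used, related quantification is the session-RPE load (RPE multiplied by session duration in minutes). The sketch below combines both ideas; treating an RPE above 6 as a warning sign, and the weekly values themselves, are illustrative assumptions.

```python
def session_load(rpe: float, duration_min: float) -> float:
    """Session-RPE training load: RPE x duration (arbitrary units)."""
    return rpe * duration_min


def warning_flag(rpe_post: float, threshold: float = 6.0) -> bool:
    """Flag a session whose post-session RPE exceeds a threshold.

    Using 6 as the cut-off mirrors the 'RPE of 6' warning sign mentioned
    above, but the exact rule is our assumption for illustration.
    """
    return rpe_post > threshold


week = [(7, 90), (5, 60), (8, 75)]  # hypothetical (RPE, minutes) pairs
print(sum(session_load(r, d) for r, d in week))  # weekly load
print([warning_flag(r) for r, _ in week])        # [True, False, True]
```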
Proprioceptive Training
Proprioceptive training strategies have been shown in the included studies to improve landing mechanics and possibly reduce the risk of lower extremity injuries, which are common in futsal [27]. By increasing body awareness and sensory feedback, methods such as low-dye taping may also be helpful, potentially leading to an improvement in joint stability during movement [29]. Notably, in multicomponent programs, even small amounts of proprioceptive training appeared to help when combined with other injury prevention strategies [35]. This finding suggests a beneficial effect when different approaches are implemented together. Future research could investigate the most appropriate volume and specific forms of proprioceptive training that work best for futsal players. It would also be beneficial to investigate how proprioceptive training affects injury rates and player performance over a longer period. To encourage widespread use in futsal programs, research into time- and cost-effective proprioceptive training methods that could be incorporated into current training programs would also be helpful. As in football, ankle sprains are common in futsal due to the rapid changes in direction and the possibility of falling on a hard surface [39]. Studies in football have shown that proprioceptive training is effective in reducing the risk of ankle sprains and improving postural control [40-42]. Specific exercises used in football, such as wobble boards, single-leg balance training, or training with ankle disks, balance boards, and tilt boards, should be investigated and possibly modified for futsal injury prevention programs [43,44]. Proprioceptive training, which includes exercises that challenge balance and promote joint awareness, has the potential to prevent ankle sprains and improve movement control in futsal players, thereby reducing the risk of injury.
Multicomponent Training Programs
Research on multicomponent training programs that incorporate a variety of techniques, such as core stability exercises, flexibility training, balance training, and strength training, shows promise in reducing injuries in futsal players [25,28]. For example, the Pilates technique can increase range of motion, which may reduce the risk of muscle sprains and tears [25]. Similarly, stability training can improve trunk control and core strength, which can improve posture and potentially reduce the risk of injuries caused by imbalances or incorrect movement patterns [28]. Future research may investigate the ideal design and duration of multicomponent training programs for futsal players. In order to maximize their benefits, it would be helpful to examine the specific elements of these programs that make the greatest contribution to injury prevention. Additionally, research into the long-term sustainability of these programs within training schedules and their impact on player performance would be informative. The usefulness of multicomponent programs to reduce overall injuries and muscular strain is often reported in the existing soccer literature [45], and may be applicable to futsal due to its similarities. As in football, futsal players can benefit from core strengthening exercises and programs that increase flexibility, as they improve core stability and movement control and reduce the risk of injury [46,47]. Core stability is essential for the effective transfer of force between the upper and lower body during basic futsal movements such as jumping, landing, and abrupt changes of direction [46]. For example, in the study by Owen et al. (2013) [45], a multicomponent program that included core stability, balance, and plyometric exercises performed twice a week for the duration of the season resulted in a significantly lower incidence of overall injuries in male professional football players compared to a control group. Given the similarities between the two sports, it is expected that futsal players would achieve similar results. Moreover, flexibility training may increase joint range of motion, facilitate more effective movement, and potentially reduce the incidence of muscle strain [48]. Muscle imbalances between the dominant and non-dominant leg may increase the risk of injury, according to football studies [49]. Futsal players can also benefit from multicomponent programs that include flexibility training to address these asymmetries and improve their overall movement mechanics.
Strength Training
Research suggests that Nordic hamstring exercises and high-intensity interval training (HIIT) are two strength training procedures that can improve intermittent work performance, a crucial component of futsal [26]. Improved performance often correlates with a reduced risk of injury, as athletes experience less fatigue and have better control of movement, although the direct influence on injury reduction requires further research. Furthermore, isokinetic evaluations may be useful tools in the development of focused strength training programs that target muscular imbalances that may be a factor in injuries [34]. Strength training programs can help athletes maintain ideal movement patterns and reduce their risk of injury by correcting imbalances. Future research could investigate the optimal intensity, volume, and frequency of strength training programs for futsal players, considering the specific demands of the sport and the potential for overtraining. It would also be beneficial to investigate how strength training affects injury rates, player performance, and the development of muscle strength and power over a longer period. Strength training is also commonly used in football, particularly in lower limb and core training, to prevent injuries such as muscle strains and ACL tears [45,48], which are also common in futsal [12]. As in football, futsal players may benefit from isokinetic training to identify and correct muscular imbalances, which may reduce their risk of injury [47]. Studies in football emphasize that addressing hamstring weakness or imbalances relative to quadriceps strength is critical to preventing hamstring injuries [49,50]. This is also true for futsal players, and the included studies [26,31,34] took this into account. Further studies could investigate hamstring-specific strengthening routines that can be incorporated into futsal training programs, with larger samples and over longer periods.
Key Findings and Considerations
This systematic review highlights the potential benefits of different injury prevention strategies for futsal players. Implementing a combination of the above approaches, including structured warm-up routines (shown to improve balance and reduce injury risk) [31,32,36-38], proprioceptive training (potentially improving landing mechanics and joint stability) [27], multicomponent programs (combining elements such as flexibility and core stability work to potentially reduce injury) [25,28], and strength training tailored to address muscle imbalances (which may improve performance and indirectly reduce injury risk) [26], appears to be promising. Overall, this review highlights the importance of a multi-pronged approach to injury prevention in futsal. Implementing a combination of these strategies appears to be a promising way to reduce injuries and promote optimal player performance. By prioritizing injury prevention, coaches, trainers, and players can create a safer and more rewarding futsal experience for all involved.
Limitations and Future Directions
As with any systematic review, there are limitations. It is difficult to draw definitive conclusions about specific strategies because of the variability in study designs, interventions, and outcome measures. The primary objective of this review was to compile and synthesize existing evidence rather than to conduct primary research to determine causality. The different age ranges and competitive levels of participants across the studies also contribute to the complexity of drawing firm conclusions. Therefore, while the study provides valuable insights into different injury prevention strategies, the heterogeneity among studies requires caution in interpreting direct cause-effect relationships. This highlights the need for future research with standardized protocols, larger sample sizes with a more diverse age range, and comparisons between different competitive levels; such efforts are essential to strengthen the overall body of evidence in futsal injury prevention. It would also be beneficial to investigate the durability of injury prevention programs within training regimens and their long-term consequences. Research into futsal-specific injury processes would also be essential for developing targeted prevention plans that take into account the inherent demands of the game, such as the smaller pitch, the emphasis on rapid changes of direction, and the potential for collisions on a hard surface.
Conclusions
A multi-pronged approach combining warm-up routines, proprioceptive training, multicomponent programs, and strength training showed promise in reducing injuries in futsal players. However, the variability in study designs, interventions, and participant characteristics makes it difficult to draw definitive conclusions. Future research should prioritize high-quality studies with standardized protocols and larger sample sizes with a more diverse age range to provide more robust evidence. Investigating the long-term effectiveness and durability of these programs within training regimens is essential to developing sustainable injury prevention strategies. Furthermore, research into the long-term effects of these programs and futsal-specific injury strategies would be helpful in developing targeted prevention approaches.
Figure 1. Flowchart of the systematic literature review.
Table 1. Main characteristics of the studies included in the systematic review.
Table 2. Summary of the main results and conclusions of the included studies in the systematic review.
Significant group-by-time interactions were found in AAE, with the CG presenting higher values at Post10wk compared to baseline, while the experimental group exhibited a reduction at Post6wk and Post10wk (p = 0.028). The CG had higher AAE values than the experimental group at Post10wk (p = 0.050, d = 0.8). A main time effect was found in RAE, with the control group showing higher values at Post10wk compared to baseline (p = 0.004, d = 0.7). The IG exhibited lower VAE values compared to the control group at Post10wk (p = 0.039, d = 1.2).
Sensory and rapid instrumental methods as a combined tool for quality control of cooked ham
In this preliminary investigation, different commercial categories of Italian cooked pork hams were characterized using an integrated approach based on both sensory and fast instrumental measurements. For these purposes, Italian products belonging to different categories (cooked ham, "selected" cooked ham and "high quality" cooked ham) were evaluated by sensory descriptive analysis and by the application of rapid tools such as image analysis by an "electronic eye" and a texture analyzer. The panel of trained assessors identified and evaluated 10 sensory descriptors able to define the quality of the products. Statistical analysis highlighted that sensory characteristics related to appearance and texture were the most significant in discriminating samples belonging to the highest (high quality cooked hams) and the lowest (cooked hams) quality of the product, whereas the selected cooked hams showed intermediate characteristics. In particular, high quality samples were characterized, above all, by the highest intensities of pink intensity, typical appearance and cohesiveness and, at the same time, by the lowest intensity of juiciness; standard cooked ham samples showed the lowest intensities of all visual attributes and the highest value of juiciness, whereas the intermediate category (selected cooked ham) was not discriminated from the others. Physical-rheological parameters measured by the electronic eye and texture analyzer were also effective in classifying samples. In particular, the PLS model built with data obtained from the electronic eye showed a satisfactory performance in predicting the pink intensity and presence of fat attributes evaluated during the sensory visual phase. This study can be considered a first application of this combined approach, which could represent a suitable and fast method to verify whether the meat product purchased by the consumer matches its description in terms of compliance with the claimed quality.
Introduction
Cooked pork ham, a meat product made from entire pieces of muscle meat, belongs to the cured cooked meat category, which, after the curing process of the raw muscle meat, always undergoes heat treatment to achieve the desired palatability (Heinz and Hautzinger, 2007).
Cooked pork ham is a very common product consumed worldwide and is the cured meat product most consumed in Italy (ASSICA, 2014), even though it is not included among Protected Geographical Indication (PGI) or Protected Denomination of Origin (PDO) products. However, the Italian market offers a wide variety of cooked hams that are classified in three different commercial categories: cooked ham, "selected" and "high quality" cooked ham (Ministerial Decree, G.U. n. ...). Hyperspectral imaging (HSI) is a promising technology that allows information to be collected about different physico-chemical properties (ElMasry et al., 2012; Iqbal et al., 2013; Iqbal et al., 2014). On the other hand, conventional image analysis also represents a useful tool for the study of meat products' appearance characteristics (Sánchez et al., 2008; Fongaro et al., 2015), especially considering the cost effectiveness, consistency, speed and accuracy provided by its automated application (Brosnan and Sun, 2004). Textural characteristics are also very important for the quality of cooked hams and depend on several factors related to biochemical constituents (water, fat, protein, connective tissue content, etc.), raw meat pH, added non-meat ingredients, chemical reactions (extent of proteolysis and lipolysis prior to cooking) and processing variables such as the extent of heating (Aaslyng, 2002; Toldrá et al., 2010), the cooling treatment used (Desmond et al., 2000), smoke flavourings used and storage time (Martinez et al., 2004).
Another highly appreciated characteristic in this product is represented by its flavor, which is mostly related to processing conditions, brining, and spices added (Toldrá et al., 2010).
Very few studies have investigated cooked ham and its physical and chemical properties in relation to the sensory profile in order to characterize the product, evaluate its quality, and test consumers' knowledge and acceptance (Delahunty et al., 1997; Válková et al., 2007; Tomović et al., 2013; Henrique et al., 2015).
Other studies have focused on the classification of cooked hams manufactured with pork legs produced in different countries and with different percentages of brine injection, using a chemometric approach based on physical and chemical parameters (Casiraghi et al., 2007; Moretti et al., 2009). However, the results of all these investigations are not always easily comparable because they take into account different raw materials and processing procedures (Tomović et al., 2013).
The aim of the present study was to analyze Italian cooked pork hams belonging to the main commercial categories for quality control by applying a combined approach of sensory (descriptive analysis) and fast instrumental (image and texture) analysis.
Samples
The research was carried out on commercial brands of cooked pork ham belonging to different product categories: cooked ham (CH), "selected" cooked ham (SE), and "high quality" cooked ham (HQ). The main characteristics of these three classes are reported in Table 1. In particular, 15 samples (5 for each category) were selected in order to represent the variety of cooked pork hams available on the Italian market.
The set of samples included: a balanced number of samples belonging to the different commercial categories (CH, SE and HQ); samples belonging to the most common Italian brands of cooked pork ham; and samples characterized by different intensities of the sensory attributes (a larger set of samples was evaluated during the training, as described in paragraph 2.2).
A sensory description of each product was generated, and sensory differences between products were described and quantified by a panel of highly trained assessors who were preselected for good sensory abilities and received general training as described in the following section. Moreover, textural and appearance properties (sensory and instrumental) were measured on the whole set of samples. All cooked hams (pieces of about 5 kg) were stored at 4°C, vacuum packed and protected from light; physical analyses were carried out in several replicates, whereas for the sensory analysis the final score was the average of the scores assigned by each judge to the samples in three different sessions.
Sensory characterization by descriptive analysis (DA)
Samples were tasted by a panel of 8 expert sensory assessors, balanced in terms of gender, varying in tasting experience, and previously trained in the assessment of cooked ham. All of them were regular consumers of cooked ham and interested in the study. Only assessors who demonstrated specific characteristics such as acuity, the ability to communicate and/or describe, knowledge of the product involved, interest, motivation and availability to attend both the training and subsequent assessments were selected. The recruitment, preliminary screening and training were done according to ISO 8586:2012 and ISO 13299:2010.
The performance of selected assessors should be monitored regularly to ensure that the criteria by which they were initially selected continue to be met.
During different sessions, the DA panel generated a list of appearance, aroma, taste and texture attributes using consensus training (Heymann et al., 2014). The training proceeded in 3 sessions: (i) definition of each descriptor of the sensory vocabulary; in this step the panellists chose a list of 10 non-overlapping attributes that permit a descriptive analysis of the samples under study and, at the same time, represent a useful tool for the quality control of the product; (ii) assessment of the intensity and memorization of the scale; (iii) sensory evaluation and monitoring of the performance of the selected assessors in terms of repeatability, discriminatory capacity and reproducibility.
An agreement on the meaning of the attributes of the sensory lexicon must be obtained. For this reason, it is important to clearly define the attribute name, a written definition, the method of assessment, and reference standards able to help judges memorize the different intensity levels for each of the selected descriptors.
After attribute generation, the product assessment protocol must be determined in order to standardize the procedure and avoid bias. This step includes the way in which the product needs to be assessed and the methods used to reset the senses back to a neutral state between samples. Then, a wide range of samples of a product should be evaluated by rating the intensity of each attribute on a scale. This training improves the judges' ability to use the sensory scale and promotes the use of the ends of the scale. The performance check is generally carried out by applying statistical treatments to confirm that the panel works in a consistent and reliable way. The conventional profiling method was applied (Meilgaard et al., 2007). The final list of descriptors included 3 relative to appearance: typical appearance (recognition of major muscles), pink intensity (intensity of colour), presence of fat (total amount of fat inside the slice); 3 perceived by the orthonasal and retronasal routes: overall aroma (intensity of the total aroma of the product), spices and flavours (intensity of spices and other flavours), smoky (aroma associated with smoked notes in meat products); 2 gustatory: sweet (basic taste), salt (basic taste); and 2 relative to texture: cohesiveness (resistance of the product to separation, to be assessed during the first 3-4 bites), juiciness (amount of juice released from the product during mastication).
The intensity of each attribute was rated using a linear unstructured 100 mm scale anchored at its extremes (0: absence of sensation; 100: maximum sensation intensity), and results were expressed as the mean of three replicates.
Samples were coded with three random numbers and presented to the assessors in randomized blocks. Between samples, a break with water rinses and unsalted bread sticks was suggested to reduce carry-over effects as much as possible. To make it easier to understand and evaluate the visual attributes, a group of product images was provided to each judge as references. These images (anchors) were selected taking into account the previous results of the training sessions and were used to illustrate the maximum, minimum or average intensity points on the scales of typical appearance, pink intensity and presence of fat. Moreover, in order to standardize the testing conditions as much as possible and avoid bias, panellists evaluated the visual attributes by observing the same slice of product on a plate, whereas the evaluation of the other attributes (smell, taste, and texture) was performed by providing assessors with a sample minced and placed in plastic cups.
Image analysis
The instrumental measurement of appearance was carried out by an "electronic eye" (visual analyzer VA400 IRIS, Alpha MOS, France), a high-resolution CCD (charge-coupled device) camera (resolution 2592 × 1944 pixels) combined with powerful data processing software. This instrument was equipped with an adjustable photo-camera (16 M colours) in a dedicated measurement room with standardized, controlled and reproducible lighting conditions. The camera imaging was software-monitored, embedded in the cabin for better protection adapted to a quality control environment, and equipped with several lenses of different focal lengths to accurately assess very small to large products.
Top and bottom lighting (2 × 2 fluorescent tubes) provided a 6700 K colour temperature with IRC = 98 (near D65: daylight on a cloudy day at noon). The lamps had to be turned on at least 15 minutes before acquisition for lighting stabilization. Samples were placed on a removable white tray, diffusing a uniform light inside the device's 600 × 600 × 750-mm closable light chamber, and the CCD camera took a picture.
The instrument performs automatic calibration with a certified colour checker, and image analysis (RGB scale or CIE L*a*b*) and statistical analysis were carried out using the advanced software available in the instrument (Alphasoft, version 14.0). The data processing software extracts colour parameters from the picture and can then correlate these data with data from sensory panels.
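The Alphasoft processing pipeline is proprietary, but the general idea of extracting colour parameters from a slice image can be sketched with scikit-image as below. The file name, the crop to three channels, and the 8-level quantization used to build a simple colour histogram are illustrative assumptions, not the instrument's actual settings.

```python
import numpy as np
from skimage import io, color
from skimage.util import img_as_float

# Load a ham-slice image (file name is hypothetical).
rgb = img_as_float(io.imread("ham_slice.png")[..., :3])

# Mean colour of the slice in RGB and in CIE L*a*b*.
mean_rgb = rgb.reshape(-1, 3).mean(axis=0)
mean_lab = color.rgb2lab(rgb).reshape(-1, 3).mean(axis=0)
print("mean RGB:", mean_rgb, "mean L*a*b*:", mean_lab)

# A crude colour "spectrum": frequencies of quantized RGB colours,
# loosely analogous to the instrument's colour histogram.
quantized = (rgb * 7).round().astype(int)     # 8 levels per channel
codes = quantized @ np.array([64, 8, 1])      # one integer code per pixel
colours, counts = np.unique(codes, return_counts=True)
print(len(colours), "distinct quantized colours")
```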
TA-Hdi ® texture analyzer
Textural characteristics of HQ, SE, and CH cooked hams were evaluated at 22°C using a TA-Hdi ® texture analyzer (StableMicro Systems, UK) equipped with a 245 N loading cell. Texture profile analysis (TPA), Allo-Kramer (AK) shear force, expressible moisture (EM), and gel strength were assessed in 10 replicates for each sample.
TPA, consisting of a double compression, was run on a 1 cm-high and 2 cm-wide cylindrical sample compressed to 40% of its initial height. The test was performed using a 5 cm-diameter aluminium probe, and 20 s elapsed between the two compression cycles. Force-time deformation curves were obtained, and hardness (N), springiness, cohesiveness, chewiness (N), and gumminess (N) were calculated according to Bourne (1978).
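As a rough illustration of how these TPA parameters can be derived from a double-compression force-time curve, the sketch below applies the usual Bourne-style definitions (hardness as the peak force of the first cycle, cohesiveness as the ratio of the two compression areas, gumminess and chewiness as the standard products). The duration-based springiness approximation and the input arrays are our own simplifications, not the texture analyzer's internal algorithm.

```python
import numpy as np


def tpa_metrics(t: np.ndarray, f: np.ndarray, split: int) -> dict:
    """Compute TPA parameters from a double-compression curve.

    t, f  : time (s) and force (N) arrays for the whole test
    split : index separating the first and second compression cycles
    """
    def area(tt, ff):  # trapezoidal integration of force over time
        return float(np.sum((ff[1:] + ff[:-1]) * np.diff(tt)) / 2.0)

    t1, f1 = t[:split], f[:split]
    t2, f2 = t[split:], f[split:]
    hardness = float(f1.max())                  # peak force of cycle 1
    cohesiveness = area(t2, f2) / area(t1, f1)  # area2 / area1
    springiness = (t2[-1] - t2[0]) / (t1[-1] - t1[0])
    gumminess = hardness * cohesiveness
    chewiness = gumminess * springiness
    return dict(hardness=hardness, cohesiveness=cohesiveness,
                springiness=springiness, gumminess=gumminess,
                chewiness=chewiness)
```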
Shear force test was performed using an A-K shear cell (10 blades) and a crosshead speed of 500 mm min −1 according to the procedure described by Bianchi et al. (2007). From each cooked ham, a 4 × 2 × 1 cm sample was excised, weighed, and sheared perpendicularly to the direction of muscle fibers. Shear force was then calculated as N shear per g of sample.
Expressible moisture (%) was measured following the procedure proposed by Hoffman et al. (1982) with some modifications. A 4 × 1 × 0.3 cm sample was cut, weighed, and placed between four filter papers (Whatman No. 1). The sample was compressed through a single compression cycle with a load of 12.15 N for 5 min and the amount of water released per gram of meat was calculated, conventionally expressed as percentage.
Lastly, gel strength (N × cm) was assessed on a 1 cm-high and 2 cm-wide cylindrical-shaped sample using a 5 mm stainless steel spherical probe according to the procedure described by Petracci et al. (2009).
Statistical analysis
Instrumental data (AK-shear force, gel strength, expressible moisture, hardness, springiness, cohesiveness, chewiness, and gumminess) and the intensity of each sensory attribute (typical appearance, pink intensity, presence of fat, overall aroma, spices and flavours, smoky, sweet, salt, cohesiveness and juiciness) were analyzed with a one-way ANOVA or the Kruskal-Wallis test (in case of significance of the Levene test) to assess the effect of market category (HQ, SE, and CH). Sensory and physical data were explored by principal component analysis (PCA). Pearson's correlations between sensory and instrumental data were performed to check possible relations.
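The decision rule just described (one-way ANOVA unless Levene's test is significant, in which case the Kruskal-Wallis test is used) can be sketched with SciPy as follows; the function name and the example scores are hypothetical.

```python
import numpy as np
from scipy import stats


def category_effect(hq, se, ch, alpha=0.05):
    """Test the market-category effect on one variable, switching to
    Kruskal-Wallis when Levene's test rejects variance homogeneity."""
    groups = [np.asarray(g, dtype=float) for g in (hq, se, ch)]
    if stats.levene(*groups).pvalue < alpha:
        stat, p = stats.kruskal(*groups)
        return "Kruskal-Wallis", stat, p
    stat, p = stats.f_oneway(*groups)
    return "one-way ANOVA", stat, p


# Hypothetical juiciness scores (0-100) for the three categories:
print(category_effect([35, 40, 38, 37, 41],
                      [48, 50, 47, 52, 49],
                      [60, 58, 63, 61, 59]))
```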
Partial least squares (PLS) regression was also applied to predict sensory attributes from instrumental variables. A cross-validation method was employed to validate the PLS models. The precision and predictive capability of the models were evaluated by the coefficient of determination (R²) and the root-mean-square error estimated by cross-validation (RMSECV). All statistical analyses were performed with XLSTAT software, version 7.5.2 (Addinsoft).
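Although the authors used XLSTAT, the same PLS-with-cross-validation workflow can be sketched with scikit-learn; the data below are random placeholders for the real measurements, and the choice of two latent components is an illustrative assumption.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score, mean_squared_error

# X: instrumental variables (e.g., electronic-eye colour frequencies);
# y: one sensory attribute (e.g., pink intensity). Placeholder values.
rng = np.random.default_rng(0)
X = rng.normal(size=(15, 20))        # 15 samples x 20 variables
y = 2.0 * X[:, 0] + rng.normal(scale=0.3, size=15)

pls = PLSRegression(n_components=2)
y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()

r2 = r2_score(y, y_cv)
rmsecv = float(np.sqrt(mean_squared_error(y, y_cv)))
print(f"R2(CV) = {r2:.2f}, RMSECV = {rmsecv:.2f}")
```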
Sensory analysis
Before the sensory evaluation of samples, the reliability of the panel's performance and the training efficiency were checked to ensure reproducibility and repeatability (data not shown). Sensory profiling results (Table 2) showed that, in general, all visual attribute intensities followed an upward trend passing from CH to SE to HQ samples; on the other hand, regarding texture attributes, there was a decreasing trend for juiciness and an increasing trend for cohesiveness intensity going from CH to SE to HQ. In contrast, olfactory and taste attributes did not appear to be able to differentiate the commercial class to which a product belonged.
These results are in agreement with previous studies in the literature, which found appearance and texture sensory attributes to be more significant than flavour descriptors in describing and differentiating hams (Nute et al., 1987), also when the sensory evaluation was carried out by consumers (Delahunty et al., 1997).
The importance of product appearance was also confirmed by a recent study in which the effects of different factors (visual appearance, price and pack label) on the purchasing decision were investigated (Resconi et al., 2016). In the PCA of the sensory data (Fig. 1), HQ samples were positioned between the first and the second quadrants; they were characterized, above all, by the highest intensities of pink intensity, typical appearance and cohesiveness and, at the same time, by the lowest intensity of juiciness. In the third and fourth quadrants, all CH samples and one SE sample were positioned, showing the lowest intensities of all visual attributes and the highest value of juiciness.
Similar results were also observed by Tomović et al. (2013) in a study performed on cooked hams processed with different carcass chilling methods (rapid and conventional) and times of deboning, in which the colour panel score increased with decreasing juiciness.
Moreover, a recent study by Henrique et al. (2015), in which the Check-All-That-Apply (CATA) questionnaire was applied for the sensory characterization of cooked ham by consumers, showed that appearance attributes (characteristic ham aspect, intense pink colour, uniform aspect) and texture attributes (juicy, tender) were positively correlated with preference and willingness to pay, whereas a pale colour had a negative influence on liking.
In the present study, the sensory traits mainly ascribed to the high quality product category were pink intensity, typical appearance and cohesiveness. On the contrary, the highest intensity of juiciness mainly defined the standard quality cooked hams; this result could be linked to the effect of the addition of phosphates as a brine ingredient, which increases the amount of retained water and therefore affects texture traits (Toldrá et al., 2010; Resconi et al., 2016).
Fig. 1. PCA of the sensory data: score plot on the left side (CH, cooked ham; SE, "selected" cooked ham; HQ, "high quality" cooked ham) and loading plot on the right side.
Image analysis
Cooked ham has a typical light pink colour as a consequence of nitrite addition.
During the heating process, the colour of the ham changes from red (pork meat) to pink; this physical characteristic depends primarily on the initial content of myoglobin in the muscles used and, consequently, on the muscle type and the age of the animal at slaughter (Toldrá et al., 2010).
To characterize the product's appearance, an "electronic eye" able to quickly assess this property using an acquired image subsequently processed by specific software was used. Data processing by the electronic eye allowed a colour spectrum of each sample to be obtained in RGB coordinates (Red, Green, Blue) that could be compared across categories (Fig. 2). In particular, for CH, a greater colour homogeneity, described by the predominance (higher frequency percentage) of a lower number of bars (colours) corresponding to different tonalities of pink, was seen; on the contrary, for the "selected" (SE) and "high quality" (HQ) categories, the trend was reversed, and the number of bars increased passing from SE to HQ. These results are in contrast with Iqbal et al. (2010), who found that inhomogeneous colour surfaces characterized the lowest quality class when the images of three qualities of pre-sliced pork with different brine injection levels were compared. However, these authors indicated that the lack of homogeneity was due to the presence of discoloured sections of pork muscles while, in this study, it is mainly linked to the presence and visual recognition of the major thigh muscles of the pork leg.
To evaluate its ability to discriminate the different categories of cooked ham, the data collected by the electronic eye on the five samples of each commercial class were processed by PCA (Fig. 3). A selection of the most discriminant variables was performed in order to improve the separation between samples. The first two components explained 80.68% of the total variance (62.00% for F1 and 18.68% for F2). Considering the locations of the products on the surface (PCA scores), it is possible to note that HQ and SE samples were grouped in a cluster, whereas CH samples were clearly differentiated from HQ and SE but divided into two groups, mainly as a function of F1. The different directions/locations of the vectors (PCA loadings) show which variables (colours) were involved in the appearance variations among samples. The variable "colours-2679", which describes the strongest pink intensity, mainly affected the position of HQ samples; on the contrary, the variable "colours-3514", which describes the weakest pink intensity, was opposite and characterized some CH samples.
These differences were probably linked to intrinsic variables of the raw material, such as the different water content, which affected the concentration of meat pigments and therefore the ham colour (Moretti et al., 2009).
Fig. 2. Examples of colour spectra obtained from the data processing by the electronic eye. CH, cooked ham; SE, "selected" cooked ham; HQ, "high quality" cooked ham.
The PCA score plot shows a good discrimination between samples: the lowest quality class (CH) was clearly differentiated from the highest one (HQ); however, the intermediate category (SE) did not significantly differ from HQ and belonged to the same cluster. This is in accordance with the study of Iqbal et al. (2010), who reported that it is easier to differentiate between the lowest and highest qualities as a function of their colour uniformity, homogeneity and fat content, and therefore confirms the effectiveness of specific image descriptors of colour in checking quality specifications.
TA-Hdi ® texture analyzer
The data for gel strength, expressible moisture, shear force, and TPA parameters are summarized in Table 3. Overall, these results showed that the instrumental traits of HQ hams differ from those of both CH and SE, which seem to be more closely related, especially considering textural traits. These differences were likely due to complex dissimilarities, such as raw meat characteristics, ingredients, brine injection level, and processing, among products of different market categories, as noted in previous studies (Casiraghi et al., 2007; Válková et al., 2007; Moretti et al., 2009; Pancrazio et al., 2015). The lower expressible moisture in HQ hams was likely due to the lower total moisture imposed by national legislation. Moreover, HQ hams also had higher shear force and springiness because whole muscles were used; a lower brine injection level was also found by Casiraghi et al. (2007).
This agrees with the results of Válková et al. (2007), who found that shear force and springiness were negatively and positively correlated, respectively, with moisture content. Casiraghi et al. (2007) did not find any differences in product cohesiveness when hams with increasing brine injection levels were compared.
The results of the PCA of the instrumental texture parameters are shown in Fig. 4. Two principal components were extracted that accounted for 74.88% of the variance in the 8 variables. The first PC was mainly defined by the instrumental traits of gumminess, chewiness, hardness, and gel strength, while the second PC was characterised by three instrumental parameters (AK-shear force, springiness, and cohesiveness). Expressible moisture appeared to contribute equally to both PCs. A good discrimination between HQ and the other classes of products (CH and SE) was observed. Positive PC2 values were associated with the HQ samples, one SE ham and one CH ham, thus confirming that AK-shear force, springiness, and cohesiveness were mainly involved in product category discrimination. Otherwise, hardness, gumminess, chewiness, and gel strength seem to vary independently within each market category.
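A sketch of how the loading structure reported above can be inspected is given below; the variable names follow the paper, while the measurements themselves are placeholders, and standardization is assumed since the texture traits have mixed units:

```python
# Sketch: which standardized texture variables dominate each principal component.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

names = ["gel_strength", "expressible_moisture", "AK_shear_force", "hardness",
         "springiness", "cohesiveness", "gumminess", "chewiness"]
X = np.random.rand(15, 8)                    # placeholder measurements
Z = StandardScaler().fit_transform(X)        # traits have heterogeneous units
pca = PCA(n_components=2).fit(Z)
for pc, load in enumerate(pca.components_, start=1):
    top = sorted(zip(names, load), key=lambda p: -abs(p[1]))[:3]
    print(f"PC{pc} mainly defined by:", top)
```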
The relationship between sensory and instrumental data
The data obtained from both sensory and instrumental approaches were also statistically assessed to determine possible correlations; the sensory attribute of pink intensity was correlated with the physical parameters (electronic eye and texture analyzer), with Pearson's correlation coefficients ranging between 0.57 and 0.72 (p < 0.05).
In particular, the pink intensity attribute showed a positive correlation with AK shear force (0.62), springiness (0.57), and the variable "Colours-2679" (0.72), which, in this study, were related to products belonging to the high quality category. A negative correlation was found, instead, with the variable "Colours-3514" (-0.66).
On the other hand, no significant correlation was found between the attribute presence of fat and the instrumental measurements (appearance and texture), in agreement with previous studies (Válková et al., 2007). Considering the texture sensory attributes, only juiciness showed negative correlations with the instrumental parameters of AK shear force (-0.79), cohesiveness (-0.54), and springiness (-0.63) (p < 0.05). In contrast, Resconi et al. (2015) reported a positive correlation between juiciness and springiness, both enhanced with increasing polyphosphates, while, in the present work, only juiciness characterized the product category with the higher phosphate content (CH).
Among the texture instrumental parameters, positive correlations were found between gumminess and hardness (0.95), as already observed by Válková et al. (2007), springiness and cohesiveness (0.76), chewiness and hardness (0.75), and chewiness and gumminess (0.89) (p < 0.05); the two latter correlations were also confirmed by Resconi et al. (2015), who found a reduction in hardness and gumminess as a function of increasing phosphate content.
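The correlation screening underlying these figures amounts to pairwise Pearson tests; a minimal sketch (with hypothetical values in place of the real panel scores and instrumental readings) is:

```python
# Sketch of one pairwise Pearson test; the arrays are hypothetical stand-ins.
from scipy.stats import pearsonr

pink_intensity = [5.1, 6.3, 7.0, 4.8, 6.9, 7.4]        # hypothetical panel means
ak_shear_force = [21.0, 24.5, 26.1, 20.2, 25.8, 27.0]  # hypothetical shear values
r, p = pearsonr(pink_intensity, ak_shear_force)
if p < 0.05:
    print(f"significant correlation: r = {r:.2f} (p = {p:.3f})")
```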
In addition, some correlations were also obtained among the sensory attributes: pink intensity showed significant positive correlations with typical appearance (0.84) and cohesiveness (0.72) and a negative one with juiciness (-0.64) (p < 0.05); the latter result is in accordance with Tomović et al. (2013), who reported a similar correlation coefficient (-0.51, p < 0.05), confirming that these attributes are significant in the evaluation of the sensory profile of cooked ham obtained from different raw materials and technological process parameters.

Fig. 4. Principal component analysis (PCA) built using data related to the textural characteristics evaluated by the texture analyzer (score plot on the left side; loading plot on the right side). CH, cooked ham; SE, "selected" cooked ham; HQ, "high quality" cooked ham.
The instrumental dataset and the sensory attributes related to it were also subjected to PLS regression with the aim of estimating a prediction model for the sensory characteristics. For the visual and texture sensory attributes (cohesiveness, juiciness, pink intensity, and presence of fat), models using data from the electronic eye and the texture analyzer were developed. All PLS results are shown in Table 4. The best results were obtained from the models developed using the electronic eye data, which allowed an effective prediction of pink intensity (R² = 0.95, RMSECV = 3.24) and presence of fat (R² = 0.88, RMSECV = 5.84), as shown in Fig. 5. For colour prediction, the developed model performed better than that obtained by Iqbal et al. (2013) in cooked, pre-sliced turkey hams, though with a different imaging system (NIR hyperspectral imaging).
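The paper does not state the cross-validation scheme behind the RMSECV values; the sketch below assumes leave-one-out cross-validation, a common choice for small sample sets, and uses placeholder data and an arbitrary number of latent variables:

```python
# Sketch of a PLS prediction model with leave-one-out cross-validation (RMSECV).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

X = np.random.rand(15, 4096)       # electronic-eye spectra (placeholder)
y = np.random.rand(15) * 100       # sensory score, e.g. pink intensity (placeholder)

pls = PLSRegression(n_components=3)              # latent-variable count assumed
y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
r2 = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R2 = {r2:.2f}, RMSECV = {rmsecv:.2f}")   # paper: 0.95 and 3.24 for pink
```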
Conclusions
In this investigation, the application of physical-rheological and sensory techniques provided useful information for the quality control of Italian cooked ham samples. Sensory analysis proved effective in defining the profile and the quality of the product. Among the sensory attributes, those relating to appearance (pink intensity, typical appearance, and presence of fat) and texture (cohesiveness and juiciness) were the most effective in describing the class of ham, providing a significant discrimination, especially between the lowest quality market category (CH) and the two higher quality categories (HQ and SE).
The data obtained by the electronic eye were in agreement with the sensory data; on the other hand, considering the physical-rheological parameters, AK-shear force, expressible moisture, springiness, and cohesiveness were able to clearly discriminate only the premium class ("high quality") from the others.
The electronic eye applied in this study allowed the development of PLS models with promising predictive value for the visual attributes of presence of fat and pink intensity (R² = 0.88, RMSECV = 5.84 and R² = 0.95, RMSECV = 3.24, respectively).
Based on these preliminary results, the use of physical-rheological parameters could be proposed to complement sensory analysis, for example in the definition of reference standards and for the rapid quality control of different categories and classes of the same product. This study made it possible to hypothesize a preliminary model for fast and effective screening conducted within a "one-day" experimental plan, suitable for the quality control of other categories of meat products as well. This analytical approach could be particularly interesting for food providers, buyers, and retailers that intend to protect and promote these products, to better address consumer needs, and to enhance their competitiveness on the market. However, further efforts are needed to differentiate and certify a higher quality product, to improve consumers' knowledge, and to direct them towards a more informed choice. Work in progress includes a consumer study on cooked pork hams to investigate the correspondence between the attributes generated by the panel and the consumer lexicon used in quality-related communications.
Declarations
Author contribution statement

Sara Barbieri, Francesca Soglia: Performed the experiments; wrote the paper.
Li in NGC 6752 and the formation of globular clusters
Li abundances for 9 turnoff (TO) stars of the intermediate-metallicity cluster ([Fe/H]=-1.4) NGC6752 are presented. The cluster is known to show abundance anomalies and anticorrelations observed in both evolved and main sequence stars. We find that the Li abundance anticorrelates with Na (and N) and correlates with O in these TO stars. For the first time we observe Pop II hot dwarfs systematically departing from the Spite plateau. The observed anticorrelations are in qualitative agreement with what is expected if the original gas were contaminated by material processed in intermediate-mass AGB stars. However, a quantitative comparison shows that none of the existing models can reproduce all the observations at once. The very large amount of processed gas present in the cluster implies not a 'pollution' but rather that the whole protocluster cloud was enriched by a previous generation of stars. We finally note that the different abundance patterns in NGC 6397 and NGC 6752 imply different compositions of the pre-enrichment ejecta for the two clusters.
Introduction
With the advent of 8-m telescopes we are able to obtain high-resolution, high-quality spectra of stars on the main sequence of globular clusters, which have allowed astronomers to derive accurate abundances for them. These abundances have set several limits on the physics of stellar atmospheres and have shed some light on the long-debated problem of the origin of chemical anomalies in globular clusters (Thevenin et al. 2001, James et al. 2004, Carretta et al. 2004). In this context Li abundance studies play a special role, given the fragility of this element, which can be easily destroyed in the stellar interior. Indeed, while early Li studies in globular clusters mostly concentrated on problems related to the primordial nature of the Li plateau (Molaro and Pasquini 1994, Pasquini and Molaro 1996, Deliyannis et al. 1995, Boesgaard et al. 1998), later studies have emphasized the role of Li in understanding mixing phenomena in globular cluster stars or in constraining the mechanism responsible for the chemical pollution of the clusters (Castilho et al. 2000, Thevenin et al. 2001, 2002). The only cluster studied in some detail as faint as the turnoff is the nearby, metal-poor NGC6397, which exhibits an impressively constant Li abundance among TO stars, at the same level as the field-star plateau. Since NGC6397 shows chemical inhomogeneity in the oxygen abundance among main sequence stars and a high nitrogen abundance, as recently reported by Pasquini et al. (2004), it is difficult to explain the plateau Li abundance without fine tuning. The proposed Li production from intermediate-mass (IM) AGB stars should give yields very close to the values of primordial nucleosynthesis. The presence of a significant amount of beryllium suggests that these IM-AGB stars formed very early after the big bang and polluted the gas, which was then exposed for several hundred million years to the galactic cosmic-ray flux before the stars we now observe formed (see the discussion in Ventura et al. 2001). Since the case of NGC6397 would require a fine tuning between Li production and destruction in the IM-AGB stars to reproduce exactly the observed Li plateau level, it is interesting to investigate the behaviour of Li in other clusters to test whether they carry the signature of AGB pollution. NGC6752 has a metallicity of [Fe/H]=-1.43 and, with a temperature of about 6200 K, its TO stars belong to the Li plateau (Spite and Spite 1982). NGC6752 is therefore an ideal cluster for this study. The cluster is one of the prototypes of globular clusters with chemical anomalies, where the first O-Na anticorrelation was discovered among TO stars (G01).
Sample selection and observations
We selected the stars of the G01 sample. Star numbers, together with their photometric properties, atmospheric parameters, and Li and Na abundances as derived in G01, are listed in Table 1. These stars are on the same metallicity and temperature scale as the NGC6397 stars in G01. Although G01 showed that the temperatures of the stars are compatible with a single value, since the Li abundance is very sensitive to small differences in effective temperature, we also computed the effective temperature of each star from the observed photometric values, assuming a reddening of E(b-y)=0.032 (Gratton et al. 2003 (G03)), a metallicity of -1.43 as derived by G01, and the Alonso et al. (1996) scale. The ESO archive was searched to identify UVES observations of similar stars, but no spectra of TO stars were available.
Abundances were computed with the method outlined in Bonifacio et al. (2002). Kurucz models were computed with the appropriate metallicity and temperature, and the Li abundances were derived from the observed equivalent widths (EWs). The typical error in the Li abundance is 0.05 dex. We expect an error of up to 0.1 dex in A(Li) when considering possible uncertainties of up to 100 K in effective temperature. However, we note that the moderate reddening of the cluster (E(B-V)=0.040) and the use of the same temperature scale for all the stars imply that most of this uncertainty applies to the absolute value of the Li abundance, but much less to the results concerning the dispersion of the Li abundance in the cluster. The latter should be dominated by the photometric error in the b-y colour, which should not exceed 0.02. This translates into an uncertainty of ≈60 K in T_eff, or 0.05 dex in A(Li).
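These numbers follow from simple linear error propagation; below is a minimal check in Python, taking the 0.1 dex per 100 K sensitivity stated above at face value (the small difference from the quoted 0.05 dex reflects rounding in the text):

```python
# Worked check of the quoted error budget; slopes are taken from the text.
dA_per_K = 0.1 / 100.0          # dex in A(Li) per K of T_eff
sigma_by = 0.02                 # mag, photometric error in b-y
K_per_mag = 60.0 / 0.02         # K per mag, as implied by the quoted ~60 K
sigma_T = sigma_by * K_per_mag  # = 60 K
sigma_A = dA_per_K * sigma_T    # = 0.06 dex, quoted as ~0.05 dex
print(f"sigma_T ~ {sigma_T:.0f} K -> sigma_A(Li) ~ {sigma_A:.2f} dex")
```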
Discussion
The equivalent width measurements of Table 1 indicate at first glance a variability of Li. We know, however, that establishing real Li variations in a cluster requires a proper analysis of the errors and of the additional, possibly hidden biases introduced by the analysis method (see e.g. Bonifacio 2002). Since the stars are consistent with a single effective temperature, we start the analysis by considering the equivalent widths only. The S/N ratio of the observations varies between 34 and 67 (per pixel), and the errors estimated in the equivalent widths range from 1.8 mÅ for the best-exposed spectra to 4 mÅ for those with lower S/N. (The relative S/N ratio among the stars can be deduced from the errors in the Li abundances given in Column 6.) The difference in Li equivalent width among the stars is between 5 and 10 times larger than the typical measurement error for any object. The average of the equivalent widths is 26.9 mÅ, with a σ of 8.5 mÅ. The dispersion is 2-5 times larger than the measurement errors on the single spectra. In Figure 1 the Li equivalent widths are plotted vs. Na abundance. The figure shows a clear anticorrelation between Li and Na, and Kendall's τ test gives an anticorrelation probability of 99.78%.
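For readers wishing to reproduce this kind of test, a sketch using Kendall's τ is shown below; the nine value pairs are hypothetical stand-ins for the Table 1 measurements, and the "anticorrelation probability" is taken, following the phrasing above, as 1 - p:

```python
# Sketch of the Kendall tau anticorrelation test; data are placeholders.
from scipy.stats import kendalltau

ew_li = [38.0, 35.5, 31.0, 28.5, 26.0, 24.0, 21.0, 19.5, 18.0]  # mA, hypothetical
na_fe = [0.05, 0.10, 0.20, 0.30, 0.38, 0.45, 0.60, 0.72, 0.85]  # dex, hypothetical
tau, p = kendalltau(ew_li, na_fe)
print(f"tau = {tau:.2f}, anticorrelation probability ~ {(1 - p) * 100:.2f}%")
```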
To exclude the possibility that the Li variations are an artifact produced by temperature differences among the stars, we recomputed the Li abundances adopting the Alonso et al. T_eff values given in Column 8 of Table 1. The resulting Li abundances show a lower mean Li level (not surprising, since the photometric scale is 135 K lower) and a Li scatter slightly lower than, but comparable with, that obtained under the single-temperature hypothesis. This confirms the presence of the Li-Na anticorrelation.
With this temperature scale the [Na/Fe] abundances also change. However, since [Na/Fe] increases with decreasing temperature ([Na/Fe] increases by 0.034 dex for a temperature difference of -100 K; see G01) while the Li abundance decreases, the Li-Na relationship remains substantially unchanged, as can be seen in Figure 2 and by directly comparing the values tabulated in Table 1: the [Na/Fe] range spanned by the stars is about one order of magnitude, irrespective of the T_eff scale used.
We are therefore confident that we have found, for the first time, evidence of a Li-Na anticorrelation in a group of GC stars with characteristics (metallicity, temperature and gravity) close to those of the Spite plateau.
There is no general consensus about the metallicity at which the plateau ends in field halo dwarfs. Bonifacio & Molaro (1997) defined this edge to occur around [Fe/H]=-1.5, where the first signs of stellar Li depletion start to appear. NGC 6752, with [Fe/H]=-1.4, is near this edge, being in fact slightly more metal-rich. The metal enhancement is, however, so small (and also dependent on the zero point adopted) that the membership of these TO stars in the plateau is unquestionable.
We interpret the Li-Na anticorrelation as evidence that the gas forming NGC 6752 has been contaminated by a previous population which is responsible for the chemical inhomogeneities.
We must analyze the extent to which our targets have been polluted by the processed material. This aspect, in turn, will also provide us with fresh information about the details of the cluster formation.
3.1. Implications for cluster chemical anomalies and cluster formation

An important point to recall is that the CNO cycle, which produces the N overabundance and O underabundance, and the Ne-Na cycle, which produces the Na overabundance, occur at very high temperatures, 20-30 times higher than the ∼2.5 million K at which Li is destroyed. It is thus expected that in the regions where these cycles operate, no Li is left. If 'pristine' and 'processed' material are mixed, then Li, Na and N are expected to show some anticorrelation, and Li and O some correlation.
These trends are indeed visible in Figures 1 and 2. There are two additional relevant aspects to be considered. The first concerns the stars with the highest Li. These stars show the lowest Na and highest O, and most likely they are only slightly polluted by processed material. If we take the Li abundance of these stars on the G01 temperature scale at face value, their A(Li) is about 2.45, or 0.1 dex higher than the plateau level. We note that these values are also found in NGC6397 stars when adopting the same temperature scale (Column 7 of their Table 2). We therefore interpret this higher Li abundance as entirely due to the use of the G01 temperature scale, which is hotter than the Alonso and the Bonifacio et al. (2002) temperature scales.
The second aspect concerns the most Li-poor stars: it is worth noticing that even in the most Li-poor stars the Li line is always detected, although at an abundance level of A(Li)∼2, i.e. 2-3 times lower than in the stars with the highest Li content. The fact that some Li is preserved even in the most Na-rich stars confirms that the observed chemical anomalies were not produced by the stars themselves, but rather that the gas was processed previously somewhere else. This was shown by G01, because TO stars should not reach temperatures high enough to ignite the Na cycle. However, this behaviour differs from what is observed in the metal-poor cluster NGC 6397, where A(Li) is constant.
The contamination can be obtained in different ways, either through Bondi accretion or through a process involving the whole protocluster cloud. We favour the latter because, if the chemical anomalies were limited to the externally accreted layers of the star, they would later be washed out when the stars undergo the first dredge-up (as happens for Li; cf. Grundahl et al. 2002). This is not the case, since these anomalies are observed all along the RGB (see e.g. Carretta et al. 2005). The fact that Li is observed even in the most 'contaminated' stars implies, then, that some Li must have been created by the previous generation of (contaminant) stars.
The second possibility, however, implies such a huge contamination of the protocluster cloud that 'contamination' is probably no longer the most appropriate term.
The anomalous abundances suggest a precise composition for the contaminating gas. The maximum difference observed in the Na abundance is almost one order of magnitude, that in the oxygen abundance is about 0.8 dex, and that in the Li abundance is only about a factor of 2.5. At the same time the other, heavier elements remain unchanged; in particular, the accreted material was not enriched in s-process elements (James et al. 2004). This shows that the most polluted stars have accreted at least 90% of their gas as Na-rich material, with a Li content lower than but close to primordial and a negligible content of oxygen. If a large fraction of the stars' mass is indeed made up of this processed material, it is likely that this is just the signature of a group of stars in a limited mass range. We can draw a scenario where the elements created by supernovae are well mixed in the protocluster, while the products of stellar winds, with lower velocities, would be more inhomogeneous. The ejecta of the previous generation of stars had an upper-limit content of A(Li)∼2.0, A(Na) of at least ∼5.4, A(O) of less than ∼7.0, and A(N)∼7.9. The general behaviour is qualitatively consistent with the models of Ventura et al. (2001, 2002), who predicted the Li-O correlation from intermediate-mass AGB contamination and that Li should not be destroyed completely.
In more quantitative terms, there is rather good agreement with the models published by Ventura et al. (2002) for very low metallicity IM-AGB stars. Ventura et al. predict, for a Z=0.0006 initial composition, a Li abundance of the order of A(Li)∼1.5-2, an oxygen abundance of ∼6.5-7.4, and an N abundance of ∼6.9-8.3. The nitrogen abundance in NGC6752 TO stars is enhanced (Carretta et al. 2005), showing clear evidence of CNO processing. According to the models, the low oxygen abundance provides a clear indication that the generation of stars producing the chemical inhomogeneities in NGC6752 could only originate from 4-5 M⊙ metal-poor (Z < 0.0006) AGB stars. The relatively high Li, on the other hand, is predicted to be produced only by fairly low mass (3 solar masses) and relatively metal-rich progenitors. Although full, detailed modelling might change these results, our preliminary conclusion is that the observed oxygen and Li abundances seem incompatible with progenitors of a single type. We note that other works found that the oxygen-Na anticorrelation cannot be quantitatively explained by the present IM-AGB models (Denissenkov and Herwig 2003, Palacios et al. 2005, Ventura and D'Antona 2005).
Uncertainties in fundamental aspects of AGB evolution such as mass loss rate and treatment of convection at present seem to hamper the generation of realistic predictions for low metallicity AGB stars, and we might be at the stage where observations such as those presented here will serve to constrain evolutionary models rather than the opposite.
Another important aspect is to understand the difference between the Li behaviour in NGC6397 and in NGC6752 if the IM-AGB scheme were acting in both clusters. A corollary implication would be that the ejecta of the contaminants of the two clusters had different chemical compositions. Following the same argument as above, and taking the O and N data from Pasquini et al. (2004), for NGC6397 we expect ejecta which were richer in Li (by about a factor of 2, or A(Li)∼2.3), with an O-poor content of A(O) less than ∼6.7, and about A(N)∼7.3. As far as Na is concerned, the value measured by G01 in the TO stars is at the level of A(Na)∼4.5, constant among all stars, but the analysis of the subgiants by Carretta et al. (2005), who used the G01 temperature scale (their Table 1), shows clear variations, with values of A(Na) up to ∼4.8.
We finally comment that, in order to explain the chemical variations observed in the AGB context, huge pollution is required. The two stars n4907 and n200613 should have been formed from more than ∼90% IM-AGB processed material. If our sample of stars is representative of the cluster population, this would imply that a large fraction (say, about a half) of the gas which formed the stars we now observe was indeed processed by the previous population of IM-AGB stars. The actual cluster mass is about 2×10^5 solar masses; therefore at least 10^5 solar masses were processed by IM-AGB stars, requiring a minimum of ∼3×10^4 IM-AGB stars to produce the observed anomalies. Since there is no hint of the presence of low-mass stars belonging to this first generation, this implies a flat-topped IMF. A parallel effect would be the He enhancement produced by this IM-AGB processing, which was analyzed by Ventura et al. 2002 and by D'Antona et al. 2003. In addition, a considerable number of remnants should be present in the cluster. These white dwarfs might, however, not be easily detectable: they would likely lie in the faint tail of the luminosity function and, in addition, they might have been segregated during the complex dynamical history of the cluster.
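The census of polluters sketched above is straightforward arithmetic; in the toy version below, the per-star ejecta mass is an assumption chosen only to reproduce the quoted ∼3×10^4 stars:

```python
# Back-of-envelope check of the minimum number of IM-AGB polluters.
cluster_mass = 2e5         # solar masses, present-day cluster mass (from text)
processed_fraction = 0.5   # "about a half" of the star-forming gas (from text)
ejecta_per_agb = 3.3       # solar masses per 4-5 Msun AGB star (assumed)

processed_mass = processed_fraction * cluster_mass   # ~1e5 Msun
n_agb = processed_mass / ejecta_per_agb              # ~3e4 stars
print(f"{processed_mass:.0e} Msun processed -> ~{n_agb:.0e} IM-AGB stars")
```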
Even if at present AGB stars remain the most promising candidates, the problems encountered in explaining all the observed features have led some groups to look for alternative scenarios to explain the observed abundance patterns: Yong et al. (2005) suggested the presence of a new process producing light and s-process elements simultaneously in globular clusters; Piotto et al. (2005) invoked the possible presence of low-mass SNe to explain the He-rich main sequence of ω Cen; and SNe with extensive fall-back were invoked in various flavours to explain the abundance patterns of the most metal-poor stars (Umeda and Nomoto 2003, Limongi et al. 2003). Massive stars able to eject light elements while retaining the heavy elements locked in the SN remnant are, in principle, attractive candidates. However, when analysing possible scenarios, we encounter several problems: it is difficult, for instance, to reproduce the very low oxygen observed, to identify a process of Li production, and, given the enormous mass of processed material required, a very peculiar IMF must be postulated.
Quantitative element analysis, such as that presented here, provides the experimental framework for solving this interesting puzzle.
‘Everyday Life’ and the History of Dictatorship in Southern Europe
Dictatorships are of course put into effect from ‘on high’, by the dictators themselves and the loyal individuals who support and enable them. However, in addition to being decreed and imposed ‘from above’, dictatorships are also effectively enacted ‘from below’, in the local spaces and everyday cultures inhabited and performed day-by-day by the people who live through them. Local representatives of the dictatorial state – party officials, civil servants, police officers, and so on – and those with semi-official positions of trust – for example, doctors, midwives, university professors, teachers and journalists – are charged with putting into practice the intended aims of the dictator, but in the process of doing so must absorb and interpret these aims, leading to their potential distortion or modification. Crucially, dictatorships are experienced subjectively by the individuals who live within their borders, who also, through their agency, actions, practices and attitudes, have some capacity – albeit heavily circumscribed in many ways – to contribute to the making – and potentially to the unmaking – of dictatorship. The basic ‘unit of experience’ of dictatorship, therefore, is locally and subjectively bound. This kind of understanding of how dictatorships have functioned in practice and of the need to uncover what can be called the ‘actually-existing dictatorship’ through the examination of the subjectivities and practices that made up their lived experience was pioneered by West German Alltagsgeschichte historians of Nazi Germany in the 1980s
and 1990s, most notably by the (late) historian Alf Lüdtke, but also by historians working in other contexts who didn't necessarily label themselves Alltagsgeschichte historians, like Sheila Fitzpatrick in her formative work on Stalinist Russia. 2 Whilst historians of the Soviet Union and of the dictatorships of post-war Eastern Europe took up the 'everyday life' frame relatively quickly, it has only been within the past decade or so that the influence of Alltagsgeschichte and of 'everyday life' history approaches (often in concert with Italian microhistory and Anglophone 'history from below') has been extended to the study of southern European twentieth-century dictatorships. Most notably this new work has focused on the dictatorships of Fascist Italy and Francoist Spain, but studies taking an Alltagsgeschichte-informed approach to understanding the Salazarian dictatorship in Portugal and the Colonels' dictatorship in Greece are also beginning to emerge. 3 This special issue showcases the work of several scholars whose research is concerned with uncovering and understanding everyday life and the lived experience of dictatorship, and its aftermath, in twentieth-century Southern Europe. Our articles cover a temporal range from 1922 to the early 1980s and geographically extend from Greek-ruled (and Turkish-invaded) Cyprus to the Portuguese-governed Azores. Our aim in presenting this special issue is both to bring to broader attention the historiographical shifts represented by the extension of an 'everyday life history approach' to national case studies where these had been previously largely absent or under-developed, and to constitute in itself an important contribution and milestone in this development. Moreover, we demonstrate the value of bringing Southern Europe into the comparative frame of existing scholarship on historical dictatorships; with a few notable exceptions, comparative histories of twentieth-century European dictatorships have tended to pivot on Nazi Germany-USSR, Nazi Germany-Fascist Italy, or Germany-Eastern Europe axes. 4 Whilst the individual articles in the special issue focus on single national cases, these are (often consciously) understood within thoroughly transnational frames of reference.
Importantly, the articles in this collection take different routes into the exploration of day-to-day lives and 'ordinary' people's lived experiences of dictatorship, and into what that might entail methodologically and empirically. 5 Crucially, though, all the articles find some degree of inspiration in Alltagsgeschichte approaches and specifically in the conceptual and analytical tools used by Alf Lüdtke, and those coined by Michel de Certeau in his observations on The Practice of Everyday Life. 6 Since its origins in the 1970s, Alltagsgeschichte has proved, with its attention to daily life and to the fragmentary episodes that comprise it, to be a fruitful avenue of research, helping us to better understand the relationships between dictatorial power and society. As an approach it has, of course, not lacked its critics. As Lüdtke set out himself, the not-insignificant criticisms levelled at everyday life histories, especially by structuralist historians of Nazi Germany, pinpointed its supposed focus on the 'tinsel and trivia' of history, with woolly definitions, unscientific methods and anecdata, producing results that, they argued, risked romanticizing, normalizing and thereby minimizing the (criminal) responsibilities of historical actors (an always heavily weighted accusation, but one with the most serious impact in the context of Third Reich history). 7 For Hans-Ulrich Wehler, speaking in 2000, the 'failure, theoretically speaking' of Alltagsgeschichte was evidenced by the rise of the culturalist approach. 8 These charges, not least the accusation of relativizing Nazi crimes, were strongly countered by Lüdtke and others, as they pointed to the possibilities of everyday life histories to complicate our understanding of dictatorial society and its power dynamics, and thereby to complicate, but not dissolve or absolve, the widely-used categories of perpetrator, victim and bystander. Undoubtedly, despite the (West) German social historians' critiques, the approach captured many historians' attention internationally, perhaps especially in the US, and its champions defended its capacity to produce 'relevant' history that shapes, and at the same time is shaped by, politics. 9 Moreover, in the resurgence that Alltagsgeschichte has undergone in more recent decades (described by Steege et al. as a 'second chapter' 10), its relationship with culturalist historical approaches has been marked by compatibility and engagement, such that researchers have called for a re-reading of its contributions and the application of these to the study of specific, and novel, cases. 11 The inherent ambiguity of everyday life becomes, paradoxically, a key source of its attraction for researchers from diverse disciplines. Everyday life is polyphonic, complex and dynamic, comprising a scenario of multivalent interactions between elite and mass, between micro and macro, and between public and private. 12 Thus, Alltagsgeschichte has laid bare the permeability of political and societal structures and their discontinuous and unstable nature, in the face of those who would see these as defining schematic frameworks of thought and action. 13 Without overlooking the objective existence of structure, Alltagsgeschichte historians placed emphasis on the agency of individuals and groups to act within and, indeed, upon their immediate surroundings.
Hence, scholarly interest is located in divergences from normative patterns of behaviour and in modes of behaviour and 'lived experience' related to everyday life, giving rise to a messier and less dichotomous account than those derived from perspectives that frame dictatorial society and politics according to a binary of dominator and dominated. 14 In this sense, Alltagsgeschichte enables us to pose an essential question: how were official policies and discourses received? Referring to the attitudes of the German population towards Nazism, Ian Kershaw noted that 'the cool deliberations of historians can seem painfully detached and remote', and to a certain extent sterile, to those who lived through that period. 15 Nevertheless, we can offer conjectures based on what was offered to society. We must delve more deeply into the capacity of the dictatorships to disseminate their ideas and to ensure that they fully penetrated, and did not simply circulate at the margins of, the social sphere. 16 However, for this very reason, we must continue to investigate the ways in which historical subjects (actors) creatively shaped and acted upon the reality that surrounded them. 17 In other words, we must analyse what, following an Alltagsgeschichte approach, can be labelled 'appropriation', which refers not only to the ways in which people make grand abstractions 'their own' in order to render them more manageable, but also to the way in which discourses and policies are re-worked 'from below' through complex processes of adaptation and negotiation that take place in the spaces and practices of daily life. 18 In effect, the populations that lived with(in) these regimes made use of a wide repertoire of attitudes and actions that fit within Michel de Certeau's analytical framework of 'strategies' and 'tactics' (the former denoting actions taken by the powerful to exercise and maintain their authority and control; the latter describing, conversely, those opportunistic, adaptive and resourceful acts which necessarily operate in the 'space of the other') and/or within James C. Scott's conceptualization of 'weapons of the weak', used to analyse forms of peasant resistance. 19 From Alltagsgeschichte we might employ the concept of the 'patchwork of practices' to highlight in particular the importance of going beyond theory and focusing on the concrete practices enacted by historical actors within the contexts in which they live, on what they do and how they do it. 20 The emphasis on historical actors' practices is vital in order to better understand how dictatorships were effectively realized in everyday spheres, the actual development of their projects and their capacity for shaping people's lives. 21 At the same time, the focus on concrete practices connects to two of the principal contributions of the Alltagsgeschichte approach. The first of these is the need to broaden our conception of what we understand as 'politics', extending this beyond political parties and political institutions and linking it to the myriad microsocial interactions that take effect in people's everyday lives. 22 In addition, it is within everyday life that historical actors utilize their own ways of doing politics, and it is here that the capacity for agency of some historical subjects, for example women, becomes more visible. 23
The second key contribution is the pertinence of paying attention to the everyday experiences of individuals, that is, to their particular modes of perceiving historical processes and the ways in which they interact with articulations and representations of power. 24 Such experiences (complex, multiple and non-linear) will naturally offer us fragmentary and inconsistent accounts. 25 However, this does not mean that they lose any validity for our analyses.
By attending to these experiences and to how past actors appropriated the lived realities that they were furnished with, we can work to rethink the nature of power relations and move beyond the binary explanations of consent/resistance. 26 This special issue brings together and showcases the work of an international group of scholars, who are all conducting innovative research into questions around symbolic practice, subjectivity, agency, Eigen-Sinn, space and memory during and in the immediate aftermath of four twentieth-century dictatorships in Southern Europe: Mussolini's fascist dictatorship in Italy; the Francoist dictatorship in Spain; Salazar's regime in Portugal; and the Colonels' dictatorship in Greece.
Kate Ferris's article investigates how everyday life and the lived experience of dictatorship was spatially framed in Fascist Italy. Focusing on bars and other spaces in which alcohol was consumed, the article traces the evolution of political practices and interactions enacted in bars, and connected to alcohol consumption, before, during and after the fascist seizure of power and the consolidation of its dictatorial rule. Drawing on a varied source base including osteria guidebooks, Blackshirt memoirs and a 'collage of miniatures' made up of police reports into political crimes of subversion, Ferris makes the case for understanding everyday spaces and practices as operating in mutually impactful interaction to shape modes of everyday political sociability and comportment, including those marked by violence and conflict.
Two articles examine facets of the lived experience of dictatorship in Francoist Spain. Claudio Hernández Burgos looks at the Francoist post-war period during the 1940s, exploring the diverse responses and attitudes of ordinary Spaniards to the situation of hunger and scarcity that the population experienced for several years. In this way, his article makes the case that understanding the 'years of hunger' (the immediate post-war period marked by extreme poverty and harsh political repression, during which it is estimated that between 150,000 and 200,000 people died from hunger and diseases caused by malnutrition) through the lens of everyday life allows us to better grasp the meaning of the particular practices through which ordinary Spaniards dealt with misery and the ways in which, within the circumstances, they sought to 'normalize' their everyday lives. In her article, Gloria Román Ruiz examines how acts of 'symbolic resistance' functioned during the desarrollismo years of the 1960s in ways that were at once highly individualized and more ideological than comparable dissenting episodes during the preceding 'years of hunger'. Forms of 'everyday expressions of irreverence' towards the dictatorial regime included 'speech acts' such as jokes and blasphemies, texts including pamphlets and graffiti, and material objects such as flags and caricatures, often focused on the figure of Franco. As Román Ruiz shows, these constituted ambiguous but significant episodes of 'symbolic resistance' that pointed out the 'cracks' in, and contributed to disrupting, the caudillo myth, all the while within a framework of repressive, punitive and centralized dictatorial power.
Daniel Melo's article analyses everyday life under the Portuguese Estado Novo. His study explores multiple aspects of societal experience and practice: popular culture, festivities, mechanisms of repression, the world of the press and literature or mass culture. The long time frame allows the author to evaluate the continuities and ruptures in Salazar's policies and discourses as well as in the daily lives of the population. Moreover, Melo pays particular attention to the construction of everyday nationalism, which, through formal and informal/banal instruments, contributed to disseminating the Estado Novo's ideology among wide sectors of the population.
A further two articles explore the lived experiences of dictatorship at the geographical edges of those regimes. In so doing, both demonstrate the heuristic value of examining everyday practices and experiences at societal 'margins', as Michel de Certeau put it, or, as Jon E. Fox alternatively framed it with respect to everyday or 'banal' nationalism, at the 'edges' of the entity under examination. 27 Exploring dictatorship at its 'margins' or 'edges', whether these be political, societal or spatial, brings the possibility of uncovering tropes, practices and world views that might more commonly lurk under the surface, 'taken-for-granted' and 'implicit', by examining them at times and places ('edges') when they are rendered (momentarily) explicit. Underpinning this is the premise that (not unlike microhistorians' notion of the 'exceptional typical') uncovering what is extraordinary, unusual and marginal has much to tell us also about what is ordinary, common and normative. In the case of Beatriz Valverde Contreras and Alexander Keese's article, the dictatorial edge they explore is that of Salazar's Portuguese regime (which, like Francoist Spain and Fascist Italy, was also a colonial regime), in this case as it was manifested in the mid-Atlantic Azorean islands. Through the examination of PIDE (Polícia Internacional e de Defesa do Estado; political police) files, the article outlines the 'room for manoeuvre' that could be eked out between the multiple sources of authority, including police agents, the civil governor and the American military base, operating in this peripheral zone of the Salazarian regime. The results of such manoeuvring allowed some to evade dictatorial repression. For Huw Halstead, the recollected experiences of Greek Cypriots in 1974, caught between the Greek dictatorship's coup d'état and the Turkish invasion, prove a fertile ground for exploring the possibilities for acting with agency amid and despite the chaotic delimitations of military mobilization, warfare, displacement and division, on what effectively constituted a temporal and geographical 'edge' of both regional powers' authority. As Halstead shows, whilst the Greek Cypriots he interviewed considered themselves powerless 'pawns', their testimonies reveal evidence of exercising individual choices, enacting countless 'micro-acts', of 'making do' and of mobilizing oneself in spite of the 'asymmetries of power' in which they remained caught.
Finally, two articles explore the lingering impact of dictatorship on everyday lives following the end of dictatorial rule, in periods of transition towards democracy, demonstrating how the after-effects of the regimes continued to reverberate and to shape 'ordinary' people's relationships with the state and with one another, often long after the official fall from political power of their rulers. Luke Gramith's article investigates how processes of 'defascistization' in Monfalcone, a town close to Italy's north-east border, were indelibly marked by residents' everyday lived experiences of fascism over the preceding twenty years and were understood not only as processes to dismantle the political hierarchies of the regime but also to dismember the structures and manifestations of 'everyday fascism' in their locality. In so doing, Monfalcone residents' understandings and practices of defascistization could be at odds with those of the Allied Military Government which filled the political vacuum left by the fascist regime from 1945, an incompatibility that had significant consequences. In their article exploring the 'Azure Generation' in Greece in the aftermath of the Colonels' dictatorship (1967-1974), Ursula-Helen Kassaveti and Nikolaos Papadogiannis examine everyday 'cultures of conservatism' through the reception and performance of political songs by young Greeks who identified as Liberals. Listening to and singing particular political songs, composed through top-down and bottom-up interaction, comprised key forms of 'symbolic practice' by members of the youth wing of the Nea Dimokratia (New Democracy) party, ONNED, through which they both reconfigured their understanding of recent Greek history and mobilized themselves in the complex political landscape of their post-dictatorship present.
Individually, each article draws on the peculiarities of its case and sets out the possibilities that an 'everyday life' history approach offers to historiographical debates that may be nationally bound, whether this is a way through and beyond the fractious disputes around consent for Italian fascism or a shift in focus that complements, but moves forward, the concentration of research efforts on questions of political violence in Francoist Spain. At the same time, there are marked similarities and nodes of connection between the articles, which are perhaps most visible in relation to their methodological approaches and the analytical and theoretical tools they deploy. Collectively, the issue points to the ways in which histories examining questions of subjectivities, experience, agency and practice allow us to challenge and complicate some of the binary oppositional categories through which the histories of European dictatorships have so often been viewed: those of coercion and consent; ideology/culture and reality; totalitarianism and authoritarianism; perpetrator and victim; fascist and anti-fascist; consensus and resistance.
A key thread linking all the articles (and practitioners of everyday life history more generally) is a focus on the 'microscopic': on individuals, small groups and communities, small-scale events, objects, practices, localities and neighbourhoods, and on what Alf Lüdtke called 'miniatures', the scholarly recreation and investigation of these small, individualized or localized situations, with the conviction that these have much to reveal about the complexity of dictatorial rule (as these regimes actually existed, as well as how they imagined and projected their rule). In this vein, all the articles in this special issue work to amplify our understanding of what might be considered 'political'. As Luke Gramith points out in his article, what the (in this case fascist) dictatorship effectively was, for many of those who lived through it, was the dictatorship as it presented itself through its policies, its individual representatives, its structures and its actions in localities and in everyday life.
In addition, the articles are held together by shared theoretical concerns and, often, common conceptual tools. Perhaps above all is the concern for understanding the functioning of agency in dictatorships, of individual human actors, exerted through their own comportment but also relationally, via familial, friendship or other interpersonal relationships. Claudio Hernández Burgos, Gloria Román Ruiz and Huw Halstead find value in Lüdtke's notion of Eigen-Sinn, often (imperfectly) translated as 'self-willed' action or 'stubborn wilfulness' to account for creative acts and modes of behaviours that allowed the actor to disrupt routines and (temporarily) evade the expectations or impositions of those in power; Lüdtke's example was that of factory workers engaging in jokes or pranks or taking advantage of machinery breakdowns to gain (albeit temporary) breathing space from mechanized production lines on the factory floor, which he mapped on to workers' 'patchwork' responses to the political intrusions of Nazism through similarly personalized acts of (temporary) self-assertion. Analogously, de Certeau's notion of 'tactics', already mentioned, operates in a similar fashion, to carve out the possibility of (partially) autonomous action, through opportunistic, flexible practices, whilst still continuing to inhabit and operate within asymmetrical power structures and spaces. Hernández Burgos, in his article, demonstrates how individuals made use of highly heterogeneous strategies, including those which could be labelled as Eigen-Sinn, to cope with the harsh living conditions caused by Francoist economic policies during the 1940s; Román Ruiz shows how mocking the dictatorship was not (only) implemented by political enemies of Francoism, but also by individuals who could be 'in' and 'outside' the regime at the same time. Finally, Halstead's article focuses on the multiple 'tactics' enacted by Greek Cypriot men and women during the Turkish invasion of the island. He pays particular attention to the ways in which people 'navigated' their everyday lives by performing micro-acts which allowed them to meander through official narratives and policies. Despite being conditioned by regime structures, individuals actively manoeuvred themselves to take care of their ordinary needs and concerns.
Along similar lines, Lüdtke's assertion that such opportunistic practices allowed individuals to carve out 'room for manoeuvre' between, around, and indeed within, the confines of Nazi German violence, policy and strictures finds a clear echo in Fascist Italy, Francoist Spain, Salazarian Portugal, and Cyprus under Greek dictatorial rule and Turkish invasion. Daniel Melo documents the roles played by cultural and recreational societies in providing space for the Portuguese to encounter and discuss alternative ways of conceiving and organizing society. In Italy, meanwhile, bar owners and their patrons deployed situative tactical modes of behaviour and agency to negotiate Blackshirted violence, strict licensing and public security laws, police surveillance and informants in order to maintain the long-standing political sociability function that bars and alcohol consumption facilitated, including political activity that was anti-fascist or somehow antithetical to the regime. Such manoeuvrings were not always successful (many bars known for anti-fascist political leanings were closed down, and anti-regime speech acts and practices enacted in bars and/or under the influence of alcohol, if detected, were very often punished as political crimes of 'subversion', including with sentences to confino [internal exile]), but the tenacity with which so many Italian bar owners and clients adapted to, and worked their way through, the regime's changing regulation of these everyday spaces and of the comportments conducted therein is striking.
Further, many of the articles identify practices that the authors label as 'symbolic' and even as 'symbolic resistance'. Again following the example of earlier studies, including those of Luisa Passerini, James C. Scott, and Lüdtke, the articles by Kassaveti and Papadogiannis and by Román Ruiz interrogate various forms of symbolic practice, including songs and other speech acts, to ask how and to what end these were deployed as individuals negotiated the dictatorial state. 28 In their article on post-dictatorial Greece, Kassaveti and Papadogiannis suggest that listening to, singing and performing political songs by moderately conservative young Greek Liberals (members of ONNED) constituted sets of ritualized, simultaneously symbolic and prosaic, practices that served to delineate and differentiate themselves both from left-wing youth and from other factional cohorts within the ONNED group. Román Ruiz, meanwhile, makes a compelling case for understanding the speech acts (insults, blasphemies, jokes, toasts, rumours and more) she studies in Francoist Spain from the 1960s as forms of 'symbolic resistance' which allowed those Spaniards who engaged in them to articulate 'minor' but no less meaningful 'expressions of dissidence'. As she shows, whilst the repressive violence with which the state punished 'daily acts of resistance' may have lessened by the 1960s, the risks those who engaged in forms of 'symbolic resistance' were willing to take increased. What's more, the impetus and shape of these acts changed. Less motivated, necessarily, by economic deprivation and desperation than they had been during the 'years of hunger' (1940s), transgressive acts of symbolic or 'elliptical protest' (to borrow Graham and Labanyi's term) 29 took on a more ideological character and were intended to be seen or heard by others, often taking place in public spaces including streets, squares and taverns.
Elsewhere, our authors present 'miniatures' elucidating practices that are more difficult to pin down and, certainly, to define as forms of 'symbolic resistance'. In Claudio Hernández Burgos's article, we encounter practices that do not fit neatly into moulds of 'resistance', 'opposition', 'support' or 'consent'. Spanish farmers' refusal to comply with government diktats on autarchy and, indeed, their participation in the black market cannot be read straightforwardly as 'resistance'. Motivating their actions were complex rationales and attitudes, including the possibility of increasing personal wealth and a perceived need to protect long-standing communitarian interests. Such positions resist neat categorization. Everyday life histories, moreover, typically concern people who neither held official positions nor enjoyed a particular status in past societies. As such (and as will be familiar to historians in many other fields, not least those seeking to recover the experiences and legacies of colonialism), everyday life historians deal with a necessarily fragmentary and sparse source base. In addition, our focus on dictatorship and the fostering of climates of violence, surveillance, restriction and censorship (including self-censorship) by the regimes under study adds further challenges to the collating of an archive. All the contributors to this collection have had to grapple with these issues as they explore dictatorships from both the 'bottom up' and the 'top down', at the meeting point between the intended dictatorship and the actually-existing dictatorship. All make recourse to multiple source types as a means of mitigating patchy, incomplete and sometimes problematic source material. Kassaveti and Papadogiannis, for example, construct an archive from newspaper reports, political pamphlets, interviews and autobiographies, as well as song lyrics. Gramith draws on a varied source base composed of multiple archival holdings, including the records of the Allied Military Government in post-war Italy, the Italian Communist Party, local government, local employers and the Cantieri Riuniti dell'Adriatico, as well as contemporary national and local publications.
'Official' sources (records housed in state archives produced by regime officials, by their ministries, quangos, agents and by the dictators themselves, as well as those produced by semi-official organizations, by commercial companies, by occupying forces, and so on) can be read 'along the grain' for what they tell us about the intended policies, ideas and strictures that dictatorial regimes and allied bodies expected to impose on people's everyday lives. They can also be read 'against the grain' for all that they reveal, albeit indirectly and in fragments, about subjective lived experience and the potential gaps, as well as overlaps, between the intention and the reality of both. Valverde Contreras and Keese, Román Ruiz, and Ferris all make use of police reports, which offer (filtered and mediated) access to accounts of political encounters and altercations, speech acts, anti-regime graffiti, jokes, songs and more. Such files often incorporate own-voice accounts of events as part of the process of reporting and investigating the 'crime', including witness statements, the accused's defence and, in some cases, even written materials and material objects kept as evidence. Hernández Burgos reads local government reports 'against the grain' and in conversation with oral history testimonies, to the enrichment of both source types. Contemporary published material, including newspapers, journals and guidebooks, is also in evidence here, often for what it exposes about political or societal prescriptions, but it also provides another source type for gleaning own-voice accounts indirectly, though it must always be used with an awareness of the censorial framework that conditioned its publication. Finally, we make recourse, of course, to the ego documents that testify directly (though inevitably also selectively) to 'ordinary' inhabitants' subjective experiences of dictatorial rule, and to their thoughts, acts, practices and beliefs, including those recorded contemporaneously with those experiences (diaries and letters, for example) and those recorded subsequently, very often following the regimes' end. These include memoirs as well as oral testimonies, the latter not only forming a key source for Halstead's examination of Cyprus under dictatorial rule and invasion but also underpinning his reflections on the processes through which historical actors narrated their lived experience of dictatorship and, in so doing, 'made sense' of these experiences.
Effect of a detached rib behind a backward-facing step on separated flow dynamics and heat transfer
Here, we present the experimental results on the influence of a detached rib behind a backward-facing step on the flow dynamics and heat transfer. The height of a slot between the lower edge of the rib and the channel wall was varied in the range dh/H = 0.7–1.3, and the distance between the rib and the step was varied within t/H = 0.2–3.2. The rib height was constant and equal to 0.3H, where H is the step height. The fields of static pressure were measured behind the point of flow separation at the step edge. The study was performed in the range of Reynolds numbers 14,200–42,500. The two-dimensional fields of velocities and their fluctuations were measured using the PIV method. Heat transfer was studied in the regime of constant heat flux on the channel surface where the step was located. The behavior of the pressure and velocity profiles, as well as of heat transfer, when varying the detached rib location relative to the step is shown.
Introduction
A separated turbulent flow around a backward-facing step has been studied in sufficient detail [1][2][3][4]. This is explained by the fact that organization of flow separation and its reattachment is one of the most common ways to intensify heat transfer. At the same time, a backward-facing step in heat exchangers can also have a negative effect: the hydraulic resistance increases and heat transfer in the stagnation zone directly behind the separation point deteriorates. These shortcomings can be eliminated by controlling the flow separation.
As is known, there are active and passive methods for controlling separated flows. Local blowing-suction at the step edge or on the entire surface, introduction of various periodic perturbations, etc., are used as the active methods. Despite their effectiveness for finely adjusting the size of the recirculation region and intensifying heat transfer behind the step, these methods are technically more difficult to implement in comparison with the passive methods. Additional intensifying elements (transverse ribs, teeth, vortex generators of various shapes), despite their significantly smaller dimensions as compared to the main separated flow, can cause significant restructuring of the flow, with either reduction or expansion of the circulation zone.
The effect of three-dimensional generators of longitudinal vortices is considered in [5][6][7]. Three-dimensional generators shift the coordinate of boundary layer reattachment towards the step bottom; the attachment line becomes uneven, approaching the step between the generators and shifting downstream immediately in the wake behind them. The region of maximum heat transfer approaches the step; heat transfer between the tabs becomes more intensive and decreases behind the tabs.
The effect of a two-dimensional obstacle on the flow dynamics and heat transfer behind the step is studied in [8][9][10][11][12][13]. A rib in front of the step forms its own separation area and, depending on the rib location relative to the step, the area behind the step either increases if there is no individual area, or decreases if an individual area has been formed. The heat transfer maximum is located in the attachment region.
In [14], along with other types of two-dimensional deflectors, the case of a detached rib behind a step was considered. The two-hump distribution of the Nu number given for this case relates to the displacement of the recirculation region.
The present study is an extension of [14] on the detached rib effect on flow dynamics and heat transfer.
Experimental setup and procedure
The experiments were carried out using an aerodynamic setup, which included a medium-pressure fan driven by an asynchronous motor, a frequency motor controller with a minimum adjustment frequency of 0.01 Hz, and an aerodynamic channel consisting of a prechamber, a nozzle, and a working channel. The working channel consisted of a 600-mm initial section with a cross-section of 20 × 150 mm, at the end of which there was a step 10 mm high, and a 400-mm section behind the step. The working channel was made of heat-insulating textolite 10 mm thick.
The heat flux on the wall behind the step was created using an electric current supplied to a thin titanium foil 50 μm thick. The foil was glued on a section of 150 × 400 mm. The surface temperature was measured by thermocouples built into the back wall along the entire length of the center line. The thermocouples were arranged unevenly, with a 2.5-mm spacing converging towards the step and a sparser 20-mm pitch towards the outlet. A flat rib, mounted normally to the flow over the entire channel cross-section, was used as the flow-diverting element. The height of the rib was 3 mm, and its thickness was 1 mm. In the study, we varied both the height of the slot between the wall and the rib, dh/H, from 0.7 to 1.3 and the distance from the step to the rib, t/H, from 0.2 to 3.2. The Reynolds number of the flow, ReD, calculated from the hydraulic diameter and the average velocity, was varied within 14,200–42,500.
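As a quick plausibility check on the quoted range, the hydraulic diameter of the 20 × 150 mm inlet section and the corresponding ReD can be evaluated with a short sketch; the mean velocities and the air viscosity below are assumed values, not figures from the paper.

```python
# Minimal sketch: hydraulic diameter and Reynolds number of the inlet section.
# The kinematic viscosity and the mean velocities are assumed, not from the paper.

def hydraulic_diameter(width_m: float, height_m: float) -> float:
    """D_h = 4A/P for a rectangular duct."""
    area = width_m * height_m
    perimeter = 2.0 * (width_m + height_m)
    return 4.0 * area / perimeter

NU_AIR = 1.5e-5  # kinematic viscosity of air at ~20 C, m^2/s (assumed)

d_h = hydraulic_diameter(0.150, 0.020)  # ~0.0353 m for the 20 x 150 mm channel
for u_mean in (6.0, 12.0, 18.0):        # assumed mean velocities, m/s
    print(f"U = {u_mean:4.1f} m/s -> Re_D = {u_mean * d_h / NU_AIR:,.0f}")
```

With these assumed velocities the sketch reproduces ReD values close to the reported 14,200–42,500 window.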
In the experiments, the distribution of pressure on the lower wall behind the step was measured in the model symmetry plane. To do this, holes were drilled along the channel with a variable pitch; the pitch was 5 mm near the step and it reached 40 mm towards the end. Based on the measured pressures, the pressure coefficient Cp = 2(pi − p0)/(ρU²) was calculated, where pi and p0 are the pressure on the wall at the point under study and the reference pressure, U is the velocity in the center of the initial channel, and ρ is the air density.
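The stated definition translates directly into code; a minimal sketch, assuming an air density of 1.2 kg/m³ and hypothetical tap readings:

```python
import numpy as np

def pressure_coefficient(p_wall, p_ref, u_centre, rho=1.2):
    """Cp = 2*(p_i - p_0) / (rho * U^2), following the definition in the text.

    p_wall   -- wall static pressures along the channel, Pa
    p_ref    -- reference pressure p_0, Pa
    u_centre -- velocity in the centre of the initial channel, m/s
    rho      -- air density, kg/m^3 (assumed value)
    """
    return 2.0 * (np.asarray(p_wall, dtype=float) - p_ref) / (rho * u_centre**2)

# Hypothetical tap readings, just to show the call:
print(pressure_coefficient([99990.0, 99975.0, 100020.0],
                           p_ref=100000.0, u_centre=12.0))
```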
Flow dynamics
The fields of average and fluctuating velocities were studied by the PIV method for slot height dh/H = 1 with the rib set at distance t/H = 0.2 from the step. Streamlines were plotted using the fields of average velocities. The flow pattern qualitatively repeats the pattern modeled numerically by the authors of [14] for the same conditions. Two vortices are formed behind the rib, and the more intense one is located near the lower wall. The second vortex, induced by the mixing layer, is in the upper part of the rib. The flow deflected by the rib reattaches at a distance of 0.5 calibers, then the boundary layer separates at a distance of 2H, a recirculation region 5.5 calibers long appears, and the mixing layer reattaches at a distance of 7.5 calibers. Strong turbulence is observed above the upper part of the rib, while in the lower part the unsteady character of the flow is manifested to a lesser extent due to the lower level of velocities.
When the flow goes around a step without a rib, the greatest rarefaction is achieved near the step at a distance of 2 calibers, and the pressure is restored at 20H. If the rib is mounted at t = 0.2H, the greatest rarefaction is achieved at a distance of 1.5H with Cp = 0.35; at this point the flow deflected by the rib attaches. For the cases with the rib set at t/H from 1.2 to 3.2, an increase in pressure at the point of attachment of the flow deflected by the rib is characteristic, and maximal heat transfer is observed there. Then there is a slight decrease in pressure, and after about one caliber the pressure starts to recover. For the configuration with t = 1.2H and dh = 1.3H, the largest increase in Cp is achieved at a distance of 4H, which corresponds to the coordinate of the main flow merging with the flow deflected to the wall behind the step. When the rib is mounted downstream of the step at a distance of 1.2H, the pressure rises at a distance equal to one caliber from the step bottom. For distances t = 4.5H, the character of the curve becomes similar to the distribution of pressure without a rib.
Heat transfer
With a developed turbulent flow in the channel, a separation region is formed behind the sudden expansion.
In the recirculation zone, heat transfer decreases, and the heat transfer maximum is located near the flow reattachment area. In our case, for Re = 14,200, the heat transfer maximum is located at a distance of 5.8 calibers from the separation point. Such a conclusion can be drawn from the analysis of the data shown in Fig. 4. When a rib is installed, the flow structure and, accordingly, the heat transfer change dramatically. Thus, for a rib installed at a distance of 0.2 calibers from the step and at dh/H = 1, two maxima are formed in the heat transfer profile along the channel length. The first maximum is localized near the backward-facing step; the second maximum is at a distance of 8.25 calibers. At the first maximum, heat transfer increases by a factor of 2.2 as compared to the case without a rib; the second maximum is comparable to the case of the flow around a step without perturbation. If the rib is displaced downstream to distance t = 1.2H, the heat transfer maximum shifts to a distance of 1.75H, and the second heat transfer maximum is not observed. With further displacement of the rib by one more caliber from the step, the maximum also shifts downstream. Thus, when the obstacle is located at distance 2.2H, the maximum is shifted to 2.75H, and when the obstacle is located at 3.2H, the maximum is shifted to 3.75H. The value of the heat transfer maximum decreases with the distance between the rib and the step. It can be seen in the profiles of heat transfer distribution that when the rib is installed, the flow structure changes starting from 1.2 calibers. The heat transfer curves after the heat transfer maximum fall much faster as compared to the distribution without a rib. The effect of the Re number on the heat transfer maximum is shown in Fig. 5 for three cases (a step without a rib, and a rib with slot height 1H at a distance from the step of 2.2 and 3.2 calibers). With an increase in the Re number, the effect of a detached rib on the heat transfer maximum becomes stronger than in the case of the rib absence.
The effect of the height of the slot dh between the rib and the step wall at a fixed distance from the rib to the step was also studied. Judging by the character of the heat transfer curves, all cases except the configuration t = 0.2H with the rib at the step height have a similar flow and a one-humped heat transfer distribution with a sharper drop in the heat transfer intensity in the region after the maximum. In these cases, the heat transfer maxima are localized at a distance of 2 calibers. In the case when the rib is below the step height, a two-hump distribution of heat transfer is observed, while, starting from the 3rd caliber, the results quantitatively repeat the heat transfer distribution for the step in the absence of control. Such a distribution indicates that a part of the mixing region deflected by the rib hits the lower wall near the step; this influence extends to about 3 calibers; at the same time, reattachment of the separated flow is somewhat shifted downstream.
The results of changing the average heat transfer are shown in Fig. 6. Considering the rib mounted at a height of 10 mm, we see that the highest NuI value is achieved when the rib is installed at distance 0.2H; at a distance of 35 calibers this exceeds the case in the absence of a rib by 16%. As the distance between the rib and the step increases, heat transfer decreases. Among the NuI profiles, one can choose the configuration with the most intense heat transfer in the vicinity of the step. The shortest distance L (the effective length of heat transfer intensification L is the distance from the step to the coordinate of maximum NuI) is observed at t = 0.2H. In this case, the value of L is reduced by 85% as compared to the step without a rib, when L = 9.75H. At a fixed rib-to-step distance of 1.2H, for the ribs located above the step, L becomes 2.75H and 3.75H at a close value of NuI.
In the case when the slot height reaches 0.7H, a two-hump distribution is observed: one peak is located at distance 2H, and the second one coincides with the value in the rib absence. The coefficient of hydraulic resistance f was determined by the formula f = 2(Pn − P0)/(ρU0²), where P0 is the total pressure in the reference cross-section (4 calibers before the step), Pn is the total pressure behind the step, determined in the area of static pressure recovery 20 calibers behind the step, U0 is the average flow velocity in the reference cross-section, and ρ is the air density. These data are shown in Fig. 7. The highest hydraulic resistance (95% more than in the unregulated case) is achieved for the case with dh = 1.0H (10 mm) at distance 0.2H. Raising the rib to height dh = 1.3H at t = 1.2H increases the resistance by a factor of 2 as compared to the classical case. The least resistance is achieved at height 0.7H.
The NuIf number was averaged over a distance of 20 calibers; this is the distance where the pressure recovery occurs (Fig. 8). With the rib at the step height, the largest NuIf is achieved near the step and becomes 19% larger than that for a smooth step. When varying the slot height at t = 1.2H, the best result is achieved when the rib is installed at the step height. To assess the thermal-hydraulic efficiency, the complex η = (NuIf/NuIf0)/(f/f0)^(1/3) is usually used, which, in contrast to the Reynolds analogy factor, allows consideration of the power spent on coolant pumping. The highest efficiency, as shown in Fig. 9, is achieved at the narrow slot dh = 0.7H, t = 1.2H and is 1.02; the lowest efficiency, achieved at the largest slot dh = 1.3H, t = 1.2H, is 0.84.
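The figure of merit η is straightforward to evaluate once the averaged Nusselt numbers and friction coefficients are known; a minimal sketch with hypothetical ratios (not data from the paper):

```python
def thermo_hydraulic_efficiency(nu_avg, nu_avg_0, f, f_0):
    """eta = (NuIf/NuIf0) / (f/f0)**(1/3), the criterion used in the text;
    values above 1 mean the heat-transfer gain outweighs the pumping-power cost."""
    return (nu_avg / nu_avg_0) / (f / f_0) ** (1.0 / 3.0)

# Hypothetical ratios, for illustration only:
print(thermo_hydraulic_efficiency(1.19, 1.0, 1.60, 1.0))  # ~1.02
```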
Conclusions
The effect of a detached rib on the flow dynamics and heat transfer behind a backward-facing step was experimentally studied in the case of a turbulent flow around the step with closed boundary layers and Re = 14,200, calculated from the hydraulic diameter of the inlet cross-section and the average flow velocity.
The PIV method was used to study the effect of a detached rib installed at a distance of 0.2 calibers from the step at the channel height. It was revealed that two vortices are formed behind the rib; the flow attaches at a distance of 0.5 calibers, then the flow detaches with further reattachment at a distance of 8 calibers.
The rib at a distance of 0.2 calibers had the greatest influence on the pressure distribution; the rib recessed under the step (t = 1.2H, dh = 0.7H) had the least influence and deviated slightly from the classical case near the step.
A two-hump distribution of heat transfer was observed for the rib at t = 0.2H with dh = 1H and at t = 1.2H with dh = 0.7H. In the first case, the first peak is not observed because it goes beyond the measurement region; the second peak corresponds to reattachment.
The research was financially supported by the Russian Science Foundation (grant No. 21-19-00162).
Fig. 5. The effect of ReD on Numax.

For technical applications, it is important to determine the average heat transfer. The average Nusselt number NuI is determined by integrating the local values from the step bottom to the considered point: NuI = (1/X) ∫0^X Nu dx.
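Numerically, this running average is just a cumulative trapezoidal integration of the measured local-Nu profile; a minimal sketch with an invented illustrative profile (not measured data):

```python
import numpy as np

def average_nusselt(x, nu_local):
    """Running average NuI(X) = (1/X) * integral_0^X Nu(x) dx,
    evaluated with the trapezoidal rule on a discrete profile."""
    x = np.asarray(x, dtype=float)
    nu = np.asarray(nu_local, dtype=float)
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (nu[1:] + nu[:-1]) * np.diff(x))))
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(x > 0, integral / x, nu[0])

# Illustrative local-Nu profile over 0..20 step heights (invented shape):
x = np.linspace(0.0, 20.0, 9)
nu_local = 40.0 + 25.0 * np.exp(-((x - 6.0) / 3.0) ** 2)
print(average_nusselt(x, nu_local).round(1))
```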
Fig. 8. The effect of slot height dh or distance to the step t on the averaged Nu number along the section 20H from the step for Re = 14,200.

Fig. 9. Thermal-hydraulic efficiency η.
PRKCB is relevant to prognosis of lung adenocarcinoma through methylation and immune infiltration
Abstract Background Lung adenocarcinoma (LUAD) is one of the tumor-related diseases with high morbidity worldwide. Epigenetic modifications such as DNA methylation changes may be involved in tumorigenesis. This study aimed to explore new biomarkers that have prognostic significance in LUAD. Methods First, we downloaded the gene expression and methylation data sets from Gene Expression Omnibus. R software was then used to identify abnormally methylated differentially expressed genes (MDEGs). Next, the R package clusterProfiler was used to analyze the enrichment and pathways of the MDEGs. Analysis using STRING revealed the protein-protein interaction network. The result was then visualized by Cytoscape, and 10 hub genes were obtained. Afterward, they were further verified against The Cancer Genome Atlas to select candidate genes. Moreover, quantitative real-time polymerase chain reaction (qRT-PCR) and immunohistochemistry were used to verify the expression and prognostic value of the candidate genes in LUAD patients. Results The results showed that the expressions of ADCY5 and PRKCB are indeed related to LUAD. The clinical relevance of PRKCB was confirmed by its clinical correlation analysis. Gene set enrichment analysis (GSEA) and Tumor Immune Estimation Resource (TIMER) tumor immune correlations showed that PRKCB is involved in cancer-related Kyoto Encyclopedia of Genes and Genomes pathways and in immune infiltration. It was also verified by qRT-PCR and immunohistochemistry that PRKCB was lowly expressed in LUAD patients and correlated with prognosis. Conclusions PRKCB is relevant to the prognosis of LUAD through methylation and immune infiltration.
INTRODUCTION
Lung cancer can be divided into small-cell lung cancer (SCLC, about 15%) and non-small-cell lung cancer (NSCLC, about 85%) according to histopathology. Lung adenocarcinoma (LUAD) is one of the histological subtypes of NSCLC, and its incidence has increased significantly in recent years. 1 LUAD is usually diagnosed in advanced stages. However, if diagnosed early, the survival of LUAD patients can be greatly extended. Therefore, to reduce the mortality of LUAD, effective early identification methods and related biomarkers are urgently needed. Currently, in addition to low-dose tomography, widely used in lung cancer screening and postoperative monitoring, potential biomarkers such as autoantibodies, complement fragments, microRNAs, circulating tumor DNA, DNA methylation, and blood protein profiles have attracted widespread attention. 3 DNA methylation is a form of epigenetic modification that can change genetic performance without changing the DNA sequence. This is one of the current research hotspots in tumor and molecular biology. 4 Recent studies have shown that changes in the methylation pattern of tumor cells can be divided into two types: hypomethylation of oncogenes leads to their activation, and increased levels of DNA methylation in specific regions cause inactivation of tumor suppressor genes. 5 Because DNA methylation often occurs in lung cancer, 6 we sought to discover new DNA methylation biomarkers in LUAD patients, which may become prognostic factors for LUAD patients.
With the advantage of big data networks, convenient and public databases such as Gene Expression Omnibus (GEO) 7 and The Cancer Genome Atlas (TCGA), 8 which contain gene expression levels and methylation characteristics of various tumors and normal samples, make it possible to select the most detectable biomarkers from a large number of potential markers.
In this study, we sought to identify genes that are abnormally methylated in LUAD through systematic bioinformatics analysis, which allows a more accurate analysis of huge biological and genomic datasets. To increase the persuasive power of this study, we verified by quantitative real-time polymerase chain reaction (qRT-PCR) that PRKCB was indeed downregulated in LUAD tissues. At the same time, we further verified by immunohistochemistry (IHC) that LUAD patients with low PRKCB expression levels had worse overall survival (OS). In short, PRKCB may act on LUAD patients through methylation and immune infiltration.
Identification of methylated differentially expressed genes (MDEGs) and functional analysis
The raw data of GSE118370 were preprocessed and normalized using the affy package under the R environment (https://www.r-project.org/). After pretreatment, we used the limma package to identify genes differentially expressed in LUAD tissues with |logFC| > 1 and adjusted p value < 0.05. Meanwhile, the data of GSE139032, which relate to the methylation expression level of genes, were first standardized and normalized in the R environment using the wateRmelon package. Next, we took β value > 0.2 and adjusted p value < 0.05 as the standard of abnormal methylation. Finally, we used an online Venn diagram to cross-contrast the DEGs with the abnormally methylated genes to obtain the MDEGs. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analyses of the MDEGs were performed with the R package clusterProfiler based on the org.Hs.eg.db database. [9][10][11][12] The GO and KEGG analysis results were visualized using the enrichplot and GOplot packages. 13 GO analysis includes biological processes (BP), molecular functions (MF), and cellular components (CC).
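The original pipeline runs in R (affy, limma, wateRmelon); purely to illustrate the thresholding and intersection logic, here is a small Python sketch over hypothetical result tables (gene names and numbers are invented):

```python
import pandas as pd

# Hypothetical tables standing in for limma / wateRmelon output.
deg = pd.DataFrame({"gene": ["PRKCB", "ADCY5", "TP53", "SOX17"],
                    "logFC": [-1.8, -1.4, 0.3, -2.1],
                    "adj_p": [0.001, 0.02, 0.4, 0.003]})
dmg = pd.DataFrame({"gene": ["PRKCB", "ADCY5", "EGFR", "SOX17"],
                    "beta_diff": [0.35, 0.28, 0.10, 0.40],
                    "adj_p": [0.01, 0.03, 0.2, 0.004]})

# Apply the thresholds stated above: |logFC| > 1, beta > 0.2, adj. p < 0.05.
degs = set(deg.loc[(deg.logFC.abs() > 1) & (deg.adj_p < 0.05), "gene"])
dmgs = set(dmg.loc[(dmg.beta_diff > 0.2) & (dmg.adj_p < 0.05), "gene"])

mdegs = degs & dmgs  # the overlap shown in the Venn diagram
print(sorted(mdegs))  # -> ['ADCY5', 'PRKCB', 'SOX17']
```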
Construction of protein-protein interaction (PPI) network
Potential relationships between MDEGs were identified by the online STRING (https://www.string-db.org/), which is a database that uses bioinformatics to predict the PPI network. 14 Next, Cytoscape (https://cytoscape.org) was used to visualize the PPI and further find hub genes. 15 The top 10 hub genes were then identified using the cytoHubba plugin and the Maximal Clique Centrality method. In addition, core modules of the PPI network with degree cutoff = 2, node score cutoff = 0.2, and K-core = 2 were selected by the plug-in Molecular Complex Detection (MCODE) in Cytoscape software.
Expression and methylation levels of hub genes in TCGA
The Gene Expression Profiling Interactive Analysis (GEPIA) tool (http://gepia.cancer-pku.cn), combined with the TCGA database, was used to further confirm the expression levels of the hub genes between LUAD and normal tissues. 16 The online University of Alabama at Birmingham Cancer data analysis portal (UALCAN) (http://ualcan.path.uab.edu) was also used to confirm the methylation levels of the hub genes between LUAD and normal tissues, combined with the TCGA database. 17 In addition, cBioPortal for Cancer Genomics (https://www.cbioportal.org) was used to further analyze the correlation between the expression and methylation of the hub genes.
Survival and prognosis analysis
GEPIA was used to evaluate the relationship between the expression of candidate genes in LUAD and survival rate. With the help of the TCGA database, OS of LUAD patients can be assessed based on the different expression levels of each gene.
Analysis of clinical relevance of candidate genes in TCGA
All gene expression data (594 cases) and the corresponding clinical information for LUAD can be downloaded from the TCGA official website. Samples with incomplete clinical information were excluded when investigating clinical relevance.
Gene set enrichment analysis of PRKCB
Genes were classified by gene set enrichment analysis (GSEA) according to the degree of differential expression in the two types of samples, and it was then checked whether the top or bottom of the ranked list was enriched with the preset gene set. 18 GSEA first generated a table of all genes ranked according to the correlation between their expression and that of PRKCB, and then divided the samples into a high expression group (PRKCB-h) and a low expression group (PRKCB-l) according to the median PRKCB expression. At last, GSEA was performed to clarify the significant differences between PRKCB-h and PRKCB-l. Each analysis was performed 1000 times. The nominal p value and normalized enrichment score (NES) were used to rank each phenotype enrichment pathway.
Correlation analysis of PRKCB and immune cell infiltration
The Tumor Immune Estimation Resource (TIMER) (https://cistrome.shinyapps.io/timer) is a free online website based on the TCGA database that uses statistical methods to detect the infiltration of immune cells in tumor tissues and its impact on the prognosis of patients. 19 The immune cells in this study included CD4+ T cells, CD8+ T cells, B cells, macrophages, neutrophils, and dendritic cells.
Clinical samples collection
We collected 20 tumor tissues (tumor) and adjacent normal tissues (normal) from LUAD patients, which were taken from the Affiliated Hospital of Nantong University and stored at −80 °C for subsequent RNA extraction. In addition, we retrospectively studied a tissue microarray (TMA) of 60 tissues of LUAD patients who underwent surgical treatment at the Affiliated Hospital of Nantong University from January 2010 to June 2017. We extracted the clinical characteristics of the LUAD patients from the medical records, including age, gender, smoking history, differentiation, and pathological TNM stage. This experimental protocol was approved by the Ethics Committee of the Affiliated Hospital of Nantong University.
RNA extraction and qRT-PCR
We selected 20 pairs of LUAD tumor tissues and paired normal lung tissues clinically, extracted total RNA from them with TRIzol reagent (Life Technologies) and reverse-transcribed it to complementary DNA (cDNA) using the PrimeScript RT reagent kit (Takara). Finally, qRT-PCR was used to analyze the expression levels of the candidate genes in the tissues. We set the reaction conditions as follows: incubation at 95 °C for 2 minutes, followed by 45 cycles at 95 °C for 5 seconds and 60 °C for 30 seconds. The analysis software (Eppendorf) displayed the cycle threshold of each reaction. The GAPDH gene served as an internal control. The primers of PRKCB were as follows: forward 5′-CGTCCTCATTGTCCTCGTAA-3′ and reverse 5′-TGTCTCATTCCACTCAGGGTT-3′.
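The paper does not spell out the quantification formula; assuming the standard 2^-ΔΔCt method with GAPDH as the reference gene, the relative expression can be computed as in this sketch (the Ct values are hypothetical):

```python
def relative_expression(ct_target_tumor: float, ct_gapdh_tumor: float,
                        ct_target_normal: float, ct_gapdh_normal: float) -> float:
    """Fold change by the 2^-ddCt method (assumed here; not stated in the paper).
    Values below 1 indicate lower target expression in tumor than in normal tissue."""
    d_ct_tumor = ct_target_tumor - ct_gapdh_tumor
    d_ct_normal = ct_target_normal - ct_gapdh_normal
    return 2.0 ** (-(d_ct_tumor - d_ct_normal))

# Hypothetical Ct values for one tumor/normal pair:
print(relative_expression(28.4, 18.1, 26.0, 18.0))  # ~0.2 -> PRKCB lower in tumor
```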
IHC
IHC method was used to detect the expression level of PRKCB in paraffin-embedded LUAD specimens. The samples were first incubated with rabbit anti-PRKCB antibody (1:200), and then incubated with goat anti-rabbit secondary antibody (1:500) for secondary staining. Finally, a microscope (Leica DMR 3000; Leica Microsystem) was used to capture images of each slice at a magnification of 200-fold. PRKCB (brown)-positive staining is mainly located in the cytoplasm. It was scored according to staining intensity and percentage of PRKCB-positive tumor cells. The median was used as the cutoff value for high or low PRKCB expression.
Statistical analysis
The Wilcoxon signed-rank test and logistic regression were used to analyze the relationship between clinicopathological characteristics and the candidate genes. Cox regression was used to assess the clinicopathological features associated with overall survival in TCGA patients. Multivariate Cox analysis was used to compare the effect of candidate gene expression on survival and other clinical characteristics. The hazard ratio referred to the risk of death in LUAD patients as the value of each risk factor increased. The statistical significance of qRT-PCR results was determined using Student's t-test. Relationships between PRKCB expression and clinicopathological characteristics were evaluated using the χ² test or Fisher's exact test. The Kaplan-Meier method was used to construct OS curves, and the log-rank test was used to analyze the difference between the curves.

Figure 4. The expression of the 10 hub genes in the GEPIA database (left column reflects tumor data and right column reflects normal data, *p < 0.05).
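The survival comparison described here (median split into high/low expression groups, Kaplan-Meier curves, log-rank test) can be sketched in Python with the lifelines package; the authors do not name their software for this step, so this is only an illustration:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def compare_survival(time, event, expression):
    """Kaplan-Meier curves and log-rank p-value for a median split on expression.
    time: follow-up in months; event: 1 = death, 0 = censored."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    high = np.asarray(expression, float) >= np.median(expression)

    km_high, km_low = KaplanMeierFitter(), KaplanMeierFitter()
    km_high.fit(time[high], event[high], label="PRKCB high")
    km_low.fit(time[~high], event[~high], label="PRKCB low")

    result = logrank_test(time[high], time[~high],
                          event_observed_A=event[high],
                          event_observed_B=event[~high])
    return km_high, km_low, result.p_value
```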
Identification of MDEGs
To find differentially expressed genes in LUAD, we first downloaded GSE118370, which contains gene expression datasets from LUAD tissues and paired normal lung tissues, from GEO. We identified 2085 significant DEGs (301 upregulated, 1784 downregulated) (Figure 1(a),(b)) in LUAD from GSE118370. At the same time, the GSE139032 dataset was processed for the methylation data of LUAD to further obtain 780 differentially methylated genes (Figure 1(c)) in LUAD. Next, the genes screened from the two gene sets described above were jointly imported into a Venn diagram (Figure 1(d)). This resulted in 124 overlapping MDEGs.
GO and KEGG analysis of MDEGs
Next, we sought to find the common biological functions of these 124 MDEGs. Therefore, we used the R package clusterProfiler for GO and KEGG analysis. As we can see from Figure 2(a), the MDEGs were mainly enriched in terms related to the regulation of calcium ion homeostasis, transcription activator activity, and DNA-binding transcription factor binding. In addition, KEGG pathway analysis showed that the 124 MDEGs were concentrated in the calcium signaling pathway, circadian entrainment, salivary secretion, parathyroid hormone synthesis, secretion and action, and long-term depression (Figure 2(b)).
Construction of PPI network
To find key genes, we used the online STRING platform to identify potential protein interactions between these MDEGs. The resulting PPI network graph contains 123 nodes and 97 edges. We used Cytoscape software to visualize the PPI network (Figure 3(a)). Through the Cytoscape plugin, we identified the top 10 hub genes: RYR2, ADCY4, ADCY5, DRD5, PRKCB, NMUR2, ADRA2C, DRD4, SOX17, and FGF2. Meanwhile, the entire network was analyzed using Cytoscape's MCODE plugin, which identified three subnetworks (Figure 3(b)-(e)).
Expression and prognostic value in TCGA database
To further verify the expression levels of the candidate genes, we performed differential expression analysis using the GEPIA online tool. It can be seen from the box plots in Figure 4 that the expression of ADCY4 (fold change = 0.36), ADCY5 (fold change = 0.36), ADRA2C (fold change = 0.28), FGF2 (fold change = 0.24), PRKCB (fold change = 0.62), RYR2 (fold change = 0.19), and SOX17 (fold change = 0.30) in LUAD was lower than that in normal lung tissues. That is, these seven candidate genes may be tumor suppressors in LUAD. There was no statistical difference in the expression level of the remaining three genes between the tumors and normal tissues. Hypermethylation in the DNA promoter region is an important regulatory mechanism in tumorigenesis, which is widely present in a variety of tumor suppressor genes. Therefore, we speculated that the downregulation of the expression of these seven genes in LUAD may be related to DNA promoter hypermethylation. We used the UALCAN to confirm that the methylation levels of the candidate gene promoter regions, including one with fold change = 1.78 (p < 1E-12) and SOX17 (fold change = 3.07, p < 1E-12), in LUAD tissues were significantly higher than those in normal tissues (Figure 5). The methylation levels of ADCY4, ADCY5, FGF2, PRKCB, and SOX17 were negatively correlated with transcriptional expression (Figure 6). Next, we used the TCGA database to study the relationship between candidate gene expression levels and clinical characteristics. We performed univariate analysis of the clinicopathological characteristics and the five candidate genes to further screen prognostic genes (Figure 7(a)). Meanwhile, multivariate Cox analysis indicated that FGF2 and PRKCB were independent prognostic factors (Figure 7(b),(c)). Figure 7(e) showed that the median survival time of patients with low expression of PRKCB was about 40 months, whereas the median survival time of patients with high expression of PRKCB was about 55 months. Therefore, we can conclude that LUAD patients with highly expressed PRKCB had better survival results (log rank p = 0.0014). Meanwhile, highly expressed FGF2 did not show a survival advantage (log rank p = 0.74) (Figure 7(d)). At the same time, we can see from Figure 8(a) that the expression of PRKCB in stage I was higher than that in stage II and stage III. The expression of PRKCB in T1 was significantly higher than that in T2 and T3 (Figure 8(b)). The expression of PRKCB in N2 was lower than that in N0 (Figure 8(c)). Although the median expression of PRKCB in M1 was lower than that in M0, it had no statistical significance (p > 0.05, Figure 8(d)). Therefore, we can preliminarily conclude that the expression of PRKCB is lower in the advanced stages than in the early stages.
Identification of PRKCB-Related signal paths with GSEA
To explore the potential mechanism of PRKCB in LUAD, KEGG pathways were analyzed by the GSEA method. As shown in Figure 9, the genes related to high expression of PRKCB were concentrated in NSCLC, pathways in cancer, B cell receptor signaling, T cell receptor signaling, the VEGF signaling pathway, and so on. In contrast, the gene sets related to low PRKCB expression were enriched in Huntington's disease, oxidative phosphorylation, purine metabolism, pyrimidine metabolism, and base excision repair. Taken together, these results suggested that PRKCB is indeed involved in cancer-related KEGG pathways.
Correlation between PRKCB and tumor infiltrating immune cells
It can be seen from the GSEA that PRKCB expression was related to immune cell receptor signaling, so we used TIMER software to analyze the relationship between PRKCB expression and tumor infiltrating immune cells (Figure 10(a)). PRKCB expression had significant correlations with CD4+ T cells (cor = 0.582), CD8+ T cells (cor = 0.478), B cells (cor = 0.6), macrophages (cor = 0.42), neutrophils (cor = 0.605), and dendritic cells (cor = 0.657). Among them, the expression of PRKCB correlated most strongly with neutrophils and dendritic cells. In addition, the expression of PRKCB was combined with the expression of each immune cell type to analyze its influence on the prognosis of LUAD patients. We found that LUAD patients with high expression of PRKCB combined with high expression of B cells had a better prognosis. Meanwhile, LUAD patients with high expression of
PRKCB was less expressed in LUAD tumor tissues
We used qRT-PCR to detect the messenger RNA (mRNA) expression of PRKCB in 20 pairs of LUAD tumors and adjacent normal lung tissues to confirm the role of PRKCB in LUAD. As shown in Figure 11(a), the expression of PRKCB in tumor tissues was significantly lower than that in normal lung tissues (p < 0.05). This result was consistent with the TCGA database (Figure 4).
In LUAD patients, low PRKCB expression was associated with poor clinical prognoses
We conducted an immunohistochemical study on the tumor specimens of 60 patients with LUAD. Figure 11(b) and (c) show representative images of PRKCB-positive and PRKCB-negative staining, respectively. According to the median IHC score, the specimens can be divided into a high PRKCB group (25 cases) and a low PRKCB group (35 cases). We next evaluated the relationship between PRKCB IHC scores and clinical characteristics in the LUAD patient specimens. The expression of PRKCB had a significant correlation with age and differentiation, but no obvious correlation with gender, smoking, or pathological TNM stage (Table 1). Survival analysis showed that patients with low expression of PRKCB in the TMA cohort had a lower overall survival rate (Figure 11(d)). This result was consistent with the TCGA database (Figure 7). Univariate and multivariate Cox analyses shown in Table 2 indicated that the PRKCB gene expression level is an independent protective factor for LUAD patients.
DISCUSSION
With the impact of air pollution and smoking, the incidence and mortality of lung cancer continue to rise. LUAD is the most common histological subtype of NSCLC. It is often diagnosed at an advanced stage because of the absence of obvious symptoms. In recent years, epigenetic modification has received increasing attention, especially in cancer-related research. DNA methylation, one of the epigenetic modifications, controls cell proliferation, differentiation, and apoptosis in eukaryotes, and directly or indirectly controls tumorigenesis. 20 With the continuous study of gene promoter methylation, we not only have a new understanding of the mechanism of tumorigenesis, but have also identified useful biomarkers through changes in gene DNA methylation, providing new methods for the diagnosis and treatment of diseases. 21 Therefore, an in-depth understanding of tumor suppressor genes associated with LUAD will be of great value for the early clinical diagnosis and treatment of this disease.
In this study, we freely obtained a LUAD gene expression profile (GSE118370) and a LUAD methylation expression profile (GSE139032) from GEO. R software packages were used to analyze LUAD tissue and normal samples. This study aimed to identify potential biomarkers related to aberrant methylation in LUAD to contribute to the early diagnosis and treatment of LUAD patients. In these two datasets, 124 overlapping genes were found, that is, 124 aberrantly methylated genes. To further analyze the overlapping genes, we used the R package clusterProfiler for functional and pathway analysis. The results of GO analysis indicate that these genes are associated with the regulation of calcium ion homeostasis and transcription activator activity. Meanwhile, KEGG pathway analysis indicated that the overlapping genes are mainly concentrated in the calcium signaling pathway. We constructed a PPI network with 123 nodes and 97 edges, and selected the top 10 genes as the central genes using cytoHubba, including RYR2, ADCY4, ADCY5, DRD5, PRKCB, NMUR2, ADRA2C, DRD4, SOX17, and FGF2. We used the TCGA database for further verification. As a result, ADCY4, ADCY5, FGF2, PRKCB, and SOX17 were obtained. They are all hypermethylated and underexpressed in LUAD. Next, we used TCGA database analysis to show that PRKCB and FGF2 are associated with the clinical prognosis of LUAD and are independent prognostic factors. Moreover, the effect of PRKCB and FGF2 on the OS rate of LUAD was analyzed by GEPIA. We can conclude that the low expression of PRKCB in LUAD patients indicates a poor prognosis. Next, we further studied the GSEA pathways of PRKCB in LUAD through the TCGA database. The results show that PRKCB is indeed involved in cancer-related pathways and immune cell receptor signaling pathways. TIMER verified that PRKCB is associated with immune cell infiltration in LUAD. In addition, different PRKCB gene expression levels combined with different immune cell contents have an impact on the prognosis of LUAD patients. We speculate that the expression of PRKCB may affect OS by regulating the degree of immune cell infiltration in LUAD. This suggests that PRKCB may play a role in LUAD through methylation and immune cell infiltration.
PRKCB is a member of the protein kinase C (PRKC) family, which is composed of several serine/threonine kinases and can be activated by calcium and the second messenger diacylglycerol. 22 It can be concluded from previous studies that PRKCB plays multiple roles in cell life and survival, especially in regulating cell survival and apoptosis. 23 Studies have found that PRKCB promoters are hypermethylated in a variety of adenocarcinomas. 24 At the same time, research indicates that PRKCB may regulate its expression in NSCLC through the Wnt signaling pathway. 25 In summary, based on comprehensive data processing and analysis, we found that PRKCB is highly methylated and lowly expressed in LUAD and is associated with immune cell infiltration. At the same time, the survival prognosis of LUAD patients with high PRKCB expression is better. Subsequent experiments such as qRT-PCR and IHC have also preliminarily verified this conclusion. Therefore, PRKCB is relevant to the prognosis of LUAD through methylation and immune infiltration. Moreover, because gene methylation modifications usually occur in the early stage of cancers, the methylation change of the PRKCB gene may occur in the early stage of LUAD, which may have certain value for the early diagnosis of LUAD patients. 26 Of course, the current research is still limited, and further research is needed. The expression of PRKCB is accompanied by a large number of infiltrating immune cells, which means that PRKCB may play an important role in the tumor microenvironment of LUAD by regulating the tumor infiltration of immune cells. 27 This provides new ideas for future antitumor immunotherapy and anti-drug resistance in LUAD.
Energy Harvesting Technologies and Equivalent Electronic Structural Models-Review
As worldwide awareness about global climate change spreads, green electronics are becoming increasingly popular as an alternative to diminish pollution. Thus, nowadays energy efficiency is a paramount characteristic of electronic systems for reaching such a goal. Harvesting wasted energy from human activities and from physical phenomena around the world is one way to deal with the aforementioned problem. Energy harvesters constitute a feasible solution for recovering part of the energy that would otherwise be wasted. The present research work provides the tools for characterizing, designing and implementing such devices in electronic systems through their equivalent structural models.
Introduction
Electronic devices without wires, or systems with wireless capabilities, are increasingly popular because they do not require connection to the mains power grid. Consequently, many devices in industrial and domestic environments are solely powered by batteries, and the connection wires are mainly for battery recharging purposes. Battery manufacturers have significantly improved battery life [1]; however, batteries require periodic maintenance, and thousands of working hours must be added to their expense. Moreover, exhausted batteries produce waste that needs to be recycled [2]. Nowadays, there is increasing interest in green electronics, and in order to be "green" an electronic system must have a contained price, be energy efficient and follow the three R rule, i.e., reduce, reuse and recycle. The price of copper does not help, because it has been increasing for several years [3]. Furthermore, all around the world, copper wire theft has become a widespread problem [4]. Therefore, the growing demand for wireless, green and energy-efficient electronics faces several technical and ecological challenges, like power efficiency, battery charge times, the autonomy of the system, and the lifespan of the batteries themselves, which is a function of the load patterns and the charge and discharge cycles. In this context, energy harvesters have become an efficient and green alternative for gathering energy from the environment and offer an answer to some of the aforementioned challenges [5,6].
Harvesters are electronic devices that inherit the same principles and architecture as electronic transducers. Thus, they are essentially transducers devised to extract, not only a sample of energy from physical phenomena, but the maximum feasible amount of energy [7]. Energy harvesting systems can gather energy from sources available in industrial or other environments, such as mechanical vibration, temperature gradients, natural or artificial light, elevated levels of noise, or pipes carrying air or water. This energy is then managed and stored to be used to feed an electronic device.
It must be considered that the main aim of energy harvesting is not to produce power at large scale but to save the harvested energy in a storage device and use it later in the daily operation of an electronic system. Therefore, the usual operation mode of an energy harvesting system implies harvesting during the peak time slots of energy availability, while the storage devices must meet the demand and supply in specified periods.
In summary, the main objectives of energy harvesting technology include:
• Remove mains supply wires
• Eliminate or reduce dependence on batteries
• Increase the lifetime
• Maintain and/or increase the functionality
• Reduce waste

A generic energy harvesting system has three main elements [8]: a harvester, a low-power management system, and a low-power storage system. The operation process starts by gathering energy from the environment via the harvester device. Then, the power management system converts the voltage level of the harvested energy to that of standard electronics, as efficiently as possible, and powers the electronic system. Eventually, the storage system stores the excess of harvested energy.
The present research work classifies and analyses the different energy harvesters' technologies available, their physical and/or chemical operation mode, the maturity of the state of the art, their efficiency, and the equivalent electric circuit. At the same time, this work provides some application examples.
Photovoltaic Harvester's Technology and Devices
Photovoltaic harvesters [12,[35][36][37][38][39][40][41][42] generate electrical power, converting sunlight or artificial light into electricity using the photovoltaic principle. The solar panel is a modular device composed of n cells in parallel and in series. Thus, the harvested energy is proportional to the surface area of the module and can be scaled to the desired size of power generation. The amount of energy gathered depends on weather conditions and light/dark periods. Besides, efficiency limits photovoltaic harvesters' electric energy generation, and the materials that compose the cell determine this efficiency. Current photovoltaic cells are classified into four categories based on their composition: multi-junction, crystalline silicon, thin-film and emerging. A comparison between different solar cell technologies is shown in Table 2 [17,43] and Figure 2 [44]. A solar cell is an unbiased diode that, whenever exposed to light, creates free electron-hole pairs, as shown in Figure 3. The minority carriers diffuse to the depletion region, where they experience the built-in field which sweeps them to the opposite side of the junction. In an open circuit, the separation of carriers builds up the open-circuit voltage (Voc) across the junction. If a resistance-free path short-circuits the n and p regions, there is a current flow, ISC, that balances the flow of minority carriers across the junction. Thus, harvesting the generated power requires a load connected to the cell.
When a voltage is applied to the electrical contacts of the p-n junction, the cell exhibits a rectifying behavior. The current through the cell is determined by the ideal diode equation, Equation (1):

I = I0 [exp(qV/KT) − 1] − Iph (1)
where K is the Boltzmann constant, T is the absolute temperature, q is the electron charge and V is the voltage at the terminals of the cell. I0 is the current generated by the cell in dark conditions, and the photogenerated current Iph is proportional to the photon flux incident on the cell. The parameter Iph depends on the light wavelength and on the quantum efficiency or spectral response. In ideal conditions, the short-circuit current must be equal to Iph. Thus, Equation (2) gives the open circuit voltage of the cell, Voc:

Voc = (KT/q) ln(1 + Iph/I0) (2)
The photovoltaic cell energy conversion efficiency is defined as the ratio of the maximum electrical power obtained, Pp, to the incident light power, Pi. It is given by Equation (3) and depends on the short-circuit current Isc, on the open circuit voltage Voc and on the fill factor ff, which is a measure of the quality of a solar cell; ff is obtained by dividing the maximum power point by the product of Voc and Isc:

η = Pp/Pi = (Isc Voc ff)/Pi (3)
The equivalent circuit of an ideal solar cell is made of a current source connected in parallel with a rectifying diode, plus series (Rs) and parallel (or shunt, Rp) resistances; see Figure 4. The current source produces IL, which is the current that the cell generates when it is illuminated. The series resistance characterizes the internal resistance of the cell material, and the parallel resistance represents the equivalent resistance between cell connections inside a photovoltaic module. Photovoltaic cells designed for energy harvesting activities are suitable for both outdoor and indoor environments. Indoor light intensity is often much lower than outdoors. The sun generates a power intensity far higher than that produced by artificial light sources such as an incandescent light bulb, fluorescent tube, or halogen lamp. Thus, solar cell spectral properties must be borne in mind to achieve the maximum feasible power, since spectral characteristics determine the operating range of each light type. Consequently, a photovoltaic cell will be more efficient in a given wavelength range, depending on the material of which it is made. Figure 5 shows the spectral operation range of different types of light [45].
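Equations (1)-(3) are straightforward to evaluate numerically; a minimal sketch of the ideal-diode model follows, where the cell parameters are hypothetical and the Rs/Rp of Figure 4 are neglected:

```python
import numpy as np

K = 1.380649e-23     # Boltzmann constant, J/K
Q = 1.602176634e-19  # electron charge, C

def cell_current(v, i_ph, i_0, t_kelvin=300.0):
    """Equation (1): I = I0*(exp(qV/KT) - 1) - Iph (Rs and Rp neglected)."""
    return i_0 * (np.exp(Q * v / (K * t_kelvin)) - 1.0) - i_ph

def open_circuit_voltage(i_ph, i_0, t_kelvin=300.0):
    """Equation (2): Voc = (KT/q) * ln(1 + Iph/I0)."""
    return (K * t_kelvin / Q) * np.log(1.0 + i_ph / i_0)

def efficiency(i_sc, v_oc, ff, p_incident):
    """Equation (3): eta = Isc*Voc*ff / Pi."""
    return i_sc * v_oc * ff / p_incident

# Hypothetical small cell: Iph = 35 mA, I0 = 1 nA, 60 mW of incident light.
v_oc = open_circuit_voltage(35e-3, 1e-9)           # ~0.45 V
print(v_oc, efficiency(35e-3, v_oc, 0.75, 60e-3))  # ~0.45 V, ~20% efficiency
```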
Kinetic Harvester's Technology and Devices
Kinetic devices [35,46] convert mechanical energy into electrical energy through electromechanical transducers. The most common transduction mechanisms are piezoelectric and electromagnetic conversion.
Kinetic energy harvesters have a resonance frequency that usually ranges from tens to hundreds of Hertz. In these conditions, they provide energy that ranges from tens to hundreds of microwatts.
Overall, the moving elements limit the energy obtained mechanically. Kinetic harvesters are also sensitive to driving frequencies, i.e., they provide peak power in a narrow frequency bandwidth of mechanical oscillations around their resonant frequency. Regardless, it is feasible to tune this type of harvester [24]; thus far, this requires an operator to set up the frequency of the device manually. Table 3 compares available vibration harvesters, and the following subsections describe the most common mechanical transducers: piezoelectric, electromagnetic, electrostatic and pyroelectric.
There are several factors that cause vibrations of a rigid body in a dynamic system, such as system mass unbalance and wear and tear of the materials. Each system has a unique behavior that can be described through the damping constant and natural frequency. The study of the dynamic characteristics of a vibrating body associated with energy harvesting is usually done with a single-degree-of-freedom lumped spring-mass system. Thus, the general principles of a resonant inertial vibration harvester can be described through the lumped model. The energy balance of D'Alembert's principle determines the motion equation of the system, which is given by differential Equation (4):

m d²z/dt² + Dv dz/dt + k z = F(t) (4)
where m is the seismic mass, Dv the viscous damping coefficient, k the spring stiffness, F the applied force and z(t) the displacement from the equilibrium position. Since the relative movement between the mass and the inertial frame is the mechanism that provides the energy, the standard steady-state solution for the displacement of the mass is given by Equation (5):

z(t) = Y sin(ωt − φ) (5)
where ω is the excitation frequency, Y sin(ωt − φ) is the steady-state solution for z(t), Y (Y = A/ω²) is the displacement amplitude and φ the phase shift. The system provides its maximum energy when the excitation frequency is equal to the natural frequency of the system, ωn, given by Equation (6):

ωn = √(k/m) (6)
The maximum power generation therefore occurs when the device is driven at its natural frequency, ωn, and hence the peak output power is given by Equation (7):

Pmax = m Y² ωn³ / (4 ζT) (7)
where Pmax is the peak power and ζT the total damping factor. Thus, the effect of the applied frequency, the magnitude of the excitation vibrations and the maximum mass displacement must be considered.
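A short sketch of the lumped model makes the resonance behavior concrete; the off-resonance expression used below is the classic resonant-generator formula whose peak value at ω = ωn reduces to Equation (7), and all numerical parameters are hypothetical:

```python
import numpy as np

def harvester_power(m, k, zeta_t, y_amp, omega):
    """Output power of the lumped spring-mass harvester versus drive frequency.
    At omega = omega_n this reduces to Equation (7): P = m*Y^2*omega_n^3/(4*zeta_t)."""
    omega_n = np.sqrt(k / m)              # natural frequency, Equation (6)
    r = omega / omega_n                   # frequency ratio
    num = m * zeta_t * y_amp**2 * r**3 * omega**3
    den = (1.0 - r**2) ** 2 + (2.0 * zeta_t * r) ** 2
    return num / den

# Hypothetical 1 g proof mass tuned to ~100 Hz, driven at resonance:
m, zeta_t, y_amp = 1e-3, 0.02, 20e-6
k = m * (2.0 * np.pi * 100.0) ** 2
print(harvester_power(m, k, zeta_t, y_amp, np.sqrt(k / m)))  # ~1.2 mW peak
```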
Whenever the input acceleration is high enough, there is an increase of damping that broadens the bandwidth of the system and, hence, makes the generator less sensitive to fluctuations of the excitation frequency. Frequency changes might be caused by temperature gradients over time or other environmental parameters. Also, if the amplitude of oscillations is too large, the device may present nonlinear behavior; therefore, it will be difficult to keep the generator operating at resonance. In order to maximize the power output, both the frequency of the generator and the level of damping should be designed to match the specific application requirements. The power obtained is proportional to the mass, which should be maximized while remaining subject to the given size constraints. The study of the vibration spectra allows identifying the most suitable frequency of operation for the aforementioned design constraints, generator size and maximum displacement.

Piezoelectric Transduction

Figure 6 shows an example of a piezoelectric harvester movement. Piezoelectric material is electrically polarized when it is under mechanical force because it contains dipoles; the degree of polarization is a function of the strain applied. Conversely, the dipoles rotate when an electric field is applied, thus deforming the material. Single crystal materials (quartz), ceramics (lead zirconate titanate, PZT), thin-film materials (sputtered zinc oxide), screen-printable thick films based upon piezoceramic powders and polymer materials (polyvinylidene fluoride, PVDF) present an anisotropic piezoelectric behavior.
Piezoelectric materials are classified in two groups identified by subscript constants, as shown in Table 4 [35]. The number three refers to piezoelectric materials polarized along their thickness (i.e., electrodes attached on the top and the bottom surfaces). Thus, the subscript 33, i.e., d33, denotes that a mechanical strain is applied in the direction of polarization, and the subscript 31, i.e., d31, denotes a perpendicular strain.
Usually, piezoelectric material works in the lateral 31 mode, because the most common assembly consists of a spring bonded to its surface, which transforms the vertical displacements into a lateral strain across the piezoelectric element, although there are applications that employ the compressive 33 mode, whose coefficients are typically higher than the 31 equivalents. However, the compressive strains are typically much lower than the lateral strains provided by a piezoelectric bonded onto a spring or flexing structure. Figure 7 shows the equivalent circuit of a quartz crystal resonator; the values of the components depend on the quartz characteristics. This circuit is applicable when the oscillation in the thickness shear mode is close to the fundamental resonance frequency, where R represents the oscillation damping of the enclosure material, CS represents the capacitance created by the electrodes, the crystal disc and the dielectric in between, CP is the elasticity of the crystal (oscillation energy stored in the crystal) and L is the inertial component of the oscillation.
Electromagnetic Transduction
Electromagnetic harvesters [9,12,35,[54][55][56][57][58][59] produce energy by means of the electromotive force that a varying magnetic flux induces in a conductive coil according to Faraday's law (Figure 8). The magnetic flux (B) source is obtained with a permanent magnet. The motion of a seismic mass attached to either a coil or a magnet produces the variation of magnetic flux necessary to induce a current in the coil. When an electric conductor moves through a magnetic field, an electromotive force (Emf) is induced between the ends of the conductor. The voltage induced in the conductor (V) is proportional to the rate of change of the magnetic flux linkage (Φ) of the circuit, as shown in Equation (8). The generator is a multiturn coil (N), and permanent magnets create the magnetic field:

V = −N dΦ/dt (8)
There are two possible cases:
• Linear vibration
• Time-varying magnetic field, B
In the linear vibration case, there is a relative motion between the coil and the magnet in the x-direction, and the voltage induced in the coil can be expressed as the product of the flux linkage gradient and the speed of movement, Equation (9):

V = (dΦ/dx)(dx/dt) (9)

In the time-varying magnetic field (B) case, the flux density is uniform over the area, A, of the coil, and the induced voltage depends on the angle (α) between the coil area and the direction of the flux density, Equation (10):

V = −N A cos(α) dB/dt (10)
Power is extracted from the generator by connecting the coil to a load resistance, RL. The induced current in the coil generates a magnetic field which opposes the original magnetic field generated by the permanent magnets, according to the Faraday-Lenz law of electromagnetic induction. This electromagnetic induction results in an electromotive force, Fem, that opposes the generator motion and thereby transfers mechanical energy into electrical energy. Thus, Fem is proportional to the current and the speed, and it is defined in Equation (11):

Fem = Dem (dx/dt), with Dem = (dΦ/dx)² / (RL + Rc + jωLc) (11)

where Dem is the electromagnetic damping, RL is the load, Rc the coil resistance, Lc the coil inductance, and dΦ/dx the flux linkage gradient. Therefore, in order to obtain the maximum electrical power output, the generator design must maximize Dem and the speed. Increasing Dem implies maximizing the flux linkage gradient and minimizing the coil impedance. The flux linkage gradient is a function of the strength of the magnets, their relative position with respect to the coil and the direction of movement, and the area and number of turns of the coil. The type of magnetic material determines the magnetic field strength. Usually, permanent magnets are made of ferromagnetic materials. The magnetizing force they provide is H, and the maximum energy product, BHMAX, is their figure of merit, which can be determined from the material's magnetic hysteresis loop. Properties of some common magnetic materials are summarized in Table 5 [9,35], among them the Curie temperature, i.e., the maximum operating temperature that the material can withstand before becoming demagnetized. Table 6 [60] illustrates the characteristics of conductor materials available for electromagnetic generators. Figure 9 represents the equivalent circuit model for a vibration-driven harvester using electromagnetic damping. The components on the primary side model the mechanical parts: the current source represents the energy flux, the capacitor the mass, the inductor the spring, and the resistance the parasitic damping. The electronic components on the secondary side model the self-inductance of the coil in the electromagnetic device.
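At low frequencies, where the coil inductance term jωLc can be neglected, the damping and the power delivered to the load follow directly from these relations; a minimal sketch with hypothetical coil and magnet parameters:

```python
def electromagnetic_damping(flux_gradient, r_load, r_coil):
    """D_em ~ (dPhi/dx)^2 / (R_L + R_c), neglecting the coil inductance term."""
    return flux_gradient**2 / (r_load + r_coil)

def load_power(flux_gradient, r_load, r_coil, velocity):
    """Power dissipated in the load for a given relative velocity."""
    emf = flux_gradient * velocity            # V = (dPhi/dx) * (dx/dt)
    current = emf / (r_load + r_coil)
    return current**2 * r_load

# Hypothetical arrangement: 0.5 Wb/m flux gradient, 100-ohm load, 50-ohm coil:
print(load_power(0.5, 100.0, 50.0, velocity=0.1))  # ~11 microwatts
```

The resulting tens of microwatts are consistent with the power levels quoted earlier for kinetic harvesters.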
Electrostatic Transduction
Electrostatic harvesters [12,35,[61][62][63][64][65][66] consist of a variable capacitor whose plates are electrically isolated from each other by air, vacuum, or an insulator, Figure 10. External mechanical vibrations cause the gap between the plates to vary, changing the capacitance. To harvest energy, the plates must be charged. In these conditions the mechanical vibrations work against the electrostatic forces present in the device. Therefore, if a voltage V biases the capacitor and the load circuitry is linear, the motion of the movable electrode produces electrical power. The fundamental equations that model this operation are Equations (12) and (13).
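These two fundamental relations were likely the parallel-plate capacitance and stored-energy expressions, reconstructed here from the variable list that follows:

$$C = \frac{Q}{V} = \frac{\varepsilon\,A}{d}, \quad \varepsilon = \varepsilon_0\,\varepsilon_r \qquad (12)$$
$$E = \frac{1}{2}\,C\,V^2 = \frac{Q^2}{2C} \qquad (13)$$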
where C is the capacitance in farads, V is the voltage in volts, Q is the charge in coulombs, A is the area of the plates in m², d is the gap between the plates in meters, ε is the permittivity of the material between the plates, ε0 is the permittivity of free space and E is the stored energy in joules. Electrostatic generators can be either voltage or charge constrained. Voltage-constrained devices have a constant voltage applied to the plates, so the charge stored on the plates varies with changes in capacitance. The operating cycle usually starts with the capacitor at its maximum capacitance value (Cmax). The capacitor is then charged from a reservoir up to a specified voltage (Vmax) while the capacitance remains constant. The voltage is held constant while the plates move apart until the capacitance reaches its minimum value (Cmin). The excess charge flows back to the reservoir as the plates separate, and energy is gained. This energy is determined by Equation (14).
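Equation (14) is missing from the extraction; the standard result for the net energy gained per voltage-constrained cycle, offered as a plausible reconstruction, is:

$$E_{14} = \frac{1}{2}\left(C_{max} - C_{min}\right)V_{max}^2 \qquad (14)$$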
Alternatively, a fixed charge can be obtained using electret materials, such as Teflon or Parylene. In either case, the mechanical work done against the electrostatic forces is converted into electrical energy. The net energy gained is determined with Equation (15).
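For the charge-constrained cycle, the net energy gain commonly quoted in the literature, and a plausible form for the missing Equation (15), is:

$$E_{15} = \frac{1}{2}\,V_{max}^2\,C_{min}\,\frac{C_{max} - C_{min}}{C_{max}} \qquad (15)$$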
In both equations, Vmax must be compatible with the electronics and the fabrication technology. The previous two approaches have different strengths and weaknesses.
The electrostatic generator can be categorized into one of three types:
1. Out-of-plane, gap varying, voltage constrained.
2. In-plane, overlap varying, charge constrained.
3. In-plane, gap varying.
Table 7 shows the electrostatic force for these three types. Electrostatic energy harvesters have several advantages over other methods of vibration energy harvesting, such as a high quality factor Q, low noise, a wide tuning range and compact size. However, electrostatic harvesters produce less energy than other kinetic harvesters and their application range is more limited by their operating characteristics. Figure 11 represents the equivalent circuit model for a vibration-driven harvester using electrostatic damping. The circuit on the primary side of the transformer models the mechanical behavior of the harvester: the voltage source represents the vibration source, the capacitor the mass, the inductor the spring, and the resistor the parasitic damping. The electrical elements of the generator are on the secondary side, where the capacitor models the terminal capacitance of the piezoelectric material or the moving capacitor.
Pyroelectric Transduction
A pyroelectric harvester [67][68][69][70][71][72] converts a temperature change into an electric current or voltage. The effect is related to the piezoelectric effect, but here the stimulus is thermal rather than mechanical, and the ferroelectric behavior involved is different. Pyroelectricity requires inputs that vary with time, and the power outputs it provides are small because of its low operating frequencies. Nevertheless, an advantage of pyroelectrics over thermoelectrics is the stability of many pyroelectric materials up to 1200 °C or higher, which allows energy harvesting from high-temperature sources and therefore increases the thermodynamic efficiency.
The polarization P of pyroelectric materials presents a strong temperature dependence because of their crystallographic structure. The spontaneous polarization of these materials is defined as the average electric dipole moment per unit volume in the absence of an applied electric field. By reversing the applied coercive electric field, ferroelectric materials, a subclass of pyroelectrics, can change the magnitude and direction of the spontaneous polarization. However, a direct conversion between two different temperatures Tcold and Thot is not straightforward, because the unipolar hysteresis curves of the electric displacement D versus the electric field E exhibited by ferroelectric materials depend on temperature. The characteristic curves of the electric field applied across the sample are counter-clockwise oriented over the isothermal cycling. Ferroelectric materials become paraelectric when heated above their Curie temperature TCurie, losing their intrinsic polarization. The electric displacement D of the material at temperature T and electric field E is defined as in Equation (16).
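Reconstructed from the variable definitions that follow, Equation (16) most likely reads:

$$D(E, T) = \varepsilon_0\,\varepsilon_r(T)\,E + P_s(T) \qquad (16)$$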
where ε0 is the vacuum permittivity, εr(T) is the relative permittivity of the material, E is the applied electric field, and Ps(T) the saturation polarization. Figure 12 shows the operating principle of a pyroelectric harvester:
a. Pyroelectric free charges (represented as plus and minus circles) are attracted to the material due to its spontaneous polarization (Figure 12a).
b. If a capacitor is formed with two electrode plates and the assembly is kept at constant temperature, the spontaneous polarization remains constant and consequently no current flows through the ammeter (Figure 12b).
c. As the harvester is heated, the dipole moment diminishes and the spontaneous polarization decreases. As a result, the number of bound charges at the electrodes decreases, causing a redistribution of charges that produces a current flow through the external circuit (Figure 12c).
d. As the harvester is cooled, the spontaneous polarization increases again and the sign of the current is reversed (Figure 12d).
Equation (17) gives the pyroelectric current Ip produced during the cycle.
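Equation (17) is missing; the standard pyroelectric current expression, consistent with the variables listed below, is:

$$I_p = p\,A_f\,\frac{dT}{dt} \qquad (17)$$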
where Af is the surface area of the pyroelectric thin film capacitor, PS is the pyroelectric thin film polarization, T is the pyroelectric capacitor temperature, and p is the pyroelectric coefficient. Thus, the net output power Np that the pyroelectric capacitor provides is given by Equation (18).
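Equations (18) and (19) were lost in extraction; plausible reconstructions consistent with the surrounding text (Equation (19) being used in the next paragraph) are:

$$N_p = I_p\,V_{appl} = p\,A_f\,\frac{dT}{dt}\,V_{appl} \qquad (18)$$
$$W_{out} = \oint I_p\,V_{appl}\,dt \qquad (19)$$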
where Vappl is the applied voltage across the pyroelectric capacitor terminals. The integration of Equation (18) over a temperature cycle gives the cumulative pyroelectric conversion output work Wout, represented by Equation (19). Figure 13 shows a pyroelectric harvester with a bimorph cantilevered architecture. The bottom layer of the pyroelectric material has a large coefficient of thermal expansion (CTE), while the lower and thicker metal layer is made of a low-CTE metal such as titanium (Ti). The bimorph metal and pyroelectric P(VDF-TrFE) layers are typically 2-10 µm thick. The uppermost metal layer creates a continuous metal film over the P(VDF-TrFE) dielectric layer; it must be thin enough not to contribute to the bimorph bending of the cantilever (typically 10-50 nm). The cantilever structure of the harvester is heated through a heat generator at the anchor. Starting from a rest condition, when the cantilever temperature increases it bends towards the lower, cold heat-sink surface. Once it touches the cold surface of the heat sink, the structure loses heat and bends back towards the hot upper surface, at a pace defined by the thermal resistance of the heat sink. When it contacts the heat source again, the process repeats indefinitely as long as there is a heat gradient between the heat generator and the sink. Transferring a sufficient amount of thermal energy to the pyroelectric capacitor on the cantilever requires good thermal contact between the hot and cold surfaces, i.e., low thermal resistance. The heat transfer is produced through small striction forces between the surfaces of the sink and the heat generator and the surface faces of the cantilever mass. Once contact is made and temperature is exchanged, these forces counteract the bimorph mechanical force that pulls the cantilever structure away from the surfaces. Hence, the faster the temperature can be cycled back and forth across the device, the more efficient the energy conversion process is and the higher the amount of electrical energy generated. Moreover, the amount of current and electrical energy that this architecture produces is a function of the magnitude of the pyroelectric coefficient p, the size of the capacitor (plate area A), and the rate of temperature change across the pyroelectric capacitor. Additionally, the time that the cantilever remains attached to the heat and sink surfaces can be set by applying an alternating electric potential between adjacent surfaces. The equivalent electric circuit of the harvester is shown in Figure 14. The Carnot efficiency limits thermodynamically the work available. It is given by Equation (20).
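Equation (20) is the Carnot limit, which from the definitions below reads:

$$\eta_{Carnot} = 1 - \frac{T_L}{T_h} \qquad (20)$$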
where Th is the temperature of the heat generator and TL is the temperature of the heat sink. The conversion efficiency, η, is a function of the electrical energy (Wout) and the converted heat (Qin) of the thermal-gradient power generator, Equation (21).
where WE is the electrical energy, WP is the energy lost in the temperature cycle, Cv is the heat capacity of the pyroelectric device, QInt represents the intrinsic heat losses in the thermal cycle, and QLeak the heat leakage between the hot and cold sources and the pyroelectric mass. The energy conversion efficiency of any thermal energy recovery device depends on the temperature difference between the hot and cold sources. Experimental results found in the literature exhibit maximum overall efficiencies in the range of 3-7% for temperature gradients of 10-20 °C. The efficiencies increase up to 20-40% if the temperature gradients are between 100-300 °C.
Thermoelectric Harvester's Technology and Devices
Thermoelectric harvesters [9,12,35,[73][74][75][76][77][78][79] are suited to environments with temperature gradients. This harvester technology exploits the Seebeck effect to convert thermal energy into electrical energy. The temperature gradient between the material terminals provides the potential for energy conversion, while the heat flow provides the power. Even when the heat flow is large, thermoelectric devices do not provide much energy, because material efficiencies are low and the Carnot limit applies. In practice, heavily doped semiconductors constitute the best thermoelectric materials.
Thus, thermoelectric harvesters are solid-state devices without moving parts. This type of device is suitable for energy harvesting application because it is silent, reliable, scalable and easily installed.
The thermoelectric effects appear because charge carriers in metals and semiconductors change their energy levels when energy is applied, producing heat transfer or electric energy in the form of voltage or current. When a thermoelectric material is placed under a temperature gradient, charge carriers diffuse from the hot end to the cold one. This build-up of charge carriers produces a net charge (negative for electrons, e−, and positive for holes, h+) at the cold end, and hence a voltage between the terminals. The electrostatic repulsion due to the accumulated charge and the chemical potential driving diffusion reach an equilibrium. This property is known as the Seebeck effect and is the basis of temperature measurement with thermocouples and of thermoelectric power generation. Figure 15 shows the operation mode of a thermoelectric harvester. If there is a temperature gradient across a thermoelectric material, it becomes a thermoelectric generator able to power an electric load connected to its terminals. The temperature difference and the material's Seebeck coefficient determine the voltage (V = αΔT), while the heat flow drives the electrical current, which therefore determines the power output. The figure of merit of thermoelectric materials (zT) is a function of the Seebeck coefficient (α), the absolute temperature (T), the electrical resistivity (ρ) and the thermal conductivity (κ) of the material, as shown in Equation (22).
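From the quantities just listed, Equation (22) is the standard thermoelectric figure of merit:

$$zT = \frac{\alpha^2\,T}{\rho\,\kappa} \qquad (22)$$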
The voltage that a single thermocouple provides is in the mV range; thus, to obtain a practical output voltage and sufficient power for low temperature gradients, the thermocouples can be arranged electrically in series and thermally in parallel, forming the thermoelectric device, as described by Equations (23)-(26), where ΔT = Th − Tc is the temperature gradient across the thermoelectric harvester. Equation (27) shows the dependency of the generated electric power (P) on the converted heat (Q) and the system efficiency (η).
P = η Q (27)
Therefore, it is not straightforward to obtain the exact efficiency of thermoelectric materials. Equation (28) provides an efficiency value based on the constant properties approximation.
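Equation (28) did not survive extraction; the usual constant-properties expression for the maximum conversion efficiency, a likely candidate, is

$$\eta = \frac{T_h - T_c}{T_h}\cdot\frac{\sqrt{1 + Z\bar{T}} - 1}{\sqrt{1 + Z\bar{T}} + T_c/T_h} \qquad (28)$$

where $\bar{T}$ is the mean of the hot- and cold-side temperatures.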
State-of-the-art thermoelectric devices have an efficiency of approximately 10% due to losses in the electrical junctions, thermal and electrical contact resistances between different materials, and other thermal losses. Figure 16 shows the equivalent circuit model of the thermoelectric energy harvester (TEG). The temperature difference, ΔTTEG, between the junctions of the TEG is lower than the temperature gradient, ΔT = Th − Tc, externally imposed across the thermal energy harvester. This difference is caused by the thermal contact and thermal grease resistances of the cold and hot ends of the harvester, i.e., Rcon(H), Rcon(C) and Rg(H), Rg(C), respectively. The thermal resistance, RTEG, of the TEG is made as high as possible; to minimize this negative effect, the remaining thermal resistances of the energy harvester are designed to be as small as possible.
Based on the aforementioned thermal circuit, the temperature difference across the thermoelectric is obtained with Equation (29).
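Treating the thermal network in Figure 16 as a resistive divider suggests the following form for the missing Equation (29):

$$\Delta T_{TEG} = \Delta T\,\frac{R_{TEG}}{R_{TEG} + R_{con(H)} + R_{g(H)} + R_{con(C)} + R_{g(C)}} \qquad (29)$$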
And Equation (30) provides the heat flow through the circuit.
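Under the same assumption, the heat flow of Equation (30) would be:

$$Q = \frac{\Delta T}{R_{TEG} + R_{con(H)} + R_{g(H)} + R_{con(C)} + R_{g(C)}} \qquad (30)$$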
Combining these two equations with the linear relationship between efficiency and ΔTTEG gives the electric power value. As shown in Equation (31), the higher the number of thermocouples, the larger the temperature difference between the hot and cold junctions, the higher the absolute Seebeck coefficient and the lower the internal resistance, the higher the output power.
Figure 16. Equivalent electro-thermal circuit of a thermoelectric energy harvesting device.
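The scaling stated for Equation (31) can be illustrated with a short Python sketch of a matched-load TEG. All parameter values are hypothetical, and the matched-load formula P = (N·α·ΔT)²/(4·Rin) is the standard textbook result, not necessarily the article's exact Equation (31).

```python
# Matched-load power of a thermoelectric generator (illustrative sketch)
def teg_power(n_couples: int, seebeck: float, dT_teg: float, r_internal: float) -> float:
    """Electrical power (W) at the matched load R_load = r_internal."""
    v_open = n_couples * seebeck * dT_teg   # open-circuit voltage, V = N * alpha * dT
    return v_open**2 / (4.0 * r_internal)   # maximum power transfer condition

# Hypothetical device: 127 couples, 200 uV/K per couple, 5 ohm internal resistance
for dT in (5.0, 10.0, 20.0):
    p = teg_power(127, 200e-6, dT, 5.0)
    print(f"dT_TEG = {dT:4.1f} K -> P = {p*1e3:.3f} mW")
```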
Magnetic Harvester's Technology and Devices
Magnetic harvesters [80][81][82][83][84][85] are made of a ferromagnetic material and an inductor. Faraday's law of induction describes their behavior. When the harvester is placed close to a variable magnetic field, an electromotive force is induced in the inductor; this principle is electromagnetic induction. Electricity is generated whenever a changing magnetic field couples to the harvesting device.
The most common source of magnetic field is a conductor carrying an electrical current. Thus, a standard application wraps the harvester around an electric power line, which is a strong source with high magnetic variability. It is also possible to harvest the stray emissions of power lines, i.e., the otherwise harmful noise they radiate in the form of magnetic fields.
The flow of electric charge has an associated magnetic field, and ferromagnetic materials are ideal for this harvester technology. These materials have the property of attracting a magnet and of transferring electromagnetic energy to electric energy; the usual ferromagnetic materials are iron, nickel, cobalt and alnico, an alloy of aluminum, nickel and cobalt. Ferromagnetic materials are classified by relative permeability, defined as the ratio of the permeability of a material to that of vacuum for the same magnetic field strength. The relative permeability factor increases the magnetic field across the material. Table 8 [27] shows the relative permeability of different ferromagnetic materials. Another consideration in the implementation of a magnetic energy harvester is the properties of the coil. The series resistance and the number of coil turns are the parameters that determine the voltage and the available power generated by the harvester. Moreover, the number of turns influences the coil geometry, the wire diameter and the coil wrap density. Insulated circular wire does not completely fill the volume of the coil with conductive material; the percentage of copper inside a coil defines the fill factor. The area of the wire, Awire, can be related to the total cross-sectional area of the coil, Acoil, for a given copper fill factor, N = Acoil/Awire. The copper fill factor depends on the tightness and shape of the winding and on variations in the thickness of the insulation. The harvester described above is shown in Figure 17. The voltage produced in the harvester, the value of the coil inductance and the output power can be obtained with Faraday's equations. Thus, the open-circuit voltage of an N-turn coil placed on a cylindrical core of length l, diameter D, and relative permeability μr is given by Equation (32).
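Equation (32) did not survive extraction; for a sinusoidal flux density of amplitude B and frequency f, a plausible reconstruction consistent with the variables below is:

$$V_{oc} = 2\pi f\,N\,\mu_{eff}\,B\,\frac{\pi D^2}{4} \qquad (32)$$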
where B is the magnetic flux density (in air) parallel to the coil axis and f is its frequency. μeff is the effective relative permeability, which lies in the range 1 < μeff < μr. The coil self-inductance is obtained with Equation (33).
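A plausible form of the missing Equation (33), the self-inductance of a solenoid with an effective-permeability core, is:

$$L = \mu_0\,\mu_{eff}\,N^2\,\frac{\pi D^2}{4\,l} \qquad (33)$$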
The magnetic harvester's maximum output power is obtained when the load impedance is matched. Under matched conditions, if the coil open-circuit voltage is Voc, the output voltage Vout is half of Voc, Equation (34).
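As the text states that the matched output voltage is half the open-circuit value, Equation (34) is presumably:

$$V_{out} = \frac{V_{oc}}{2} \qquad (34)$$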
Finally, the output power per unit volume, in W/m³, is represented as in Equation (35).
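Equation (35) is also missing; one consistent reconstruction, offered only as an assumption, divides the matched-load power by the coil volume, with the coil resistance estimated from the wire length (roughly NπD per layer):

$$\frac{P}{Vol} \approx \frac{V_{oc}^2}{4\,R_c\,Vol}, \qquad R_c \approx \rho\,N\,\pi D \qquad (35)$$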
where ρ is the wire resistance per unit length. The electric circuit representation of the previous equations is shown in Figure 18. Whenever a current flows through the core conductor, an electric field proportional to the current is generated. Then, by the principle of induction, a current proportional to the electrical field, scaled by the transformer ratio, T, is induced in the secondary coil. A resistance modelling the ferromagnetic material could be added to the secondary coil to increase the accuracy of the model. The resistance Rin is the internal resistance of the inductor. The transformer ratio is a function of the distance between the core and the conductor line. If the magnitude of the primary current is reduced or the distance of the inductor from the conductor increases, the maximum energy provided by the harvester decreases exponentially; converting magnetic field energy into electrical energy at a distance is therefore inefficient. Table 9 [86] compares the harvested power for different coils [81]. The magnetic field harvester could be used for monitoring electric lines, because current-carrying conductors are an ideal source of energy to be harvested.
RF Harvester's Technology and Devices
Radio frequency waves include frequencies from 3 kHz to 300 GHz. The harvested power depends on the incident power density, the distance between the transmitter and the receiver, the power conversion efficiency and the harvester antenna size. The intercepted power is directly proportional to the size of the antenna aperture.
A coil and a separator compose an RF harvester, Figure 19. The coil is made of conductive materials (Table 6) and the separator is made of nonconductive materials, to avoid short-circuit situations and maintain the integrity of the coil. The electric field strength (V/m) quantifies the incoming RF radiation. In the far-field region, the electric field strength is converted into incident power density with Equation (36).
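From the definitions that follow, Equation (36) is the standard plane-wave relation:

$$S = \frac{E^2}{Z_0}, \qquad Z_0 \approx 377\ \Omega \qquad (36)$$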
where S is the incident power density (W/m²) and Z0 is the free-space characteristic impedance. The incident power density depends not only on the distance between the source and the receiver, but also on the direction. For a system with two antennas, the received power PR as a function of the transmitted power PT is given by Equation (37).
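Equation (37) is evidently the Friis transmission formula:

$$P_R = P_T\,G_T\,G_R\left(\frac{\lambda}{4\pi r}\right)^2 \qquad (37)$$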
where GT is the gain of the transmitting antenna, GR the gain of the receiving antenna, λ the signal wavelength, and r the distance between the two antennas. The far-field distance, rff, is related to the antenna's physical dimensions and the signal wavelength, and is calculated with Equation (38); Equation (37) is only valid for antennas operating in the far-field region.
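Equation (38) is most likely the usual far-field boundary, where D is the largest physical dimension of the antenna:

$$r_{ff} = \frac{2\,D^2}{\lambda} \qquad (38)$$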
An RF harvesting device is usually made of a harvester circuit and an integrated rectifier. An example rectifier circuit consists of a single Schottky diode, which can switch dynamically in the GHz range. Figure 20 shows the equivalent circuit. Table 10 [87] shows the amount of energy collected in different RF energy harvesting experiments, illustrating how the harvesting rate depends on the source power and distance. The energy harvested is in the range of µW, an amount of energy only suitable for ultra-low-power devices.
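A small Python sketch of the Friis link budget illustrates why RF harvesting yields only microwatts at a distance; the frequency, gains and transmit power used here are hypothetical.

```python
import math

def friis_received_power(p_tx: float, g_tx: float, g_rx: float,
                         freq_hz: float, dist_m: float) -> float:
    """Received power (W) from the Friis transmission formula."""
    lam = 3e8 / freq_hz
    return p_tx * g_tx * g_rx * (lam / (4 * math.pi * dist_m))**2

# Hypothetical 915 MHz source: 1 W transmitter, dipole-like gains of 1.6
for d in (1.0, 5.0, 10.0):
    p_rx = friis_received_power(1.0, 1.6, 1.6, 915e6, d)
    print(f"{d:4.1f} m -> {p_rx*1e6:8.2f} uW before rectifier losses")
```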
Both fluid-flow harvester technologies described in the following sections could share the same equivalent electromechanical circuit model (Figure 21), where Tdrive is the fluid flow power; M the mass of the rotating parts; Cm the coefficient of frictional torque; G the electromechanical conversion coefficient; ω the mechanical angular velocity; Vg the voltage generated at the coils; Lin the generator coil inductance and Rin the coil internal resistance.
Wind Harvester's Technology and Devices
The wind flow harvester has two parts, one mechanical and one electrical. The system converts wind energy into mechanical energy, then into electromagnetic energy and finally into electrical energy. Electromagnetic wind generators are reliable, have small mechanical damping and use magnets suitable for operation at low wind velocities, Figure 22.
When the airflow passes through the system structure, the airflow force pushes the blades, causing them to rotate around a pivoting axis. The blades have magnets attached (rotor), and their movement generates a variable magnetic flux. Consequently, the magnetic field created is harvested as a current induced in the coils of the generator (stator). As the miniature horizontal-axis wind turbine (MHAWT) spins with the wind, so does the rotor, which captures and transforms the kinetic energy of the incoming wind into mechanical energy [102]. By the aerodynamic equation of Ibrahim [103], the available kinetic power of the airflow is given by Equation (39).
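Reconstructed from the variables listed next, Equation (39) is the kinetic power in the airflow:

$$P = \frac{1}{2}\,\rho\,A\,v^3 \qquad (39)$$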
where ρ is the air density, A the area swept by the rotor of the wind turbine, and v the wind speed. However, the conversion of wind power into rotational power at the rotor of the turbine is a complex aerodynamic phenomenon [104][105][106]. Ideally, the power obtained from the ambient wind (Paero) is expressed as in Equation (40).
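Equation (40) then scales this by the power coefficient:

$$P_{aero} = \frac{1}{2}\,C_p(\lambda,\theta)\,\rho\,A\,v^3 \qquad (40)$$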
where Cp is the aerodynamic efficiency of the rotor, or power coefficient, which has a nonlinear dependence on the pitch angle θ of the turbine blades and the tip speed ratio λ. The tip speed ratio is expressed in Equation (41).
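Equation (41), the tip speed ratio, reads:

$$\lambda = \frac{\omega\,r}{v} \qquad (41)$$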
where ω is the angular velocity and r the radius of the rotor. In addition, approximation equations for the power coefficient Cp(λ,θ) of small wind turbines have been proposed in [107][108][109], resulting in Equations (42) and (43).
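Equations (42) and (43) did not survive extraction. A widely used parameterization of this kind (a Heier-type approximation), offered here only as a representative example, is shown below; the exact arrangement of the coefficients c1-c9 in the original may differ.

$$C_p(\lambda,\theta) = c_1\left(\frac{c_2}{\lambda_i} - c_3\theta - c_4\right)e^{-c_5/\lambda_i} + c_6\lambda \qquad (42)$$
$$\frac{1}{\lambda_i} = \frac{1}{\lambda + c_7\theta} - \frac{c_8}{\theta^3 + 1} \qquad (43)$$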
where c1-c9 are power coefficients. In addition, the aerodynamic profiles of the turbine blades have a significant influence on the efficiency of the spinning. The blades directly determine the system torque, which influences the output power level. The number of blades also conditions the performance of the energy harvester. Maximum system efficiency and output power also depend on the impedance matching between the load, the torque, and the wind flow. Finally, Table 11 shows the results achieved by several wind flow harvesters.
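The dependence of output power on swept area, wind speed and Cp can be illustrated with a short Python sketch; the constant Cp value below is a hypothetical placeholder for the nonlinear Cp(λ,θ) surface described above.

```python
import math

def wind_power(rotor_diameter_m: float, wind_speed_ms: float,
               cp: float = 0.30, air_density: float = 1.225) -> float:
    """Aerodynamic power captured by the rotor (W): P = 0.5 * Cp * rho * A * v^3."""
    area = math.pi * (rotor_diameter_m / 2.0)**2
    return 0.5 * cp * air_density * area * wind_speed_ms**3

# Hypothetical miniature turbine with a 10 cm rotor
for v in (2.0, 4.0, 6.0):
    print(f"v = {v} m/s -> {wind_power(0.10, v)*1e3:7.2f} mW")
```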
Water Flow Harvester's Technology and Devices
The main difference between water flow and wind flow harvesters is the energy source. The operating physics change from aerodynamics to hydraulics, because flowing water carries kinetic energy driven by water pressure fluctuations. This kind of harvester converts hydraulic kinetic energy into electrical energy by mechanical and electromagnetic conversions.
Water flow environments offer a high potential for energy harvesting. However, small-scale water harvesters are not common because of their mechanical complexity. Figure 23a,b shows two architectures of water flow harvesters. Both are commonly used to produce electric energy at waterfalls and dams, i.e., at large scale, but they are harder to implement at a smaller scale due to mechanical constraints. Energy generation in large-scale water flow systems usually uses either Pelton or propeller turbines. Pelton turbines are more efficient for high heads and low water flows, and propeller turbines for low heads. The available power is proportional to the product of the flow rate and the head. Equation (44) shows the available power of a hydraulic system.
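From the variables listed below, Equation (44) is the classical hydraulic power relation:

$$P = \rho\,g\,Q\,H \qquad (44)$$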
where ρ is the water density, g is the gravitational acceleration, Q is the flow rate, and H is the effective height. The theoretical velocity is given by Equation (45).
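Equation (45) is presumably Torricelli's result for the theoretical jet velocity:

$$v_1 = \sqrt{2\,g\,H} \qquad (45)$$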
The flow rate is given by Equation (46).
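Reconstructed from the variable list that follows, Equation (46) likely reads:

$$Q = A\,v_1, \qquad A = \frac{\pi\,d_s^2}{4} \qquad (46)$$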
where A is the cross-sectional area, ds is the diameter and v1 is the maximum peripheral speed. Reference [30] describes a water flow harvester aimed at applications of reduced dimensions. The proposed rotor is a permanent magnetic ring with a diameter of 20 mm and a height of 5 mm, and the stator has a built-in three-phase generator of nine coils with a maximum diameter of 5 mm. The harvester is then compared with other types of turbines at different water flow velocities. Table 12 shows the results achieved. One application of water flow energy harvesters is monitoring the water quality and hydraulic state of water distribution systems or rivers.
Acoustic Noise Harvester's Technology and Devices
Energy harvesting from sound is a relatively new technology [31,[111][112][113][114][115][116][117][118]. Acoustic noise is one of the most common pollution sources in cities, and it can therefore be used as a source of energy to power low-consumption electric devices. Table 13 [111] shows some sources of acoustic noise and their intensity. Whereas the propagation of electromagnetic waves is governed by the permittivity of air and materials, acoustic waves depend on the mass density ρ and the bulk modulus of the medium, B. The refractive index of the medium, n, for acoustic waves is given by Equation (47):

n = v0 √(ρ/B) (47)

where v0 is the speed of sound in air (the speed of sound in the medium being v = √(B/ρ)). The effective modulus of the system, Beff, is calculated with Equation (48).
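Equation (48) did not survive extraction. In the acoustic-metamaterial literature, the effective modulus of a Helmholtz-resonator array is often written in the form below, offered here only as a representative expression consistent with the variables that follow:

$$\frac{1}{B_{eff}} = \frac{1}{B_0}\left(1 - \frac{F\,\omega_0^2}{\omega^2 - \omega_0^2}\right) \qquad (48)$$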
where B0 is the bulk modulus, ω the sound frequency, F the structural factor and ω0 the resonance frequency. The system frequency of the acoustic harvester depends on the structural factor, the resonance frequency, neck area, neck length, and the volume of the cell or acoustic tube. The resonance frequency of the system can be obtained with Equation (49).
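Equation (49) is presumably the Helmholtz resonance frequency, a standard form offered as a plausible reconstruction, with S the neck area, L the (effective) neck length and V the cavity volume:

$$\omega_0 = v_0\,\sqrt{\frac{S}{V\,L}} \qquad (49)$$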
When the acoustic signal arrives and the pressure varies, the membrane diaphragm moves in response to the changing force in the air. When the membrane starts moving, the magnet moves with it, creating a variable electromagnetic field. The system inductors convert the electromagnetic field into electric energy, and a small voltage is produced across the load terminals. Figure 24 shows the architecture of an acoustic harvester, a Helmholtz resonator. In this harvester, the acoustic signal enters through the insertion orifice into the system cavity. The power generator is in the cavity and is made of a hard-core magnet inserted into a thin elastic membrane. The generator inductor is placed under the elastic membrane. Acoustic signals are usually of low power; the harvester construction must therefore ensure adequate movement even for low-pressure variations. Another consideration is that the membrane does not vibrate evenly when the noise level changes during system operation. This modifies the oscillation frequency of the membrane and decreases the system's efficiency. The equivalent circuit of the acoustic harvester is shown in Figure 25 [114]. The equivalent circuit has two parts, acoustic and electric. The transfer of energy between the acoustic and electric domains is modelled with a transformer with a transduction ratio of Φ. On the primary side of the transformer there are a mass MaN, a damper RaN, an acoustic cavity Cac, a lumped mass MaD, a compliance CaD and a radiation resistance; their values represent the mechanical parameters of the harvester. On the secondary side there are the electrical blocked capacitance CeB, the dielectric loss resistance ZeL and the load.
Conclusions
The research work presented in this article provides structural electrical models for different state of the art energy harvesting technologies. The aforementioned models allow conversion of the physical parameters of the harvesters into electronic components, thus helping the specification, scaling and design of power supply sources for electronic systems to provide the energy they require, to improve their efficiency, and at the same time, to reduce their environmental footprint.
Complete harvester modeling requires the addition of a storage system (batteries or capacitors), a power supply converter architecture (AC-DC or DC-DC) and a control system that manages and correlates the energy availability statistics over time with the electronic load consumption patterns of the target application [119]. Those systems must be adapted to the energy and voltage waveform that each specific harvester technology provides, and therefore require ad hoc research for each block that constitutes a complete energy harvester.
Funding: This research received no external funding.
Subacute transverse myelitis with Lyme profile dissociation.
INTRODUCTION
Transverse myelitis is a very rare neurologic syndrome with an incidence of 1-5 per million population per year. We present an interesting case of subacute transverse myelitis with its MRI (magnetic resonance imaging) and CSF (cerebrospinal fluid) findings.
CASE
A 46-year-old African-American woman presented with decreased sensation in the lower extremities, which had started three weeks earlier with a 36-hour episode of sore throat. She reported numbness up to the level just below the breasts. Lyme disease antibodies, total IgG (immunoglobulin G) and IgM (immunoglobulin M), in the blood were positive. The antinuclear antibody profile was within normal limits. MRI of the cervical spine showed swelling in the lower cervical cord with contrast enhancement. Cerebrospinal fluid was clear, with negative Borrelia burgdorferi IgG and IgM. Herpes simplex, mycoplasma, coxiella, anaplasma, cryptococcus and hepatitis B were all negative. No oligoclonal bands were detected. Quick improvement ensued after she was given IV ceftriaxone for 7 days. The patient was discharged on the 8th day in stable condition. She continued on doxycycline for 21 days.
CONCLUSIONS
Transverse myelitis should be included in the differential diagnosis of any patient presenting with acute or subacute myelopathy in association with localized contrast enhancement in the spinal cord especially if flu-like prodromal symptoms were reported. Lyme disease serology is indicated in patients with neurological symptoms keeping in mind that dissociation in Lyme antibody titers between the blood and the CSF is possible.
Introduction
Transverse myelitis is a very rare neurologic syndrome caused by inflammation of the spinal cord and occurs in both adults and children. Incidence per year varies from 1-5 per million population [1], [2]. Jeffery et al. [3] collected 33 cases in the Albuquerque area, NM, with a population base of 500,000 from 1960 through 1990 yielding an incidence of 4.6 per million per year. Berman et al. [4] gathered data on all Jewish patients with transverse myelitis throughout Israel for the period 1955 through 1975. Based on 62 patients who satisfied rigid diagnostic criteria, the average annual incidence rate was 1.34 per million population. No significant difference in incidence was noted between European/American-born and Afro/Asian-born populations. It is estimated that about 1400 new cases of transverse myelitis are diagnosed each year in the United States [5]. In the literature, a few cases of subacute transverse myelitis (SaTM) with their spinal MRI and cerebrospinal fluid (CSF) findings were reported [6]. We are presenting an interesting case of SaTM with its MRI and unusual CSF findings.
Case presentation
A 46-year-old African-American woman presented with decreased sensation in the lower extremities. This started three weeks earlier, when she had a 36-hour bout of sore throat, cough and upper chest discomfort. At around the same time, she began to note a feeling of numbness of the right foot. Over the ensuing few days, this began to spread, involving both legs up to the level of the knees. She reported hypersensitivity to touch in the lower extremities and described the feeling as if she were walking on calluses. The sensation kept spreading up to the level of the buttocks and gradually up to the level just below the breasts bilaterally. Around that time, she began to have severe pain in the back involving the mid-thoracic region and interscapular area as well as the left scapular region. She was able to ambulate independently and was not aware of any weakness of the lower extremities. She had no upper extremity numbness, tingling or weakness. She noted decreased urination over the last two days, and when catheterized she was noted to have a post-void residual. She had not had a bowel movement for approximately 4 days. No prior similar episodes were reported. She used to be in excellent health and exercised regularly. She did not use alcohol, cigarettes or illicit drugs and had no history of chronic medical problems. She had had a hysterectomy and bilateral salpingo-oophorectomy 25 years earlier for fibroids. Her only medication was Premarin, but she had been using some Tylenol No.3 and/or Darvocet sparingly as needed for pain. Family history was negative for neurologic disease. She had had no tick bites, rash or fever. She denied headache, nausea or vomiting. Review of systems was otherwise negative. Muscle strength was intact in the upper and lower extremities. Muscle tone was slightly increased in the lower limbs. There was no clonus. Deep tendon reflexes were trace in the upper extremities, 2+ at the knees, 2+ at the ankles. Toe signs were silent. There was no spine tenderness. Sensory examination revealed a sensory level to pin anteriorly at approximately T5 or T6. A level could not be determined over the back. Vibratory sense was absent at the ankles and reduced at the knees, but present over the upper extremities and sternum. Sensation was normal in the upper extremities. White count 13,700/mm³, hemoglobin 13 g%, hematocrit 37%, platelets 289,000/mm³; VDRL (venereal disease research laboratory test), RPR (rapid plasma reagin test) and HIV (human immunodeficiency virus) were negative. ESR (erythrocyte sedimentation rate) was 28 mm/hr but CRP (C-reactive protein) was normal. TSH (thyroid stimulating hormone) was normal. Lyme disease total IgG and IgM in the blood was positive (>1.1 index). The antinuclear antibody profile was within normal limits. MRI of the cervical and thoracic spine showed a diffuse, intramedullary abnormal signal extending from the medulla down to approximately the mid-thoracic level. There was swelling in portions of the spinal cord, particularly in the lower cervical cord, where there was contrast enhancement (Figure 1, Figure 2, Attachment 1). The MRI of the brain was normal.
Discussion
The term transverse myelitis refers to nonspecific inflammation across one level of the spinal cord. Approximately one third of patients with transverse myelitis report a febrile illness (flu-like with fever) in close temporal relationship to the onset of neurologic symptoms. Transverse myelitis symptoms develop rapidly over several hours to several weeks. Approximately 45% of patients worsen maximally within 24 hours. Inflammation within the spinal cord interrupts neuronal pathways and causes the common presenting symptoms of transverse myelitis which include limb weakness, sensory disturbance, bowel and bladder dysfunction, and back/radicular pain. Almost all patients develop leg weakness of varying degrees of severity. Sensation is diminished below the level of spinal cord involvement in the majority of patients. Some experience tingling or numbness in the legs. Pain and temperature sensation are diminished in the majority of patients. Vibration sensation and joint position sense may be decreased or spared. Bladder and bowel sphincter control are disturbed in the majority of patients. Patients sometimes report a tight banding or girdle-like sensation around the trunk that may be very sensitive to touch. Transverse myelitis may occur in isolation (idiopathic) or in the setting of another illness. It may happen as a parainfectious, paraneoplastic or postvaccinal syndrome or as a complication of systemic autoimmune disease, multiple sclerosis or vasculopathy. Ropper AH and Poskanzer DC [7] followed 52 patients with acute and subacute transverse myelopathy at the Massachusetts General Hospital between 1955 and 1975. Nineteen had symptoms of a recent acute infectious illness, three had cancer, and one had undergone a recent operation. Jeffery et al. [3] had 45% of their 33 cases mentioned above categorized as parainfectious, 21% as associated with multiple sclerosis, 12% as associated with spinal cord ischemia, and 21% as idiopathic. Parainfectious transverse myelitis may be distinguishable from that associated with multiple sclerosis on the basis of presentation, findings on imaging, and the presence of cerebrospinal fluid oligoclonal bands. Patients with parainfectious transverse myelitis show evidence of spinal cord swelling, whereas patients with multiple sclerosis-associated transverse myelitis have spinal cord plaques on MRI but not swelling [3]. Oligoclonal bands are absent in patients with parainfectious transverse myelitis (as in our case) and present in the majority of patients with multiple sclerosis-associated transverse myelitis [3]. Acute myelitis accounts for 4 to 5 percent of all cases of neuroborreliosis [6]. In the study of Blanc et al. [6], Lyme serology was positive in CSF for all three reported cases. In our patient's case, Lyme disease total IgG and IgM in the blood was positive (>1.1 index) however, Borrelia Burgdorferi IgG and IgM in the CSF were negative (<0.80 index). Lyme serology in CSF is indicated for any patients presenting with myelitis, particularly in endemic areas [6]. Transverse myelitis is generally a one-time occurrence and recuperation generally begins within 1 to 3 months with most patients showing good to fair recovery [4]. In the Ropper and Poskanzer study [7], an acute catastrophic onset was generally associated with back pain and led to a poor outcome in seven and a good outcome in only one of eleven patients. 
A subacute progressive onset over several days to four weeks (as in our case), generally with ascending paresthesias or leg weakness, was associated with a good outcome in 15 and fair outcome in 17 of 37 patients. The outcome seems to be correlated with the degree of cord enlargement, persistence of increased signal intensity and limited recovery. Atrophy and remaining high signal intensity are noted on late MRI in patients with poor outcome [8].
Although our patient had a negative Lyme titer in the CSF, the blood titer was positive and she responded quickly to ceftriaxone, which was most consistent with transverse myelitis caused by Borrelia burgdorferi. In addition, the flu-like prodromal syndrome, the elevated WBC (white blood cell count) and the detection of lymphocytes in the CSF pointed to an infection-related process. IV ceftriaxone and oral doxycycline were both found to be effective, safe, and convenient for the treatment of Lyme neuroborreliosis [9].
Strength predictions of GGBS based cement mortar with different M-Sands using Neural Networks
This study presents compressive strength predictions for Ground Granulated Blast Furnace Slag (GGBS) based cement mortars containing two different types of M-sand (normal M-sand and white M-sand), in which GGBS partially replaces ordinary Portland cement. The defined mortar cube mixes are examined for compressive strength at 7, 14 and 28 days. An Artificial Neural Network is a useful tool for predicting strength from measured data, which greatly simplifies the work. The compressive strength results of the GGBS based cement mortars with the two types of M-sand at different ages were fed into the ANN toolbox in MATLAB to obtain strength predictions. Experimental results indicated that the GGBS based mortar with white M-sand showed superior compressive strength compared with normal M-sand. The compressive strengths predicted by the ANN framework were in good agreement with the experimental results.
Introduction
Different types of mortars have been employed in construction practice depending on the type of work involved. Mortars are generally used to bind bricks, including weathered ones; once the mortar becomes rigid, it behaves as a sacrificial element of the structure [1]. Mortar used between building blocks is not very expensive. The most common binder since the 20th century is lime; lime mortar and gypsum, in the form of plaster of Paris, have been used to repair old monumental structures. Mortar cement consists of a mixture of Portland cement, limestone and hydrated lime, which increases the setting time, water retention and durability. When mortar cement is mixed with different types of sand and water, no additional hydrated lime should be included, as this is not recommended for unit masonry construction [2].
Mortar materials are batched to achieve the required strength; proper curing and testing should therefore be carried out so that specimens can develop adequate strength and durability. The quality of workmanship and the ambient conditions play a significant role in this process. A mortar with proper workability, i.e., one that is suitably soft, will spread over the surface easily without smearing or dropping. Expansion of mortars due to unsound ingredients will cause disintegration in masonry [3,4]. Fineness, setting time, air content and water retention play a significant role in mortar performance and workability [5,6]. The mason needs to match the workability of the mortar to the application in order to achieve watertight construction. Machine batching is not mandatory here; the mix is adjusted by gentle addition of water.
Cement Properties
Several properties should be checked to give a detailed cement report for use in the site investigation. The important criteria to be followed while preparing and handling different types of mortars are listed below.
Consistency - It is defined as the quantity of water required to make a cement paste of standard consistency. The initial and final setting times involved in this process are mainly influenced by tricalcium silicate and tricalcium aluminate.
Soundness - It is defined as the ability of the hardened cement paste to retain its original volume; the cement paste must not undergo any change in volume after it has set. It is also described as the volume stability of the cement paste.
Strength - There are three strength characteristics, namely compressive, tensile and flexural strength. These properties are greatly affected by several factors such as the water/cement ratio, the size and shape of the specimen, and the loading conditions.
Sand Properties
Given the great demand for river sand, focus has shifted to M-sand, which is crushed from various rocks and differs from river sand in its mineralogical and physical properties.
1. The durability characteristics of M-sand make it resistant to harsh climatic conditions, thereby preventing the formation of honeycombing and the corrosion of steel [4]. 2. It has a smoother surface texture with fewer elongated flaky particles, which improves durability and gives a longer life span. 3. It is 30% cheaper than normal river sand, and its use reduces environmental problems such as groundwater depletion and water scarcity, which are threats to the human race [1]. 4. Its proper cubical and granulated particle shape gives a very smooth texture, with particle sizes down to 150 microns, which yields good flexibility and workability [7].
Artificial Neural Network (ANN)
The artificial neural network is a numerical model that imitates the functioning of the human brain [8]. It can adapt to new and changing environments, analyze unclear or fuzzy information, and make its own judgments [9,10]. The brain is a complex organ that controls the whole body; even the brain of a primitive animal has capabilities beyond the most advanced computer. It controls all the physical parts of the body as well as learning, thinking, dreaming and visualizing, and it can think beyond the limits bounded by our computer technology.
Several kinds of problems can be addressed with an ANN using either a supervised or an unsupervised approach. Handling the data requires several tactics involving the inputs and outputs [11,12]. The network is composed of several layers with bias nodes; the data are processed through hidden layers and undergo repeated iterations until accurate results are obtained [13,14,15]. A one-layer network may have four neurons, each with its own weights, plus a bias input that is always equal to one; the weights leading from the bias towards the nodes, however, change during training. A two-layer network may have six neurons, two in the first layer and four in the second, or output, layer. The input signals are normalized between 0 and 1. For consistency, the input is regarded as a non-active layer whose neurons feed the first layer of active neurons. Configurations with three or four input variables are also used. A minimal sketch of this idea is given below.
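The study itself used the ANN toolbox in MATLAB; the sketch below reproduces the idea in Python with scikit-learn, purely as an illustration. The feature and target arrays are hypothetical placeholders standing in for the mix proportions and the measured 7/14/28-day strengths.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical training data: [GGBS %, sand type (0 = normal, 1 = white), age in days]
X = np.array([[0, 0, 7], [10, 0, 7], [30, 1, 14], [50, 1, 28],
              [20, 0, 28], [40, 1, 7], [30, 0, 14], [50, 0, 28]])
y = np.array([22.1, 23.0, 27.5, 33.2, 30.1, 24.8, 26.0, 31.9])  # MPa, illustrative

# Scale the inputs, then train a small feedforward network with one hidden layer
scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0)
model.fit(scaler.transform(X), y)

# Predict the 28-day strength of a 30% GGBS mix with white M-sand
query = scaler.transform([[30, 1, 28]])
print(f"predicted strength: {model.predict(query)[0]:.1f} MPa")
```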
Mix Proportions
In this study, a total of six mix proportions was arrived at, in which the GGBS content is gradually increased from 0% to 50% as a replacement of cement, while a constant water content ratio of 0.45 is maintained. The binder-to-sand proportion of 1:3 was selected by trial and error. The mix ratios adopted in this study are shown in Table 1.
Table 1. Mix ratios adopted in the study.
Compressive Strength
The compressive strength of the GGBS based mortars with the different types of M-sand was evaluated using a compression testing machine. Figures 1 and 2 show the specimens being tested. Figures 8 and 9 present the ANN prediction graphs, consisting of the training, validation and test data plotted as regression plots. As the agreement between the predicted and the collected actual data is 99%, the white M-sand shows only a slight variation between the measured compressive strength and its predicted values.
Conclusion
Based on the experimental and predictive results, the following conclusions were drawn: From the test results it was observed that mix 1 yields 4% higher compressive strength compared with mix 6, and the strength then increased gradually at later ages.
It was observed from the graph that the strength of mix 3 increases gradually and shows 5% higher compressive strength than mix 1, which has no GGBS content.
From the experimental results it can be concluded that white M-sand showed a 3-4% increase in compressive strength compared with normal M-sand.
Figures 6 and 7 depict the ANN prediction results of compressive strength for the two types of M-sand, showing 99% accuracy.
Peripheral N-methyl-d-aspartate receptor activation contributes to monosodium glutamate-induced headache but not nausea behaviours in rats
Monosodium glutamate induces behaviours thought to reflect headache and nausea in rats. We explored the effects of the N-methyl-d-aspartate receptor antagonist (2R)-amino-5-phosphonovaleric acid, the ionotropic glutamate receptor antagonist kynurenic acid, and the CGRP receptor antagonist olcegepant on monosodium glutamate-induced increases in nocifensive, headache-like and nausea behaviours. The effects of these antagonists on motor function were examined with a rotarod. The effect of the dopamine receptor antagonist metoclopramide and the serotonin 3 receptor antagonist ondansetron on nausea behaviour was also assessed. (2R)-amino-5-phosphonovaleric acid, and to a lesser extent, kynurenic acid and olcegepant, reduced nocifensive and headache-like behaviours evoked by monosodium glutamate. No alteration in motor function by (2R)-amino-5-phosphonovaleric acid, kynurenic acid or olcegepant was observed. No sex-related differences in the effectiveness of these agents were identified. Nausea behaviour was significantly more pronounced in male than in female rats. Olcegepant, ondansetron and metoclopramide ameliorated this nausea behaviour in male rats. Ondansetron and metoclopramide also reduced headache-like behaviour in male rats. These findings suggest that peripheral N-methyl-d-aspartate receptor activation underlies monosodium glutamate-induced headache-like behaviour but does not mediate the nausea behaviour in rats.
Peripheral glutamate signalling has been proposed to play a role in the initiation of migraine headaches 1 . Consumption of monosodium glutamate (MSG: 150 mg/kg) results in reports of headache, craniofacial sensitivity and nausea in healthy humans 2 . Blood levels of glutamate are also elevated during and after a migraine headache 3,4 . Furthermore, genome wide association studies have shown significant polymorphisms in glutamate receptor and transporter genes that are associated with migraine 5 . Functional studies in rats have delineated that elevated blood levels of glutamate dilate dural blood vessels and increase the response of trigeminovascular neurons to mechanical stimulation of the dura 6 . Altogether, these findings suggest that a disruption in peripheral glutamate homeostasis may contribute to migraine headaches and some of their associated symptoms 1 .
We have developed an in vivo rat model, which uses systemic administration of MSG to induce headache-like and nausea behaviours 7 . This model produces dose- and time-dependent increases in nocifensive and headache-like behaviours in a sexually dimorphic fashion. Specifically, intraperitoneal injections (i.p.) of 1000 mg/kg MSG significantly increased nocifensive and headache-like behaviours compared to control (saline), which are attenuated by administration of the migraine abortive therapy sumatriptan 7 . In addition, this dose of MSG produces behavioural and physiological signs of nausea in rats. Headache-like and nausea behaviours occur in association with an increase in plasma glutamate and calcitonin gene-related peptide (CGRP) concentrations, supporting the notion that peripheral glutamate receptor activation may directly or indirectly (through CGRP-mediated mechanisms) contribute to headache pathogenesis 1,7 . Nevertheless, the specific excitatory amino acid receptors through which peripheral glutamate contributes to headache generation in this model have not been identified.
We hypothesized that peripheral glutamate receptor activation contributes to nocifensive and headache-like behaviour in this rat model of migraine. In the present study, we utilized (2R)-amino-5-phosphonovaleric acid (APV), a selective competitive NMDA receptor antagonist, and kynurenic acid (KYNA), a non-competitive NMDA and competitive AMPA/kainate receptor antagonist, in the MSG-induced headache model to determine which type of peripheral ionotropic glutamate receptors may be responsible for the behaviours described above. These glutamate receptor antagonists were specifically chosen because they have very poor central nervous system penetration [8][9][10] . We also investigated whether the downstream effects of peripheral glutamate receptor activation on headache-like and nocifensive behaviours involve CGRP receptor activation, by utilizing the CGRP receptor antagonist olcegepant. Finally, we validated MSG-induced nausea behaviour through the use of the antiemetic drugs metoclopramide (MTC; dopamine receptor antagonist) and ondansetron (OND; serotonin 5-HT3 antagonist).
Systemic administration of KYNA, APV and olcegepant does not alter motor co-ordination or balance. To investigate whether systemically administered ionotropic glutamate receptor antagonists (KYNA and APV) or the CGRP receptor antagonist (olcegepant) produced any central nervous system effects, we assessed motor function and balance after administration using an accelerated rotarod protocol (ACP; Fig. 4a). There was no significant effect of treatment, time, or time/treatment interaction for rotarod latency, distance, or speed (RPM) after treatment (p > 0.05). Neither KYNA (100 mg/kg; i.p.), APV (50 mg/kg; i.p.) nor olcegepant (1 mg/kg; i.p.) reduced latency, distance travelled or speed when compared to baseline in the ACP (Fig. 4b). Thus, there was no evidence of loss of motor coordination or balance dysfunction following treatment with KYNA, APV or olcegepant.
Discussion
In this study we examined whether peripheral glutamate receptor activation contributes to changes in nocifensive and headache-like behaviours in a rat model of headache with migraine-like symptoms. We found that MSG induces nocifensive and headache-like behaviours principally through activation of peripheral NMDA receptors. This interpretation is supported by the significant inhibitory effect of APV on headache-like behaviours, and by the finding that the ionotropic glutamate receptor antagonist KYNA was no more effective at reducing these behaviours. Additionally, MSG-induced nocifensive and headache-like behaviours were partly mediated through activation of CGRP receptors, as they were attenuated by olcegepant, a selective CGRP receptor antagonist. This finding bolsters the argument that systemic administration of MSG in rats is a valid model of migraine-like headache, as olcegepant has been shown to be an effective migraine abortive drug in humans and other preclinical models of migraine [11][12][13][14] . It also suggests that elevated CGRP levels contribute to the changes in behaviour observed after MSG administration 7 . The suppression of normal exploratory behaviour (rearing) is likely secondary to ongoing glutamate-induced pain, as it was also attenuated by the selective NMDA receptor antagonist APV. While KYNA at higher doses was able to suppress MSG-induced nocifensive and headache-like behaviours, a dose-response relationship for these effects was not identified in the present study. KYNA is an endogenous noncompetitive antagonist of the NMDA receptor that only poorly penetrates the blood-brain barrier, which suggests that its effects are mediated peripherally. In chronic migraineurs, serum levels of KYNA are reduced, which has been suggested to contribute to overactivity of NMDA receptors in migraine 7,15,16 . Based on these findings, it has been proposed that inhibition of NMDA receptor activation may be effective for migraine prophylaxis, although inhibition of central nervous system NMDA receptors by increasing KYNA in the brain has been the focus. Indeed, both preclinical and phase 1 studies have explored the utility of the KYNA precursor kynurenine in this regard 17,18 . Our results indicate that doses of 50 mg/kg or higher of KYNA were effective in the MSG headache model without causing centrally mediated effects. This suggests KYNA may have potential as a peripherally restricted migraine prophylactic therapy.
As previously reported, only male rats exhibited significant LOB behaviour, which is thought to reflect ongoing nausea and is taken as a valid indication of ongoing nausea in this model 7. While there was no effect of glutamate receptor antagonists on this behaviour, olcegepant did significantly attenuate LOB, suggesting that CGRP receptor activation partly underlies it. MTC, an antiemetic drug which is also used as a migraine abortive agent, was found to attenuate spontaneous head-flick behaviour, adding further evidence that this behaviour is a valid representation of headache in rats. These findings support the conclusion that administration of 1000 mg/kg MSG to Sprague Dawley rats produces several migraine headache-like symptoms, only some of which can be attenuated by inhibition of peripheral glutamate receptors. Previously we showed that systemic administration of MSG (1000 mg/kg, i.p.) induces dose- and time-dependent nocifensive and spontaneous headache-like behaviour that was associated with a significant increase in peripheral CGRP levels 7. This supports the theory (Fig. 3 of reference 1) that elevated blood glutamate concentrations initiate headaches by directly sensitizing dural afferent fibers, and indirectly through CGRP (and possibly substance P) release from dural afferent endings to induce vasodilation 1,19,20. This effect is a peripheral phenomenon, as MSG administration does not significantly elevate CNS glutamate levels 21. Previous research indicates that APV attenuates activation of trigeminovascular neurons by systemically administered MSG 6. The current study provides additional evidence that the analgesic effect of APV is peripheral, as no evidence of motor dysfunction or altered coordination was uncovered with the dose employed. As blood glutamate levels are also elevated during and after a migraine headache, this finding suggests that peripheral NMDA receptors could be a potential target for novel migraine abortive drugs 3,4.
Consistent with our previous findings, we found that systemic administration of MSG (1000 mg/kg, i.p.) induces nausea-like behaviour in a sexually dimorphic manner, such that male rats display longer lying-on-belly behaviour than female rats 7. Oral administration of MSG to healthy human subjects results in consistent reports of nausea 20. This nausea may be induced through activation of gastric vagal afferents and/or the ability of glutamate to excite area postrema (chemoreceptor trigger zone) neurons 22. Mechanoreceptive gastric vagal afferent fibers increase their firing rate in response to glutamate 23. This may lead to a gastric distension-like sensation that results in the appearance of LOB behaviour as an attempt by the rats to modulate it. Gastric distension-associated excitation of vagal afferents is mediated by non-NMDA receptors, as previous work demonstrated its inhibition by the AMPA/kainate receptor antagonist CNQX, but not by APV 24. Consistent with a lack of effect of APV on vagal afferent fibers, we found that APV had little effect on MSG-induced LOB behaviour. However, KYNA, which is a mixed excitatory amino acid receptor antagonist, appeared to slightly increase LOB behaviour. The increase in vagal afferent discharge induced by glutamate has been found to be inhibited by the 5-HT3 receptor antagonist granisetron, which suggests it may occur secondary to a local elevation of serotonin rather than through a direct action of glutamate on the vagal afferent fibers 25.
The antiemetic agents MTC and OND, both of which have 5-HT3 antagonist activity, did exert an inhibitory effect on MSG-induced LOB behaviour. Both MTC and OND have central nervous system actions, and thus may also act in the area postrema or elsewhere in the central nausea generator to decrease nausea. Interestingly, we found that olcegepant was also effective in inhibiting LOB behaviour. A recent systematic review and meta-analysis found strong evidence for the anti-nausea efficacy of gepants in episodic migraine 26. Thus, the nausea behaviour induced by MSG administration responds to the same drugs that are found to be effective for the treatment of nausea in migraine. In this study we found that both MTC and OND were also effective in attenuating spontaneous headache-like behaviour. While MTC is used to abort migraines 27, OND, though it has been shown to be effective in certain other types of headaches, is not usually used for migraine 28. However, activation of 5-HT3 receptors has been shown to excite both dural and cranial muscle afferent fibers 29. Thus, we speculate that part of the mechanism by which MSG produces headache-like behaviour in rats may result from increased peripheral serotonergic tone.
We note that one limitation of our studies was that the dose of MSG (1000 mg/kg) used to produce headache and nausea behaviours in rats is greater than that reported to induce headache and nausea in humans (150 mg/kg) 2. Nevertheless, far greater doses of the CGRP receptor antagonist olcegepant and of the antiemetic drugs MTC and OND were also required to attenuate these behaviours in rats compared to humans, indicating that this model is potentially translatable to humans. Future research is required to address these limitations. Considerable variability in the magnitude of the behavioural response to MSG was also noted when different groups of rats in this study were compared. We speculate that this variability is explained by individual differences in the rats' responses to MSG. A similar variability in response to MSG ingestion is observed in healthy humans, where only between a third and a half report nausea and headache, respectively 20.
Despite advances in migraine therapies, many people with migraine do not attain adequate relief or are resistant to available treatments 30. There is, therefore, a need for additional prophylactic and abortive migraine therapies. Clinical evidence suggests a relationship between elevated plasma glutamate levels and migraine headache 1,3. Ketamine and memantine, which are NMDA receptor antagonists, have been investigated for migraine treatment and prophylaxis, respectively [31][32][33]. Several studies have also indicated that ketamine and AMPA receptor antagonists (LY293558; BGG492) were effective as abortive therapies in migraine with aura or familial hemiplegic migraine 31,[34][35][36]. Despite their clinical efficacy, centrally mediated adverse side effects have limited wide-scale use of these glutamate receptor antagonists for migraine pharmacotherapy 37. The findings of our study suggest that peripherally restricted NMDA receptor antagonists may offer an alternative pathway for the development of prophylactic and/or abortive therapies.
The present study indicates that inhibition of peripheral NMDA receptors attenuates nocifensive and headache-related behaviours produced by systemic administration of MSG in rats. MSG-induced nausea behaviour does not appear to be mediated by peripheral glutamate receptors but is sensitive to established antiemetic agents. Altogether, these findings further validate the MSG model of migraine-like headache and identify peripheral NMDA receptors as a future drug target.
Methods
Animals. Male (n = 15) and female (n = 9) Sprague Dawley rats (weighing 250-275 g at the start of the experiment) were used for these experiments. Rats were procured from Charles River Laboratories and housed in groups of 2-3, under a light-dark cycle (lights on at 7:00 h, off at 20:00 h) in a temperature-controlled environment (20-25 °C) with free access to food and water. Experiments were specifically designed to minimize the number of rats used. For behavioural experiments, rats were acclimatized to the experimental setup and procedures for a minimum of 5 days prior to the initiation of experiments. Rats were considered appropriately acclimatized when stable baseline mechanical thresholds (MTs) were obtained from stimulation of the temporalis muscle region (as described below). Rats were weighed daily. All animal experiments were approved by the Animal Care Committee of the University of British Columbia (A19-0174) and adhered to the ARRIVE guidelines 2.0. All methods were carried out in accordance with the guidelines and regulations of the Canadian Council on Animal Care. Behavioural assays. Assessment of nocifensive, headache-like and nausea behaviours. Before each experiment, rats were allowed to acclimate to the testing area for 30 min. Rats were individually video recorded for 10 min before treatment (pre-injection video) and for a total of 1 h after treatment (post-injection video). The resultant mp4 video recording files were assigned to two blinded assessors for quantification of headache-like and nausea behaviours. Four distinct categories of non-evoked behaviour were assessed as previously described:
1. Nocifensive-like behaviour: grimace score 38,39.
2. Headache-like behaviour: head-flick (characterized as stereotypic, rapid and arrhythmic vertical twitching of the head) 40.
3. Normal exploratory and grooming behaviours: rearing (front paws lifted from the ground and placed on the walls of the chamber), head scratches (movement patterns where the forehead was itched using the front or hind paws) and facial grooming (movement patterns where the facial areas are touched using the front or rear paws).
4. Nausea behaviour: lying on belly (LOB; pressing the abdomen between the fore paws and hind paws onto the floor) 41,42. Lying-on-belly behaviour is produced by the known nausea-inducing agent lithium chloride, and responds to treatment with the antiemetic ondansetron (OND) 41,42.
The rat grimace score was assessed from frames captured at 10 min in the baseline video, and then at 10, 20, 30, 40, 50 and 60 min in the post-injection videos 38,39. Frames were not captured when the rat was sleeping, grooming or sniffing. If no clear photo could be captured during any period, then those time points were omitted from analysis. From the captured frames, an unobstructed view of the face was cropped to exclude body position.
The number of head-flick events was assessed for each 10-min epoch. The total duration of the exploratory, grooming and LOB behaviours in seconds was also assessed for each 10-min epoch. Exploratory and grooming behaviours post-injection (P1 = 0-10 min, P2 = 10-20 min, P3 = 20-30 min, P4 = 30-40 min, P5 = 40-50 min and P6 = 50-60 min) were normalized to baseline (B = 0-10 min) activity. Where the values for behaviours scored by the two assessors were significantly different, a consensus value was agreed upon for use in the final analysis (intraclass correlation coefficient r = 0.83; p = 0.008).
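A minimal sketch of this epoch normalization, assuming a simple pandas long-format table (the column names and values below are hypothetical illustrations, not the study's data or analysis code):

```python
# Normalize each post-injection 10-min epoch to the baseline epoch, as described above.
import pandas as pd

# Hypothetical long-format table: one row per rat x epoch with the scored duration (s)
df = pd.DataFrame({
    "rat":      ["r1"] * 7,
    "epoch":    ["B", "P1", "P2", "P3", "P4", "P5", "P6"],
    "duration": [120.0, 60.0, 45.0, 80.0, 95.0, 110.0, 118.0],  # e.g. rearing time
})

baseline = df.loc[df["epoch"] == "B"].set_index("rat")["duration"]
post = df.loc[df["epoch"] != "B"].copy()
# Each post-injection epoch (P1-P6) expressed as a fraction of baseline activity
post["normalized"] = post["duration"] / post["rat"].map(baseline)
print(post[["epoch", "normalized"]])
```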
Mechanical withdrawal threshold (MT). The MT of the temporalis muscle region was assessed using a rigid electronic von Frey hair (IITC Life Sciences, Woodland Hills, CA, USA) to evoke a withdrawal response as previously described 7,43. At each time point, the MT was obtained from the average of five successive mechanical stimulations alternating between the left and right temporalis muscle regions. Post-injection MTs were obtained every 10 min post i.p. injection, and the relative MT was then calculated by dividing each post-injection MT by the baseline MT. The experimenter conducting the withdrawal assessments was blinded to the identity and order of the treatments. Accelerated rotarod protocol (ACP). Rats were trained over 6 days (3 sessions) until they were able to stay on the rod rotating at 40 RPM for at least 300 s for a total of three trials separated by 15-min intertrial intervals. On experimental days, baseline latency was recorded as described above. Rats were then placed again on the rod (maximum: 300 s) and the post-injection latency recorded at 5, 20 and 35 min post-injection. Subjects that fell from the rod upon placement, but prior to pressing the start button, were given a "0" score for that trial (0 s latency to fall) and returned to their cage until their next consecutive trial. Rats that maintained their balance on the rotarod for the maximum time of 300 s were removed and placed into their cage until the next consecutive trial. Trials were separated by 48-h intervals.
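The relative MT computation described above is simple arithmetic; a minimal sketch with hypothetical gram-force readings (not the study's data) is:

```python
# Relative MT = mean of five alternating stimulations, divided by the baseline MT.
baseline_readings = [55.0, 60.0, 58.0, 62.0, 57.0]     # grams-force, L/R alternating
post_10min_readings = [30.0, 28.0, 35.0, 31.0, 29.0]

baseline_mt = sum(baseline_readings) / len(baseline_readings)
post_mt = sum(post_10min_readings) / len(post_10min_readings)

relative_mt = post_mt / baseline_mt   # < 1 indicates mechanical hypersensitivity
print(f"relative MT at 10 min: {relative_mt:.2f}")
```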
Experimental design. A transparent plexiglass chamber (8" × 8" × 8") was equipped with mirrors behind the left, right and posterior walls, and a digital video recording camera was placed 10 inches away from the anterior wall. The chamber was large enough to allow rats to roam freely and unhindered. Rats were acclimatized to the chamber by individually placing them into the chamber for 30 min each day for five days prior to experimental days. Experiments were conducted between 9:00 and 17:00 each day. The chamber was thoroughly cleaned with 70% isopropyl alcohol between each rat. Each rat was given a 48-h rest period between experiments. Previous work in healthy humans who were administered MSG 150 mg/kg daily for 5 days found no significant change in pain ratings or temporalis mechanical sensitivity at the end of 5 days compared to the first day 20. Thus, given the apparent half-life of glutamate (30 min), we determined that a dosing interval of 48 h would minimize the risk of sensitization to MSG 7,45.
Glutamate and CGRP receptor antagonists. To investigate the effects of glutamate and CGRP receptor antagonists on MSG-induced nocifensive, headache-like and nausea behaviours, 6 male and 6 female rats were individually placed in the chamber and video recorded for 10 min (baseline recording) and then given an i.p. injection of either MSG 1000 mg/kg alone or MSG 1000 mg/kg in combination with either KYNA (10, 50 or 100 mg/kg), APV (50 mg/kg) or olcegepant (1 mg/kg) 6,11,46. Immediately after injection, the rat was placed into the chamber and video recorded for 1 h (post-injection recording). MTs were also taken at baseline and every 10 min post-injection as described above. Each rat was tested every 48 h until it received all treatments in the protocol. The experimenter and assessors were blinded to the identity and order of the treatments.
Serotonin and dopamine receptor antagonists.
To investigate the effects of serotonin and dopamine receptor antagonists on MSG-induced nausea- and headache-like behaviours, a separate cohort of 6 male rats was tested using the same experimental paradigm described in Study 1. Male, but not female, rats were examined in these experiments, as male rats have previously been shown to have the most robust nausea behaviour in response to MSG administration 7. Rats were individually placed in the chamber and video recorded for 10 min (baseline recording) and then given an i.p. injection of either MSG 1000 mg/kg alone or MSG 1000 mg/kg in combination with either MTC (3 mg/kg) or OND (0.5 mg/kg). Immediately after injection, the rat was placed into the chamber and video recorded for 1 h (post-injection recording). MTs were also taken at baseline and every 10 min post-injection as described above. Each rat was tested every 48 h until it received all treatments in the protocol. The experimenter and assessors were blinded to the identity and order of the treatments.
Data and statistical analysis. Sample size was based on a previous study in which significant effects of MSG administration on behaviours were demonstrated in groups of 6 rats 7. Statistical analysis was performed using GraphPad Prism (version 9.1.1). Normality was assessed using the Shapiro-Wilk normality test. Where data were non-normally distributed, a square-root (SQRT) transformation was used to achieve normally distributed data with equal variances. Data were analysed using repeated-measures two-way mixed-model ANOVAs with post-hoc Dunnett's tests (time and treatment as factors). Sex-related differences were analysed by repeated-measures two-way mixed-model ANOVA with post-hoc Dunnett's test (sex and time as factors). For all analyses, p < 0.05 was considered statistically significant. Data in the text are reported as mean ± SEM (standard error of the mean).
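A minimal Python sketch of this pipeline (the authors used GraphPad Prism; the file name, column layout, and library choices below are assumptions for illustration only):

```python
import numpy as np
import pandas as pd
import pingouin as pg
from scipy import stats

df = pd.read_csv("behaviour_scores.csv")   # hypothetical file: rat, treatment, time, score

# 1. Normality check; square-root transform if violated
if stats.shapiro(df["score"]).pvalue < 0.05:
    df["score"] = np.sqrt(df["score"])

# 2. Two-way mixed-model ANOVA: time (within-subject) x treatment
#    (treatment is modeled between-subjects here for simplicity)
aov = pg.mixed_anova(data=df, dv="score", within="time",
                     between="treatment", subject="rat")
print(aov)

# 3. Post-hoc Dunnett test vs. the MSG-alone control at one time point (scipy >= 1.11)
t10 = df[df["time"] == 10]
groups = [g["score"].to_numpy() for name, g in t10.groupby("treatment") if name != "MSG"]
control = t10.loc[t10["treatment"] == "MSG", "score"].to_numpy()
print(stats.dunnett(*groups, control=control))
```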
Data availability
The data that support the findings of this study are available from the corresponding author, BEC, upon reasonable request.
Recreation and Tourism Service Systems Featuring High Riverbanks in Taiwan
Taiwan's cities exhibit high levels of urbanization, which has resulted in limited recreation space in urban areas. In response, government policies have been enacted to promote the large-scale greening of rivers in urban areas and the establishment of aquatic recreation areas that do not interfere with water flow areas, pavilions for recreation purposes, indoor stadiums, and biking lanes alongside riverbanks to provide citizens with recreation space. An expert team was convened to investigate 50 riverside recreation sites, and the Comfortable Water Environment Rest Assessment Form was devised. The investigation results revealed three factors that contribute to the value of riverside recreation sites; together, the three factors had a total explanatory power of 70.17%. The factors, namely exercising and leisure, overall design plan and entrance image, and environmental maintenance and service, had explanatory powers of 25.52%, 23.32%, and 21.32%, respectively. Finally, this study provides guidance for constructing service systems for riverside recreation sites by referencing practical cases. This study suggests that future designs treat the characteristics of visitors as the main consideration when investing resources in recreation sites. In addition, more exercise and recreation equipment and facilities should be provided at recreation sites located within highly populated areas. For recreation sites that feature beautiful scenery, a greater degree of overall design planning and entrance image quality can be integrated into the sites, and environmental teaching materials can be incorporated into the environment. Furthermore, this study suggests that residents who live near recreation sites form and operate volunteer groups to contribute to environmental maintenance and the relevant services; this would greatly enhance the overall comfort experienced by visitors to the recreation sites. Finally, this study provides guidance for low-intensity construction in high riverbank areas.
Introduction
The objective of this study was to improve the recreation and tourism services of high riverbanks. The constant expansion of cities has contributed to the diminishing green space available to the public. Hence, high riverbanks have become popular and attractive sites for recreation. The functions of high riverbanks in urban areas include preventing floods, storing water, removing pollutants, protecting and enhancing aquatic ecology and ecosystems, stabilizing river flow, extending the lag time for floods, protecting civil engineering structures on both sides of the bank, lowering flood potential on both banks, being transformed into constructed wetlands, increasing urban green space, promoting biodiversity, and enhancing environmental impact tolerance. Human expectations for the use of river spaces have resulted in a variety of high riverbank functions. Studies have ascertained the benefits of these functions [1][2][3][4][5][6][7][8][9][10][11][12][13]. In Taiwan, the dry period for seasonal rivers can be as long as seven or eight months, and river governance measures depend on whether water levels are high or low. High water levels only occur during floods and are rare, whereas a low water level represents the basic condition of the river, which is typically conducive to ecosystem functions. To protect the flood prevention function of rivers, relevant regulations can be established to regulate the construction intensity in river regions. However, no construction methods exist that simultaneously meet the demands of protecting natural ecosystems and environments, developing river space for water recreation experiences, and integrating historical cultural monuments with the natural landscape. Therefore, only conceptual construction methods can be established [14,15].
However, given the unique characteristics of high riverbanks, many limitations apply to the development of such sites, including restrictions on the depth and width of the development and the requirement to use low-rise buildings and dwarf plants. The use of a high riverbank must therefore be carefully designed, planned, and maintained. First, high riverbanks in urban areas are connected via various transportation channels and located near convenient transportation systems, which enables them to draw crowds. These riverbanks allow local residents to take walks, provide children with play areas, provide space for parks and exercise equipment, and give citizens space to exercise and engage in recreational activities (e.g., biking); thus they are suitable for citizens of all ages. By contrast, tourist visitors favor unique exercise facilities and scenic designs, such as baseball stadiums, bike lanes, suspension bridge landscapes, riverbank scenic landscapes, and environmental education installations. Therefore, designs of high riverbank areas in urban settings should aim to offer the aforementioned accommodations. Regarding the cleaning and maintenance of high riverbanks in urban areas, the public sector should take responsibility for maintenance tasks, including repairing hardware and maintaining mechanical operations (e.g., water gate maintenance). Since high riverbanks are located near communities, both cleaning personnel from the public sector and local residents are responsible for environmental protection tasks; indeed, a large proportion of maintenance is conducted by local residents [16,17]. In 2000, ecosystem engineering was introduced for the governance of rivers [15,18]. Discussions are ongoing regarding how to increase biological habitats, considerably reduce maintenance costs and workload, and enhance the application of ecological construction methods without affecting the flood prevention strength of the riverbanks. Furthermore, climate change and increased episodes of extreme rainfall are disadvantageous to the development of riverbank spaces and increase the difficulty of governing urban drainage and rivers. These issues have become popular research topics in recent years [19,20].
To promote exercise and recreational activities, Taipei City has endeavored to improve the water quality of the Tamsui River (goal: a dissolved oxygen level above 2, such that the river does not smell) and promoted the construction of facilities on both riverbanks, including riverside bike lanes, exercise and recreational facilities, playgrounds suitable for all ages, and riverbank scenic landscapes [15]. Through this, the riverbank space has been transformed into a multipurpose leisure and recreation space featuring recreational entertainment, ecological conservation, and age-inclusive properties. Visitors may take rapid public transit to the riverbank, rent bicycles from a bike-sharing system, and then ride to the riverbank ecosystem observation section. On such a journey, visitors may experience a natural sense of comfort, making a visit to the riverbank attractive. From river governance to creating a connection with the river, Taipei City has converted the high riverbank into an ecological classroom and playground, thereby transforming the high riverbank of the Tamsui River into a hotspot for leisure outings and encouraging the economic development of relevant industries.
No effective quantitative method for addressing the aforementioned developmental limitations exists in Taiwan or any other country, although countries worldwide have abundant experience with river restoration, have combined river development with recreation and tourism, and have even integrated river development with urban functions [21][22][23][24]. On the basis of Taiwan's rich experience in developing high riverbanks, this study proposes an analytical scale for a comprehensive quantitative analysis of such development and engages in a qualitative discussion before providing constructive methods and procedures for high riverbank development.
Onsite Investigation
The researchers conducted field research and visited the selected investigation sites to experience and observe the leisure areas. The main investigation team comprised three researchers, two families with parents and children (the youngest family member being 2 years old), two cycling enthusiasts, and two older adults. The research team comprised Master's students and project assistants familiar with the basics of the water environment. The families consisted mostly of environmental protection volunteers who possessed environmental protection knowledge and experience traveling with children. The cyclists were bicycle enthusiasts with more than 10 years of experience cycling or leading cycling groups. The senior team comprised retired couples with a passion for environmental protection who were hired to assist in the survey. During the investigation, the team walked and rode bicycles while wearing GPS-enabled devices to record their traveled routes, elevation gain/loss, and slope angle. In addition, the investigation team photographed the vegetation conditions of each site. Investigation of each site lasted at least 2 h. The survey was conducted at each site during daytime, divided into three time periods: 11 AM to 1 PM (period A), 2 PM to 4 PM (period B), and 5 PM to 7 PM (period C, added on weekdays only), on each weekend. This survey design aimed to maximize the number of survey participants (please see Table 2). During the investigations, the team observed the number of hourly visitors, the ecological system around the traveled routes and bicycle lanes, the recreational scenic sites, the resting areas, and the visitor service centers. In addition, the team personally experienced and observed the weather at the investigation sites to determine whether each site was frequently sunny or rainy; each investigation ended if it rained. Prior to visiting, the researchers collected information from announcements by government tourism agencies and relevant websites established by tourism enthusiasts to understand the bike paths and maps of the investigation sites as well as to reference the recreational experiences of other netizens. The investigation period was between July and October 2019, during the summer vacation for students in Taiwan, prior to the outbreak of the COVID-19 pandemic and during a time when life was normal. The number of tourists surveyed reflected the popularity of the recreation and tourism sites. For this study, the research team made 734 onsite investigation visits. However, counts taken in the same time period at different recreation sites may not share the same benchmark. Additionally, this study was limited in that it did not distinguish between tourists (out-of-town) and local residents.
Establishment of the Comfortable Water Environment Rest Assessment Form
The study team held five workshops on becoming familiar with water environments and cherishing river and water resources. The workshops were chaired by five experts, with 40 enthusiasts invited to attend and engage in discussions with the experts. Subsequently, the researchers referenced suggestions from the relevant literature and formulated the Comfortable Water Environment Rest Assessment Form (CWERAF; Table 3) [25][26][27][28][29][30][31]. By referencing each variable in the CWERAF, the researchers inspected the advantages and disadvantages of each recreational site. The researchers explained the principles of each index and the scoring method to each investigator to ensure that the indices were scored under the same principles. After completing an investigation, the investigators convened a group meeting to discuss and confirm the scoring of each index. When the investigators' opinions diverged, they conducted a discussion using the Delphi method to ensure scoring consistency [32]. During the meetings, all participants were asked to provide written comments and rate each item anonymously. Everyone then engaged in a group discussion on parameters with excessively large variances. Subsequently, a second round of rating was conducted, followed by a group discussion. Gathering feedback from experts in multiple rounds is a critical element of the Delphi method. After the fourth round, the host of the discussion session, a role assumed in turn by Researchers 1, 2, and 3 (the host had to be a surveyor of the particular recreation site under discussion), finalized the ratings of parameters on which the surveyors had reached agreement and initiated discussion on those rated inconsistently. The process was repeated until a consensus was reached on the ratings of all parameters. In this study, the ratings of all recreation sites were completed in the fourth round, which also served as a quality control (QC) measure for this study.
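A minimal sketch of the between-round consensus check in this Delphi procedure, assuming a simple variance threshold (the threshold and ratings below are illustrative assumptions, not the study's values):

```python
# Flag items whose anonymous ratings vary too much to count as consensus.
import statistics

def items_needing_discussion(ratings_by_item, max_variance=0.5):
    """Return item names whose rating spread exceeds the consensus threshold."""
    flagged = []
    for item, ratings in ratings_by_item.items():
        if statistics.pvariance(ratings) > max_variance:
            flagged.append(item)
    return flagged

# Hypothetical round of anonymous ratings (one list per CWERAF index)
round_ratings = {
    "CW1_aquatic_zone": [4, 4, 5, 4],   # close agreement -> finalized
    "CW7_info_boards":  [1, 4, 2, 5],   # large spread -> discussed in next round
}
print(items_needing_discussion(round_ratings))   # ['CW7_info_boards']
```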
The indices, which function as quantified measurement instruments, are divided into functional, service, and overall design dimensions. The functional dimension comprises aquatic zones, bike paths, exercise equipment, and environmental cleanliness and maintenance conditions; the service dimension comprises the service center, entrance image, information boards, parking lots, and mass transportation accessibility; and the overall design dimension comprises overall design planning and the number of hourly visitors.
After the investigations, the researchers labeled each answer option with sequential scores and used Ridit analysis to compute the scores for each site, which were then used for multivariate analysis [33].
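A minimal sketch of Ridit scoring, assuming the standard definition (the ridit of an ordinal category is the proportion of the reference distribution below that category plus half the proportion within it); the frequencies below are hypothetical:

```python
def ridit_scores(frequencies):
    """Ridit for each ordered category given its frequency in the reference set."""
    total = sum(frequencies)
    ridits, below = [], 0.0
    for f in frequencies:
        ridits.append((below + 0.5 * f) / total)
        below += f
    return ridits

# Hypothetical counts of sites choosing options 1..4 on one CWERAF index
print(ridit_scores([10, 20, 15, 5]))   # [0.1, 0.4, 0.75, 0.95]
```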
Multivariate Analysis
Multivariate statistical processing, also known as multivariate analysis, requires considerable amounts of computation. Since the popularization of high-speed computers, the field of multivariate analysis has grown exponentially. Multivariate analysis encompasses various subfields, including multiple regression analysis, discriminant analysis, analysis of covariance, multidimensional scaling analysis, and cluster analysis [34].
Exploratory factor analysis primarily evaluates the number of underlying variables (i.e., factors) within a set of observed variables. The analysis process is as follows: First, we hypothesize that the set of observed variables is affected by a common factor and calculate their correlation with said factor. Subsequently, we exclude the correlation value and search for the next factor that can explain the remaining covariance relationship, repeating the process until all variation is explained. At this stage, the number of extracted factors equals the total number of observed variables. However, because most factors do not have high explanatory power, various methods are used to determine the number of factors to retain, such as retaining only factors with an eigenvalue > 1, which indicates that the factor has meaningful explanatory power [34,35].
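A minimal sketch of the eigenvalue-greater-than-1 (Kaiser) retention rule, with random data standing in for the 50-site × 10-index matrix; the factor_analyzer package is one common choice, not necessarily the tool used in this study:

```python
import numpy as np
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))          # placeholder for the Ridit-scored indices

fa = FactorAnalyzer(n_factors=10, rotation=None)
fa.fit(X)
eigenvalues, _ = fa.get_eigenvalues()
n_factors = int(np.sum(eigenvalues > 1))   # Kaiser criterion: keep eigenvalues > 1

# Re-fit with the retained factors and a varimax rotation, as in the rotated loadings
fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax")
fa.fit(X)
print(n_factors, fa.loadings_.shape)
```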
In cluster analysis, cluster classification is conducted on samples comprising multiple variables, and samples with similar properties and characteristics are classified into the same cluster. The results are presented in a tree-shaped structure highly similar to a phylogenetic tree. Samples with higher correlation are connected first to form small clusters; subsequently, small clusters are connected with other small clusters or unconnected samples in sequence according to their correlation. The process is repeated until a single cluster is formed. When the tree is complete, the relationships between the samples become apparent. The number of clusters classified in this process is determined by the set likelihood value [36].
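A minimal sketch of this agglomerative clustering with a fixed-count tree cut, using SciPy's hierarchy module; the random data and the choice of Ward linkage are assumptions for illustration:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
scores = rng.normal(size=(50, 3))            # placeholder: 3 factor scores per site

Z = linkage(scores, method="ward")           # tree: the closest sites merge first
labels = fcluster(Z, t=3, criterion="maxclust")   # cut the tree into 3 clusters
print(np.bincount(labels)[1:])               # number of sites per cluster
```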
Logistic regression resembles the linear regression model. Regression analysis describes the relationship between a dependent variable and one or more predictor variables. In ordinary regression analysis, the dependent and independent variables are generally continuous; by comparison, logistic regression analysis is used when the dependent variable is discrete [37]. In particular, logistic regression analysis is generally used for dichotomous classification, such as "agree or disagree" and "succeed or fail." The purpose of logistic regression is to establish the simplest model with the highest fit. When applied to practical and reasonable models, the established model can be used to predict the relationship between a dependent variable and a set of predictor variables. Logistic regression analysis differs from other multivariable analysis methods because it does not require the assumption of a distribution type. In a logistic distribution, the relationship between a dependent variable and one or more independent variables follows a logistic function, so the assumption of normal distribution is unnecessary; however, if the predictor variables are normally distributed, the logistic regression results are more reliable. In logistic regression analysis, independent variables can be categorical or continuous. We employed logistic regression analysis to analyze the correlation between the number of visitors and the other variables of the recreational environment [38].

Table 4 displays the total index scores and numbers of hourly visitors for the 50 investigation sites. The CWERAF comprises 11 indices, with the highest total score being 10 points. For sites with over 100 and approximately 20 hourly visitors, the total index scores ranged from 7.6 to 8.8 and from 2.8 to 6.4, respectively. Higher total index scores indicate that the software and hardware of the service system are more attractive, functional, or otherwise preferable, with the opposite being true for sites with lower total index scores. The differences between total index scores are reflected in the numbers of hourly visitors.
Analyzing Influential Factors of High Riverbank Recreational Areas
A. Exploratory Factor Analysis. Table 5 presents the results of computing the 10 variables of the 50 investigation sites with a correlation matrix. The table reveals that the correlations of four sets of variables achieved the 0.05 significance level (i.e., CW1-CW9, CW4-CW5, CW4-CW9, and CW5-CW6) and the correlations of 15 sets of variables achieved the 0.01 significance level (CW1-CW2, CW1-CW8, CW1-CW10, CW2-CW8, CW2-CW9, CW3-CW4, CW3-CW6, CW3-CW7, CW3-CW9, CW5-CW7, CW6-CW7, CW6-CW9, CW7-CW9, CW8-CW10, and CW9-CW10). The other correlations between variables did not exhibit statistical significance. Using the Kaiser-Meyer-Olkin (KMO) test, we acquired a KMO value of 0.613 (>0.5), signifying the existence of underlying factors. Therefore, we conducted factor analysis, reduced the variables, and ran a regression on the principal component factors to understand the formation of the underlying factors [34]. Table 6 displays the computed explained variance and factor extraction results. When the eigenvalue of the unrotated component loading matrix was >1, three components had a total of 70.17% explanatory power over the data, with the highest explanatory power of a single component being 29.00%, followed by 25.32% and 15.84%. After rotation, the total explanatory power of the three components was identical to that before rotation, with the highest explanatory power of a single component being 25.52%, followed by 23.32% and 21.32%, indicating that the differences between the explained variances were reduced. Table 7 lists the loading of each component factor. Before rotation, several loadings (e.g., CW4) were similar across components and thus had low explanatory power; after rotation, the explanatory power of each variable was strengthened. Therefore, we adopted the component set of the rotated component loading matrix for reference. From the table, we extracted CW1, CW2, CW8, and CW10; CW3, CW4, and CW9; and CW5, CW6, and CW7 as the component variables for the first, second, and third factors, respectively.
We name the first factor the exercise and recreation factor. For recreational parks, exercise facilities should be prioritized and introduced in the site design, such as exercise equipment, ball courts, bike lanes, children's playgrounds, and aquatic zones. In addition, these sites should be located near transit stations and where public transportation is otherwise convenient, to attract residents and tourists. The second factor encompasses the overall design plan and entrance image, with an emphasis on developing features within the recreation area and providing comprehensive information on the area. When this factor is successfully implemented, tourists can identify the designers' intentions behind the recreation site design [34]. For example, integrating environmental and ecological education information and environmentally friendly designs into a recreation site can enhance visitors' impressions of and satisfaction with the site. The third factor is the environmental maintenance and service factor. The parking lots and environmental cleanliness of recreation sites directly determine visitors' first impressions. In addition, our findings indicate that the quality and quantity of information boards were negatively related to the third factor. After personally inspecting these information boards, we concluded that boards of inconsistent quality, and an excessive number of boards, made a site appear disorganized.
Among the recreation sites, sites DS-01, DS-11, DS-12, DO-01, and GP-01 had the top scores in the exercise and recreational factor. These parks are popular sites, hosting crowds on weekdays and weekends alike and featuring expansive park space, government-subsidized resources, and additional facilities and services to meet the demands of various types of visitors, whether children, adults, or older adults. In addition, these sites have a comprehensive set of exercise facilities that range in intensity to meet the needs of all visitors, thereby attracting residents and tourists to make visits for recreational activities.
Sites DS-01, DS-05, DO-01, LW-01, and LJ-01 are the top five in the overall design plan and entrance image factor, with three of these sites being among the most popular in Taiwan. Due to their beautiful scenery, the sites attract many visitors. Under the combined efforts of the public sector and local nongovernmental organizations, these sites exhibit comprehensive overall design plans and unique entrance images, which enable visitors to feel at ease. In particular, one of the five sites is an education center, where a small building has been built in which commentary and tour guide activities are provided. We observed that the local governments' efforts to create comfortable environments and to incorporate unique entrance images into site designs have been recognized and successful.
Finally, sites DS-01, DS-02, DS-05, EL-01, and BA-01 were the top five in the environmental maintenance and service factor. Valued by public agencies and maintained by volunteer groups, these sites have clean environments and frequently attract tourists. For example, sites DS-01 and DS-02 are not particularly scenic; both are riverbank parks located in urban areas. However, efforts to maintain a clean environment at both sites increase their overall comfort levels. By observing other cases, we concluded that it is insufficient to exclusively rely on the public sector to maintain the environment; a clean environment can only be maintained with the participation of local residents.
B. Feature clusters
After extracting the component factors, we acquired the component factors of each variable and the component factor percentages. However, given the limited research resources, the researchers had some difficulty determining which factors should be prioritized and could not determine whether efforts and resources should be invested exclusively into the factor with the highest explanatory variance or divided and invested into all three factors. This was a difficult decision because component factors and explanatory variance reflect the overall performance of the samples; such analysis concerns past experiences and does not constitute an actual prediction model. Therefore, we first conducted a cluster analysis to cluster the recreational sites and then discussed and analyzed how to enhance the quality of each site according to the basic conditions of each cluster.
First, we conducted a cluster analysis on the 50 high riverbank recreation sites in Taiwan. After clustering the recreation sites according to their features, the results revealed that the features of each cluster may have been caused by differences in funding or limitations imposed by external conditions. Therefore, a separate discussion of each cluster was deemed a more effective method. Figure 2 displays the cluster analysis results. After clustering the recreation sites into three main clusters, sites DS-01, DS-05, DO-01, and LJ-01 formed one cluster. The main features of this cluster are "popular scenic site" and "high service quality," which enable these recreation sites to attract many visitors. Site DN-01 formed another cluster; the recreation site consists of a geological landscape featuring the exposed sedimentary layer of a high riverbank. The site has unique scenery but lacks fundamental facilities and thus represents a risky tourism site. Finally, the remaining 45 recreation sites constituted a cluster with various features, including exercise facilities, leisure facilities, scenic sites, and bike lanes.
Logistic Regression Analysis of Factors for Driving Site Popularity
By investigating the number of hourly visitors (CW11) for each recreation site, we determined the attractiveness of each recreation site based on the number of visitors received. To verify the driving relationship and intensity of the component factors for the popularity of the recreation sites, we used the extracted component factors to conduct logistic regression analysis. The analysis results are presented in Table 8. When the threshold for popular sites was set as "more than 100 hourly visitors (CW11 ≥ 100)," four recreation sites were classified as popular sites, whereas 46 sites were classified as unpopular. Logistic regression results indicated that under this threshold, factor one was a positive driving factor, thereby indicating the exercise and leisure factor is a principal component factor for popular recreation sites.
When the threshold for popular sites was set as "more than 50 hourly visitors (CW11 ≥ 50)," 18 recreation sites were classified as popular, whereas 32 were classified as unpopular. Logistic regression analysis results revealed that under this threshold, factors one, two, and three are positive driving factors. This signifies that exercise and leisure, overall design plan and entrance image, and environmental maintenance and service factors are principal component factors for popular recreation sites.
When the threshold for popular sites was set as "more than 20 hourly visitors (CW11 ≥ 20)," 31 recreation sites were classified as popular, whereas 19 were classified as unpopular. Logistic regression analysis revealed that under this threshold, factors two and three are positive driving factors, suggesting that the overall design plan and entrance image factor and the environmental maintenance and service factor are principal component factors for popular recreation sites. The areas under the curve for Equations (1)-(3) are 0.891, 0.951, and 0.944, respectively, all of which are greater than 0.5 and indicate the accuracy of the prediction models. In addition, the odds ratio gives the multiplicative change in the odds when the factor loading value is increased by 1.
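A minimal sketch of one such threshold model (CW11 ≥ 50) in scikit-learn; the data are random placeholders, and the in-sample AUC shown is only illustrative of the reported statistic:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
factors = rng.normal(size=(50, 3))            # placeholder factor scores per site
visitors = rng.integers(0, 150, size=50)      # placeholder hourly visitor counts
popular = (visitors >= 50).astype(int)        # dichotomize at the chosen threshold

model = LogisticRegression().fit(factors, popular)
auc = roc_auc_score(popular, model.predict_proba(factors)[:, 1])

odds_ratios = np.exp(model.coef_[0])   # change in odds per +1 in each factor score
print(odds_ratios, auc)
```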
Discussion
According to the factor analysis, cluster analysis, and logistic regression analysis, we devised two approaches to improve the use of high riverbank areas.
First, the main type of visitors for each high riverbank area should be clarified. In popular scenic sites, the number of tourists is greater than that of local residents. Relative to local residents, tourists have a higher demand for service quality. For riverbank parks in urban areas, the number of local resident visitors surpasses that of tourist visitors. Since local resident visitors have greater demand for exercise and leisure, these sites can improve their facilities to meet these demands. Furthermore, it is worth noting that due to Taiwan's declining birthrate, Taiwanese families are willing to spend and consume more for the benefit of their children [39,40]. Small families that include young children or older adults constitute a large proportion of visitors. Therefore, overly risky environments or activities may discourage these families from visiting. In addition, the overall design plan and entrance image factor can be enhanced for all recreation sites to strengthen resource allocation.
Second, beautiful scenery, expansive spaces, transportation convenience, and being close to nature are key attractions that prompt visitors to visit high riverbanks. In our study, riverbank parks in urban areas, popular hot springs recreation sites, bike lanes along riverbanks in urban areas, and sites with rural farmland scenery boast beautiful natural scenery and high popularity. Regarding the design of the environment and the quality of the facilities and services provided in riverbank parks, visitors are concerned with the planning of pedestrian and bike lanes, the pruning of plants and removal of fallen leaves, and the removal of weeds in grasslands. Since construction in high riverbank areas is limited, large trees are rare, resulting in little shade. Therefore, pedestrian and bicycle path planning should consider the demands of all age groups, which is a key influential factor for visitor comfort [41,42]. Taiwan has convenient railway and highway systems; visitors can take a train or drive to the vicinity of a scenic site and then ride a bicycle or walk to the recreation site. By enhancing the delivery and pick-up functions of mass transportation systems around recreation areas, more visitors may be attracted to engage in aquatic and low-carbon tourism activities at the recreation sites, thereby achieving the practical goal of green tourism. In addition, recreation sites should provide clear details on pedestrian and biking paths and path distances. Recreation sites that provide visitors with clear information enable visitors to engage in recreation activities with ease. Currently, Taiwan has introduced 5G technology and real-time augmented reality facilities at recreation sites to provide visitors with information [43][44][45].
In particular, the Wulai Old Street scenic area (DS-05) is a high riverbank area that exhibits a high Ridit score (8 points) and features a hot springs region. The site receives many visitors during the weekend, has a landscape bridge that extends over the riverbank, and features a shopping district beside the bridge. The environment of the recreation site, which is entirely maintained by local residents, is scenic and well kept. Therefore, this study posits that the involvement of volunteer groups is a key underlying factor, although quantifying this factor is difficult. This study only assessed one outcome of the recreation sites, namely the number of hourly visitors [16,17].
Conclusions
This study conducted a large-scale and intensive investigation to establish the CWERAF and to evaluate cases of Taiwanese riverbank recreational sites. Due to Taiwan's topography, which features steep mountains and rapid-flowing rivers, the government has difficulty utilizing Taiwan's water resources. To date, the government has constructed 109 multipurpose reservoirs, employed construction methods for river governance, and established many dams to control river water levels. The high riverbanks selected in this study provide recreation facilities for citizens year-round. Taiwan had not experienced large-scale, disastrous typhoons from 2017 to the time of this writing in 2020. As a result, Taiwan's high riverbanks have not flooded and their facilities have not been damaged, which has enabled the government to expand the riverbank facilities. We propose that the exercise and recreation factor, the overall design plan and entrance image factor, and the environmental maintenance and service factor underlie the use of Taiwan's high riverbanks. We also identified a future avenue of research: dividing visitors into residents and out-of-town tourists. Future research may focus not only on distinguishing tourists (out-of-town) from residents when calculating the number of hourly visitors and site attractiveness, but also on studying their motivations and leisure barriers in cycling [46]. Finally, the government can conduct cluster classification and visitor analysis and invest resources into these high riverbanks to enhance their recreation value.
Cumulative phenomena due to the collision of laser-driven projectiles related to fast ignition and astrophysical applications
The results of contemporary investigations of cumulative phenomena due to the collision of projectiles laser-accelerated up to velocities significantly higher than the sound velocity in solids are reviewed. The discussion is focused on applications directed at the development of a concept of fast ignition by projectile impact and the laboratory modeling of astrophysical phenomena. Low-entropy acceleration of a projectile in a dense state up to a 'thermonuclear' velocity of the order of 1000 km/s is substantiated. The parameters of the extreme matter state induced by the collision of laser-accelerated projectiles, namely, energy flows, pressure and temperature, the spontaneous magnetic field and the thermonuclear neutron yield, are discussed. Astrophysical phenomena due to the collision of cosmic objects, such as the creation of a crater by a meteorite impact, as well as the emergence of a shock wave at a star's surface and the formation of jets, ionization and emission waves, are considered.
Introduction
The study of hydrodynamic phenomena due to laser-accelerated projectile impact is of fundamental importance for at least two basic fields of physics. The first is astrophysics, in the section of phenomena associated with the collision of cosmic objects with relative velocities of several tens and hundreds of km/s. The second is inertial confinement fusion (ICF). High-temperature plasma may be produced in the collisions of projectiles with relative velocities from several hundred up to 1000 km/s. The latter has recently become of great interest in connection with fast ignition [1,2], a promising ICF approach. An impact by an accelerated projectile [3][4][5][6] is an effective method to ignite the preliminarily compressed ICF target, along with the action of fast electron or ion beams.
In the present paper, the limits of laser-driven acceleration of a projectile in a dense state are discussed, as well as the impact-produced energy cumulation. Laser and projectile parameters for impact fast ignition are derived. Among the astrophysical applications considered are the possibilities of laboratory modeling of crater creation and of shock wave emergence at a star's surface.
Laser-driven acceleration of dense matter up to thermonuclear velocity
To accelerate a projectile in a dense state, the low-entropy regime should be applied, in which the Iλ² parameter (the product of the laser radiation intensity and the squared wavelength) does not exceed 5×10^14 W cm⁻² µm². Such a regime corresponds to inverse bremsstrahlung absorption of laser light, when the ablation density is close to the critical plasma density. Under these conditions, the rocket model gives the solution, Eqs. (1), for the final velocity of a plane foil, u (measured in units of the thermonuclear velocity, 1000 km/s), and the thickness δ of the unablated foil (normalized to the initial foil thickness, Δ_0). In Eqs. (1) it is supposed that the blow-off plasma is fully ionized, the adiabatic exponent is equal to 5/3, and the intensity I, pulse duration τ, initial foil density ρ_0 and wavelength λ, together with the foil thickness Δ_0, are measured in units of 10^14 W/cm², ns, g/cm³ and µm, respectively. The laser pulse and projectile parameters should meet the requirements that (i) the expansion of the ablated projectile matter is close to a plane motion, τ < R_L/c_a, (ii) the foil density during the laser pulse exceeds the initial density, τ < Δ/c_s (c_a and c_s are the sound velocities in the blow-off plasma at the ablation boundary and in the unablated foil, respectively), and (iii) the influence of hydrodynamic instability on plane foil acceleration is insignificant, τ < Γ⁻¹ (Γ is the increment of perturbation growth). The sound velocities are connected by the relation c_s = [γ_s(γ+1)]^{1/2} (ρ_cr/ρ_0)^{1/2} c_a, with c_a = [2(γ−1)/(γ+1)]^{1/3} (I/ρ_cr)^{1/3}, where ρ_cr and ρ_0 are the critical density and initial foil density, and γ_s and γ are the adiabatic exponents in the unablated foil and the blow-off plasma. The Rayleigh-Taylor instability increment is given by the well-known formula [7]: Γ = Γ_0 − βk_m D, where Γ_0 = [A k_m g/(1 + k_m L_a)]^{1/2}; k_m is the unstable wavenumber, k_m = 2π/Δ for the most dangerous mode (with respect to foil destruction); g ≈ u/τ is the acceleration; L_a ≈ c_a τ is the density gradient length at the ablation front; the Atwood number A = (ρ − ρ_cr)/(ρ + ρ_cr) ≈ 1, since ρ_cr << ρ_0; the evaporation wave rate D = c_a ρ_cr/ρ_0; and β ≈ 3. So, the limitation requirement for the laser pulse duration is given by Eq. (2). For the ratio δ = 5, which corresponds to the maximal hydrodynamic efficiency of plane projectile acceleration, the ultimate velocity achievable at low-entropy acceleration actually conforms to the thermonuclear velocity and lies in the range of (1.1-1.5)×10^3 km/s (Iλ² = (1-5)×10^14 W cm⁻² µm²). Such a velocity corresponds to an impact-produced temperature of deuterium-tritium fuel of 10-20 keV. To reach a polystyrene foil velocity of 1000 km/s, at a final density equal to the initial value, as a result of acceleration by a 10^3 J pulse of third-harmonic Nd-laser radiation, the laser pulse duration, beam radius and foil thickness should be chosen as 1.2 ns, 380 µm and 27 µm.
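The explicit forms of Eqs. (1) and (2) did not survive text extraction. As an illustrative sketch only, not the paper's exact expressions, the standard rocket-model relations, assuming a constant exhaust velocity c_a and a mass ablation rate ρ_cr c_a per unit area (consistent with the evaporation wave rate D defined above), read:

```latex
% Illustrative standard rocket-model relations (not the paper's exact Eqs. (1)):
\begin{align}
  u &= c_a \ln\!\left(\frac{1}{\delta}\right), &
  \delta &= \frac{\Delta}{\Delta_0}
          = 1 - \frac{\rho_{cr}\, c_a\, \tau}{\rho_0\, \Delta_0},
\end{align}
```

which follow from momentum conservation for a foil of areal mass ρ_0Δ_0 losing mass at the constant rate ρ_cr c_a.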
Impact-driven extreme state of matter
Inelastic collision of a projectile accelerated up to supersonic velocity with a massive wall initiates strong shock waves propagating from the contact boundary into both objects. The wave velocities in the projectile, D_p, and in the wall, D_t, the pressure behind their fronts, P_i, the duration of the impact, t_i = Δ/D_p, which is the time of shock wave propagation through the projectile, and the impact efficiency, equal to the fraction of projectile energy transferred to the wall, are given by the solution of Eqs. (3) [4]. Here u_p is the projectile velocity; ρ_p, γ_p and ρ_t, γ_t are the densities and adiabatic exponents of the projectile and wall, respectively. So, the collision of a projectile accelerated up to thermonuclear velocity with a massive wall leads to the creation of pressures close to several, and even several tens of, Gbar. Due to the energy concentration in dense matter, such a pressure exceeds the ablation pressure accelerating the projectile by a factor of several tens. Thus, the equation of state at Gbar pressures and hydrodynamic phenomena at energy flows of 10^17-10^19 W/cm² (corresponding to fast-ignition requirements) may be studied in laboratory experiments under convenient (open) plane geometry. In Gekko/HIPER impact experiments [8], DD-neutron yields of (1-5)×10^5 due to the collision of CD foils of thickness Δ = 15-20 µm with a massive wall were measured. Foil velocities of u_p ≈ 600-700 km/s were also registered. For these experiments, the density and temperature of the impact-produced plasma of the foil may be estimated as ρ_p ≈ 0.1 g/cm³ and T_p = m_n u²/3k_B ≈ 0.75 keV (here m_n is the nucleon mass and k_B is the Boltzmann constant). The impact time t_i is close to 0.03 ns. So, the impact-produced neutron yield, N ∝ πR_L(ρ_pΔ)²⟨σv⟩_DD|_{T=0.75 keV}/4u_p, is estimated as 10^6 neutrons/shot. The agreement with the experimental results confirms the dynamics of inelastic impact given by Eqs. (3). At a projectile velocity u_p ≈ 1000 km/s, the DD-neutron yield could rise up to 10^8-10^9 neutrons/shot. The impact-generated spontaneous magnetic field is an interesting phenomenon. The magnitude of the magnetic field due to the excitation of thermo-currents is estimated, in MGs, on the basis of relation (4). Here c and e are the speed of light and the electron charge; the electron temperature T_e is taken equal to the ion temperature, since under the considered conditions the energy relaxation time τ_ei ≈ 10⁻¹² s is less than the duration of the impact, t_i ≈ 10⁻¹¹ s; n_e is the electron density; L_T and L_n are the size scales which define the plasma temperature and density gradients, L_T ~ L_n ~ Δ; A and Z are the atomic number and charge of the plasma ions; u_p and Δ are measured in units of 1000 km/s and µm. So, a magnetic field of several hundred MGs may be generated at the impact of a heavy-metal projectile accelerated up to thermonuclear velocity.
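The explicit forms of Eqs. (3) and (4) were also lost in extraction. As an illustrative sketch only, not the paper's exact expressions, the standard strong-shock impedance-matching relations for an inelastic plane collision, assuming ideal-gas Hugoniots with the adiabatic exponents defined above, take the form:

```latex
% Illustrative strong-shock impedance matching (not the paper's exact Eqs. (3)):
\begin{align}
  \frac{\gamma_p+1}{2}\,\rho_p\,(u_p-u_c)^2
    &= \frac{\gamma_t+1}{2}\,\rho_t\,u_c^2 = P_i, \\
  D_t = \frac{\gamma_t+1}{2}\,u_c, \qquad
  D_p &= \frac{\gamma_p+1}{2}\,(u_p-u_c), \qquad
  t_i = \frac{\Delta}{D_p},
\end{align}
```

where u_c is the velocity of the contact boundary, fixed by pressure continuity across the interface.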
Impact fast ignition
The parameters of an igniting projectile can be obtained from the analysis of the impact-produced shock-wave dynamics given by Eqs. (3). The first requirement is the energy equation, namely that during the impact the shock wave heats the wall layer up to the ignition temperature, T_ig = 10 keV; the second is the equality of the shock-heated wall mass to the (ρΔ)_ig parameter, which is equal to 0.3 g/cm². These two conditions constitute Eqs. (5). Here c_v is the specific heat of the deuterium-tritium plasma. Substitution of (3) into (5) gives the velocity and mass of the projectile that ignites deuterium-tritium fuel of density ρ_t (ρ_t >> ρ_p, γ_t = γ_p = 5/3). For a DT-fuel density ρ_t = 200 g/cm³: u_ig ≈ 30000 km/s, Δ_ig ≈ 450 µm, η_imp ≈ 0.06 at ρ_p = 0.2 g/cm³ (DT ice); u_ig ≈ 3000 km/s, Δ_ig ≈ 60 µm, η_imp ≈ 0.31 at ρ_p = 10 g/cm³ (Au). In the case of a heavy-material projectile, radiative processes should be studied in order to take the corresponding energy losses into account.
To provide an acceptable impact efficiency, 0.3-0.4, the projectile density must be close to 10-20 g/cm³. There are several approaches to reaching this goal. As a "freely accelerated projectile" one may consider [4]: 1) a heavy-material projectile, or a composite projectile consisting of a heavy-material impactor and a light-material ablator; 2) a projectile compressed in flight and accelerated by a time-profiled pulse. The main research directions here are the two-dimensional effects of laser-driven acceleration, impact-initiated emission, and the effectiveness of compression in flight. Another approach is a "cone-guided projectile" [5,6]. The main open problems are the effectiveness of cone-guided compression and the energy losses in the cone wall. Cone-guided acceleration by a profiled laser pulse looks quite attractive.
Astrophysical phenomena modeling
The physics of collisions of cosmic objects may be studied in experiments on laser-driven projectile impact. This is illustrated by the results of recent experiments [9] on the collision of a laser-accelerated projectile with a massive target, performed on the PALS/Asterix iodine laser (energy, 100-500 J; pulse duration, 0.4 ns; beam radius, 100-1200 µm). Features of crater creation were discovered, such as the dependence of the crater's longitudinal and transverse sizes, as well as of the energy transmitted from the projectile to the wall, on the projectile size, density, composition (aluminum, copper, plastic, lead and others), shape (disk and cone) and velocity. The projectile velocity varied in the range of 50-150
Benchmark of a multi-physics Monte Carlo simulation of an ion guide for neutron-induced fission products
To enhance the production of medium-heavy, neutron-rich nuclei, and to facilitate measurements of independent yields of neutron-induced fission, a proton-to-neutron converter and a dedicated ion guide for neutron-induced fission have been developed for the IGISOL facility at the University of Jyväskylä. The ion guide holds the fissionable targets, and the fission products emerging from the targets are collected in helium gas and transported to the downstream experimental stations. A computer model, based on a combination of MCNPX for modeling the neutron production, the fission code GEF, and GEANT4 for the transport of the fission products, was developed. The model will be used to improve the setup with respect to the production and collection of fission products. In this paper we benchmark the model by comparing simulations to a measurement in which fission products were implanted in foils located at different positions in the ion guide. In addition, the products from neutron activation in the titanium foil and the uranium targets are studied. The result suggests that the neutron flux at the high-energy part of the neutron spectrum is overestimated by approximately 40%. However, the transportation of fission products in the uranium targets agrees with the experiment within 10%. Furthermore, the simulated transportation of fission products in the helium gas achieves almost perfect agreement with the measurement. Hence, we conclude that the model, after correction for the neutron flux, is well suited for optimization studies of future ion guide designs.
Introduction
At the University of Jyväskylä, the Ion Guide Isotope Separator On-Line (IGISOL) technique is used to produce radioactive ion beams of short-lived exotic nuclei for fundamental nuclear physics research [1]. The main production mechanism for neutron-rich medium-heavy nuclei is proton- or deuteron-induced fission in actinide targets, such as uranium and thorium. However, it is anticipated that neutron-induced fission would provide access to even more exotic isotopes [2].
The IGISOL technique is also used to study the fission process itself through measurements of independent and isomeric fission yields (FY) [3][4][5]. Accurate fission yield data as a function of target material and excitation energy, as well as inducing particle type, are needed to achieve a better understanding of the fission process. Furthermore, yield distributions from neutron-induced fission at incident energies above thermal neutron energies are required in the design and development of Generation IV fast reactors.
In order to access neutron-rich nuclides further from stability, and to facilitate measurements of fission yields in neutron-induced fission, a proton-to-neutron converter (pn-converter) [6] and a dedicated ion guide [7] have been developed and tested at the IGISOL facility. However, the first test suggested that the production rate of exotic nuclei is lower than anticipated, hampering its usefulness in the intended studies. The production rate depends on several parameters, including the neutron flux from the pn-converter, the size and position of the actinide (uranium) targets, the volume and shape of the ion guide, and the helium gas pressure. To optimize the design with respect to the production of exotic nuclei, a simulation model based on GEANT4, GEF and MCNPX has been developed [8].
In this paper, we benchmark the simulation model against a measurement in which foils were placed at different positions in the ion guide [9]. Through γ-spectroscopy of these foils, the fission products stopped in the foils, as well as neutron activation products, were detected. The results of the measurements are compared to those expected from the model.
Simulation model
The simulation model combines MCNPX 2.5.0 [10], GEF 2020/1.1 [11] and GEANT4 10.2.3 [12]. MCNPX is used to simulate the production of neutrons from the beryllium pn-converter, while the yields and kinetic energies of the fission products (FP) from neutron-induced fission of 238U are obtained from GEF. The main part of the model is based on GEANT4, which is responsible for the transport of the fission products in the ion guide. The model and relevant details are described in previous publications [8,13]. The results presented in this paper have been obtained using GEANT4 version 10.2.3, where the transportation of the fission products was governed by the classes G4ionIonisation, G4hMultipleScattering (MS) and G4NuclearStopping [12]. Figure 1 shows the geometry of the GEANT4 model, which is based on the physical ion guide. The generation and transportation of nuclei (post neutron-emission fission products) in the model undergoes the following steps on an event-by-event basis:

1. A starting position is sampled uniformly in the uranium targets.
2. A neutron energy is sampled continuously and uniformly from 0 to 30 MeV (the full energy of the proton beam).
3. The mass (A), charge (Z) and kinetic energy (E_kin) of the nucleus are sampled from the GEF output for the corresponding neutron energy.
4. The nucleus is emitted isotropically, assuming negligible angular anisotropy and zero neutron momentum transfer, and then transported in the ion guide until fully stopped.

Fig. 1: Image of the geometry of the ion guide in the GEANT4 simulation. Purple: helium gas. Green: uranium targets. Blue: titanium foil. Orange and red: aluminium frames. Dark gray: aluminium supports. The outer walls are not shown to increase visibility.
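The event loop (steps 1-4 above) can be summarized schematically as below. The GEF sampling and the GEANT4 transport are replaced by placeholder stubs, so every function name and number here is illustrative, not the actual code of the model.

```python
import random

# Schematic of the event-by-event loop (steps 1-4 above). GEF sampling
# and GEANT4 transport are replaced by placeholder stubs, so every name
# and number here is illustrative, not the actual code of the model.

N_EVENTS = 1000          # the paper uses 1e8; kept small for the sketch
E_MAX_MEV = 30.0         # full energy of the proton beam

def sample_position_in_targets():
    """Step 1: uniform starting position inside a uranium target (stub, mm)."""
    return (random.uniform(-4.0, 4.0), random.uniform(-24.0, 24.0), 0.0)

def sample_fragment_from_gef(e_n):
    """Step 3: draw (A, Z, E_kin) from the GEF output for this neutron energy (stub)."""
    return (95, 39, random.uniform(40.0, 100.0))   # e.g. a 95Y fragment, MeV

def transport_in_ion_guide(position, fragment):
    """Step 4: isotropic emission and transport until fully stopped (stub)."""
    return "helium" if random.random() < 0.01 else "uranium"

stops = {}
for _ in range(N_EVENTS):
    r = sample_position_in_targets()           # step 1
    e_n = random.uniform(0.0, E_MAX_MEV)       # step 2: uniform in [0, 30] MeV
    frag = sample_fragment_from_gef(e_n)       # step 3
    where = transport_in_ion_guide(r, frag)    # step 4
    stops[where] = stops.get(where, 0) + 1
print(stops)
```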
In one simulation, 10⁸ events (post neutron-emission fission products) are generated to ensure that the statistical uncertainty is negligible compared to systematic uncertainties, while keeping the computation time reasonable. To account for the distribution of the neutron flux in energy and space, the production rate of FPs is calculated by

R = (2/N) Σ_i σ(E_n,i) φ_p(E_n,i, r_i) I_p N_U / P(E_n,i),   (1)

where σ(E_n) is the fission cross section at the neutron energy E_n, as obtained from the ENDF/B-VIII.0 evaluation [14], and N_U is the number of uranium atoms in the targets. φ_p(E_n, r) is the neutron fluence per proton at position r and neutron energy E_n, obtained from the MCNPX simulation, and I_p is the proton intensity (protons per second) impinging on the pn-converter. N is the number of simulated events (10⁸) and P(E_n) is the probability of obtaining the neutron energy E_n in the sampling process (step 2). This energy is sampled uniformly from 30 energy bins between 0 and 30 MeV, and hence P(E_n) = 1/30. The factor 2 arises from the fact that two fission products are generated per fission event and ensures that the total production rate of FPs is twice the fission rate. Each term in Eq. (1), referring to a sampled position r and neutron energy E_n, is the multiplication factor used to calculate the yield of the corresponding FP event. Thus, by conditioning the production rate on a specific nucleus, P(Z, A), the (absolute) independent fission yield (per second) of that nucleus is obtained. Furthermore, by also requiring the stopping position to be inside any of the foils, the implantation rate of the chosen nucleus in that particular foil is obtained.
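A minimal sketch of how this per-event weighting can be implemented is given below; sigma_fission and neutron_fluence stand in for the ENDF/B-VIII.0 cross section and the MCNPX fluence map, and all numerical inputs are illustrative placeholders, not values from the paper.

```python
# Sketch of the per-event weighting of Eq. (1). sigma_fission and
# neutron_fluence stand in for the ENDF/B-VIII.0 cross section and the
# MCNPX fluence map; every numerical input below is an illustrative
# placeholder, not a value from the paper.

N = 10 ** 8          # number of simulated events
N_U = 1.0e20         # uranium atoms in the targets (placeholder)
I_p = 6.0e13         # protons per second on the converter (placeholder)
P_EN = 1.0 / 30.0    # uniform sampling over 30 neutron-energy bins

def event_weight(e_n, r, sigma_fission, neutron_fluence):
    """Multiplication factor for one FP event sampled at (e_n, r);
    the factor 2 accounts for the two fragments per fission."""
    return 2.0 * sigma_fission(e_n) * neutron_fluence(e_n, r) * I_p * N_U / (N * P_EN)

# Toy inputs: flat cross section (cm^2) and flat fluence per proton.
print(event_weight(5.0, (0.0, 0.0, 0.0), lambda e: 1.5e-24, lambda e, r: 1.0e-5))
```

Summing these weights over the events conditioned on a given (Z, A) and stopping foil yields the implantation rate described in the text.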
After the generation of the fission products, the stopping power of uranium and helium governs the collection. Because of the distribution of initial energy, position and direction of the FPs, some will lose enough energy to stop in the uranium targets while others will enter the helium gas or the aluminium backings. To investigate the energy dependence of the stopping range of uranium for different models, the stopping ranges of 95 Zr at energies from 40 to 100 MeV were studied in natural uranium using TRIM [15] and different versions of GEANT4. A similar comparison was made for the helium gas for which the selected range of energies were 0.5 to 6 MeV.
In the previous version of the model [8], the GEANT4 class G4MultipleScattering, as well as the energy loss model G4IonParametrisedLossModel within the G4ionIonisation class, were not used. To investigate the performance of these models they were included in the study of stopping ranges, and the results are presented in Fig. 2. As seen from Fig. 2, G4MultipleScattering leads to a reduction in range and an increase in the spread of ranges of the stopped 95Zr ions in both uranium and helium. Furthermore, when the G4IonParametrisedLossModel is also included, the stopping ranges of 95Zr in both uranium and helium increase and, in particular for the helium gas, get closer to the stopping ranges obtained with TRIM. The effect of the increased stopping ranges will be discussed together with the benchmark below.
Measurement
In the measurement of neutron-induced fission yields performed at the IGISOL facility [9], a proton beam with a nominal current of 10 µA at an energy of 30 MeV impinged on the 6 mm thick beryllium target of the pn-converter, and the produced neutron flux induced fission in two uranium targets. The resulting fission fragments were either stopped in the uranium targets, the aluminium backings, the helium gas or the titanium foil, or hit the walls of the gas cell. At the same time, the neutrons also induced activation in the titanium foil and the uranium targets. Figure 3 shows a schematic view of the setup implemented in the measurement. The two uranium targets, fixed to thick aluminium backings, were mounted in aluminium frames. The targets were made of natural uranium, each having a size of 10 mm × 50 mm and a thickness of 15 mg/cm². However, 1 mm on each side was used to fix the target in the frames, resulting in an active area of each target of 8 mm × 48 mm from which the FPs enter the helium gas. On the opposite side of the uranium targets, a titanium foil of size 24.5 mm × 50 mm was installed. From the flow rate and the upstream pressure, the helium gas pressure in the ion guide was estimated to be 400(80) mbar.
After the beam was turned off, the uranium targets, the aluminium backings and the titanium foil were taken out of the ion guide. After a few days of cooling they were placed, one at a time, in front of a lead-shielded HPGe detector for measurements of γ rays from fission and activation products.
As seen in Table 1, some of the foils were measured two or three times, and the time evolution of the extracted counts was used to confirm the identifications by comparing to the expected decay and build-up of the corresponding nuclei. The dead times of the measurements are also listed in Table 1 and are adopted in the calculations of the intensities of the γ rays.
The HPGe detector was energy calibrated with sources of 60Co, 133Ba, 137Cs and 241Am. In addition to the energy calibration, an efficiency calibration, including geometrical efficiency, was made with sources of 133Ba, 152Eu and 241Am. The uncertainties from the calibrations are propagated in the analysis of the γ-spectroscopy data.
Data analysis
To benchmark the computer model using the spectroscopy data, the counts of identified γ-ray transitions extracted from the measurements and the simulations were compared.
Identification of measured γ-ray transitions
An example of the γ-spectrum from one of the uranium targets is presented in Fig. 4, with inserts of parts of the spectra of the uranium target and the titanium foil. The energy and peak area for each γ-ray transition were extracted from such spectra of the different foils.
Fission products implanted in the foils
Some of the detected γ-ray transitions originate from the decays of fission products implanted in the foils. The implanted FPs can be identified based on the measured γ-ray transitions. However, to confirm the presence of a FP in any of the foils, all transitions that are expected to have a detectable intensity should be observed. In the initial analysis, 81 γ-ray transitions in 12 decay chains were identified. However, only transitions that are unique to a certain FP, and that could clearly be resolved from nearby peaks, were adopted in the benchmark. In addition, γ-ray intensities that could not be determined from at least two different foils are not useful for the subsequent analysis and hence were not utilised. Some details about the selection of the γ-ray transitions for the benchmark are presented in the examples below. All in all, 47 γ-ray transitions from 10 decay chains remained for the comparisons. These transitions and their assignments are listed in Table 2. The values in the columns R_U, R_Al and R_Ti represent the comparisons of simulations to measurements of the different foils, and will be described and discussed below.

Fig. 4: Examples of γ-spectra obtained from the uranium target and the titanium foil.

Table 2 notes: *The γ rays at 620.9 keV and 621.2 keV are not resolvable from each other in the spectra; their tabulated branching ratios are summed for one γ-ray transition. The isomeric yield ratios of 131Te and 132I are estimated from the GEF simulation and are adopted in the calculations.
As seen in the inserts of Fig. 4, the peak at 531 keV was not resolvable from nearby peaks in the spectra of the uranium target and the titanium foil, and hence it was excluded from the comparison. The same is true for the peak at 723.8 keV in the spectrum of the uranium target. Moreover, there are two sources for this peak, 95Zr and 131I. Thus, the peak at 723.8 keV could not be used for the benchmark, despite having a high intensity. On the other hand, although too weak to be observed in the spectrum of the titanium foil, the peaks at 543.3 keV and 547.2 keV could be resolved in the spectra of the uranium targets and the aluminium backings and hence could be used in the benchmark.
Neutron activation
Some of the observed γ-ray transitions in the titanium foil and the uranium targets originate from the decays of neutron activation products. From the intensities of these transitions the production rates of the activation products could be derived and used to benchmark the simulated neutron flux. As above, only unique and resolvable γ-ray transitions were used in the benchmark. These transitions, their branching ratios and the corresponding activation reactions are listed in Table 3.
Fission products
Using Eq. (1), with the condition that a specific FP stops in a specific foil, the corresponding implantation rate of the FP is obtained.
Starting from the implantation rates, and considering the decay relationships of the corresponding decay chain, the build-up of each nuclide in the foils can be calculated. In these calculations, a time step of 1 min is used. The implantation rate of any nuclide with a half-life shorter than one minute is added to the implantation rate of the daughter nuclide. The amount of a specific nuclide at time step m, N_m, is governed by

N_m = N_(m−1) − D_(m−1) + D_(pre,m−1) + Y,   (2)

where m is the index of the time step, D_(m−1) = λ × N_(m−1) is the number of decays during time step m−1, λ is the decay constant in units of per minute, and Y is the implantation rate in units of per minute. The subscript pre indicates the corresponding quantities of the decay precursor.
Using Eq. (2), the number of decays of a specific nuclide during the measurement periods listed in Table 1 can be calculated for each measured foil. The expected counts of each γ-ray transition (N_sim) are then calculated by multiplying by the tabulated branching ratios. These can then be compared with the observed counts, N_exp, of γ-ray transitions in the measurements. Figure 5 shows the A = 95 decay chain as an example. Since the precursors of 95Y have half-lives shorter than 1 min, their implantation rates are added to the implantation rate of 95Y. Hence, according to the simulations, the implantation rate of 95Y in the titanium foil is 3037.3 s⁻¹ during irradiation. The corresponding implantation rate of 95Zr is 7.4 s⁻¹, while 95Nb and 95Mo have zero yields according to the GEF simulations. The decay chain is considered confirmed because two γ-ray transitions from the decay of 95Zr and one from 95Nb (marked with arrows in Fig. 5) were identified in the γ-spectra. The red arrows represent the γ-ray transitions at 756.7 keV and 765.8 keV that were used in the benchmark, while, as mentioned above, the black arrow (the γ-ray transition at 724.2 keV) was excluded from the benchmark for having two sources, 95Zr and 131I.
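The build-up can be reproduced with a few lines of time-stepping; the sketch below applies the Eq. (2) recursion to the 95Y and 95Zr pair using the simulated Ti-foil implantation rates quoted above. The half-lives are the tabulated values (95Y: 10.3 min; 95Zr: about 64 d), while the irradiation length is an illustrative choice.

```python
import math

# Time-stepped build-up per Eq. (2) for the 95Y -> 95Zr pair, using the
# simulated Ti-foil implantation rates quoted above. Half-lives are the
# tabulated values (95Y: 10.3 min; 95Zr: ~64 d); the irradiation length
# is an illustrative choice.

LN2 = math.log(2.0)

def build_up(steps_min, y_parent, y_daughter, t12_parent, t12_daughter):
    lam_p, lam_d = LN2 / t12_parent, LN2 / t12_daughter   # per minute
    n_p = n_d = 0.0
    for _ in range(steps_min):
        d_p = lam_p * n_p              # parent decays in this step
        d_d = lam_d * n_d              # daughter decays in this step
        n_p += y_parent - d_p          # Eq. (2) for the parent
        n_d += y_daughter + d_p - d_d  # Eq. (2), fed by precursor decays
    return n_p, n_d

# Rates converted from per second to per minute; one day of irradiation.
n_y95, n_zr95 = build_up(60 * 24, 3037.3 * 60.0, 7.4 * 60.0,
                         10.3, 64.0 * 24.0 * 60.0)
print(f"95Y: {n_y95:.3e} nuclei, 95Zr: {n_zr95:.3e} nuclei")
```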
In Fig. 6, the simulated build-up of each isobar in the decay chain A = 95 is presented for the titanium foil. The vertical dotted line indicates when the beam was turned off. The dashed and shaded regions show the time periods of the γ-ray spectroscopy measurements of the different foils (see Table 1). The black curve represents the total amount of nuclei of the decay chain.
Calculations of neutron activation
Table 3 lists the neutron activation products of titanium and uranium, which were identified by the observed γ-ray transitions. The production rates of these neutron activation products in the titanium foil and the uranium targets were calculated based on the simulated neutron flux from MCNPX and the cross sections from ENDF/B-VIII.0 [14] and JEFF-3.3 [16]. In Fig. 7, the neutron flux spectra from the MCNPX simulation and the calculated production rates of the neutron activation products in the titanium foil and the uranium target are presented. The build-up of the activation products was constructed in the same way as in the example shown in Sect. 4.2.1. The number of decays of each activation product was calculated in the same way and then multiplied by the tabulated branching ratio of the transition. Hence, the expected counts of each transition (N_sim) were obtained.
Ratios of simulation to experiment
The performance of the simulation model is evaluated by forming, for each γ-ray transition, the ratio R = N_sim/N_exp, where N_sim is the calculated number of γ-rays from the simulation and N_exp is the number of counts extracted from the spectroscopy data.
In the estimation of the uncertainty of the calculated number of γ-rays from the simulation (δN_sim), only the uncertainties of the tabulated branching ratios are included. The uncertainties of the half-lives can be ignored because their impact is very small. Also, the effect of β-delayed neutron emission has been estimated and deemed insignificant. The uncertainties of the FY from GEF are not included at this stage, but should be considered in future work. Considering that 1×10⁸ fissions per neutron energy are calculated by GEF, and that each run of the GEANT4 simulation includes the same number of events, the statistical uncertainties from the simulations are negligible compared to uncertainties from other sources.
The measured number of counts is derived from the peak area, corrected by the corresponding efficiency. Thus, the uncertainty of the measured counts (δN_exp) has contributions from the uncertainties of the peak areas and of the efficiency calibration. For all γ-ray transitions that were measured more than once in a certain foil (see Table 1), the uncertainties of the peak areas extracted from the measurements were treated as statistical uncertainties and were propagated in the calculation of the weighted average ratio for that γ-ray, while the uncertainties from the efficiency calibration were added afterwards.
Hence, the uncertainties, δR, of the ratios listed in Table 2 include the uncertainties of the branching ratios, the peak areas, and the efficiency calibration.
In the subsequent analysis, average ratios from different foils (or decay chains) are compared in order to draw conclusions about the accuracy of the simulation model. These averages were formed using the estimated uncertainties as weights, w_i = 1/δR_i². In the estimation of the uncertainties of the average ratios, the uncertainties δR are assumed to be independent. The estimates of the uncertainties were derived in two different ways: using error propagation and using the weighted standard error of the mean. The index i refers to the different γ-ray transitions from a certain mass number or foil. Assuming large sample sizes and no unknown uncertainties, these two measures should be identical. In cases where the two estimates disagree, the larger of the two is adopted as the uncertainty of the average ratio.
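As an illustration, the weighted averaging described here is straightforward to implement. The sketch below computes the weighted mean, the inverse-variance error-propagation estimate, and one common form of the weighted standard error of the mean, adopting the larger of the two; the input values are invented for the example.

```python
import math

# Weighted average ratio with the two uncertainty estimates described
# above: inverse-variance error propagation, and one common form of the
# weighted standard error of the mean; the larger is adopted. The input
# ratios are invented for the example.

def average_ratio(ratios, d_ratios):
    w = [1.0 / d ** 2 for d in d_ratios]
    sw = sum(w)
    r_bar = sum(wi * ri for wi, ri in zip(w, ratios)) / sw
    d_prop = 1.0 / math.sqrt(sw)                  # error propagation
    n = len(ratios)
    d_sem = math.sqrt(sum(wi * (ri - r_bar) ** 2 for wi, ri in zip(w, ratios))
                      / ((n - 1) * sw))           # weighted SEM
    return r_bar, max(d_prop, d_sem)

print(average_ratio([1.45, 1.38, 1.50], [0.05, 0.07, 0.06]))
```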
Results and discussion
As mentioned above, the products of neutron activation and the fission products were identified in the foils. The ratios from the activation data can be used to benchmark the neutron production from the MCNPX simulation. On the other hand, comparisons of R-values based on implantation data from the different foils are used to benchmark the transportation of fission products in the GEANT4 simulation. For example, a comparison of the average ratio from the aluminium backings (R_Al) with that from the titanium foil (R_Ti) can be used to benchmark the ion transport in the helium gas, i.e. the stopping efficiency of the gas.
Neutron flux
The production rates of the observed activation products in the titanium foil and the uranium targets are dominated by high-energy neutrons (see Fig. 7). Hence, the ratios of simulations to measurements of the activation products can be used to benchmark the high-energy part of the neutron flux obtained from the MCNPX simulation. Figure 8 shows the ratios from the observed activation products together with the weighted average ratio of 1.42(4). It is noticeable that the ratio at 175.4 keV deviates significantly from those from the same nuclide (48Sc) at higher energies. Although this deviation is not fully understood, disregarding the 48Sc results from the benchmark does not significantly change the average ratio.
The fact that the activation ratio is significantly larger than one implies that the flux of high-energy neutrons in the current simulation is overestimated. This conclusion is supported by similar overestimations of fission products stopped in the foils (see Sect. 5.2).
The reason for the overestimation of the neutron flux is not known but the relatively good agreement between the activation of the uranium targets and the titanium foil (see Fig. 8) indicates that the geometry of the simulation model is in reasonable agreement with the experimental setup. Possible explanations for the overestimation include the estimation of the integrated beam current on the target, and the modeling of the neutron production in the MCNPX simulation.
In the measurement, the proton beam current could not be monitored online. Instead, it was measured from time to time by blocking the beam with a non-penetrable Faraday cup in front of the reaction chamber. In addition, neither the beam size nor the beam position on target could be measured. Hence, the integrated beam current on target, used in the calculation of the γ-intensities from the simulation, had to be inferred from the readings of the Faraday cup. In future versions of the neutron converter, online monitoring of the proton beam current on the beryllium target is anticipated. The 30 MeV proton beam is fully stopped in the 6 mm thick beryllium target. This is taken into account in the MCNPX simulation of the neutron production, where the interaction of the proton beam with the beryllium target is governed by the endf70prot library [31]. However, the resulting neutron flux in the ion guide will depend on the double-differential neutron production cross sections of multiple neutron-producing channels (Be(p,xn)) at proton energies ranging from 0 to 30 MeV. To what degree this can be accurately modeled by MCNPX is beyond the scope of this study. However, it is not unreasonable to assume that at least part of the 40% deviation between simulated and experimental results could be explained by model defects.
It is also worth mentioning that 239Np, a daughter of the neutron activation product 239U, was observed in the spectrum from the uranium targets. The reaction 238U(n,γ) has a much higher cross section at low energies than at high energies [14]. Thus, the ratios derived from 239Np can be seen as an indicator of the fluence of low-energy neutrons. The R-values of the γ-ray transitions at 106.1 keV and 277.6 keV, belonging to the nucleus 239Np, are 0.090(2) and 0.094(2), respectively. The weighted average of these, 0.092(2), is about 10 times smaller than one, suggesting that the neutron fluence at low energies, as obtained from the current MCNPX simulation, is underestimated.
The neutron field produced by the neutron converter is a white spectrum ranging from thermal energies to 30 MeV. However, in the simulation model the response from the surroundings has been ignored, which could explain the underestimation of the low energy flux. As a result, more than 98% of the fissions in the simulation originate from 238 U(n,f). Considering that the fission rate of 235 U(n,f) is dominated by low-energy neutrons it is likely that the fission rate of 235 U(n,f) is also heavily underestimated. A more accurate model of the neutron flux would include the response of the surrounding environment, which might add a non-negligible contribution from thermal fission of 235 U.
Transportation and stopping of fission products
The R-values for each observed γ-ray transition belonging to FPs in the foils are listed in Table 2. From these values, the weighted average ratios of γ-ray transitions from the same mass chain are calculated per foil and shown in Fig. 9. The grand average ratios with respect to the foils are also calculated: R_U = 1.47(3) for the uranium targets, R_Al = 1.33(2) for the aluminium backings, and R_Ti = 1.31(3) for the titanium foil. These ratios all indicate that the production of FPs in the simulation is overestimated, which is consistent with the overestimation of neutron activation discussed above. Keeping this overall overestimation in mind, the differences between the average ratios of the different foils are discussed below.
A weighted two-sample t-test shows that the ratio from the uranium targets is larger than that from the aluminium backings at a 99% significance level. The difference means that more FPs remain in the uranium targets and fewer reach the aluminium backings in the simulation compared to the experimental result. According to the simulation, 70.9% of the generated fission fragments stop in the uranium targets and 14.7% in the aluminium backings (see Table 4). From this, the size of the overestimation of the stopping in the uranium can be estimated to be 3%. Likewise, an underestimation of the stopping in the aluminium of about 7% is obtained. Many factors, including the shape, uniformity and thickness of the targets, the angular anisotropy of the FPs, the neutron momentum transfer, and the stopping power of uranium, could contribute to this discrepancy. Depending on their starting positions in the target, the fission products that leave the target will have energies ranging from zero up to the full energy of the initial fragments (of the order of 100 MeV). After entering the helium gas, the slowing down of the FPs continues. If the remaining kinetic energy of a FP is completely absorbed by the helium gas, the FP will be stopped in the gas and transported out of the ion guide by the gas flow. If the FP passes through the helium gas, it will either hit the walls of the ion guide or stop in the titanium foil. Table 4 lists the fractions of simulated FPs stopped in different parts of the ion guide for different versions of GEANT4.
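One way to arrive at numbers of the size quoted above is to take the activation-based average (1.42) as the overall normalization and attribute the residual deviation of R_U and R_Al to the simulated stopped fractions; this reading of the estimate is an assumption, sketched below.

```python
# One way to arrive at numbers of the size quoted above: take the
# activation-based average (1.42) as the overall normalization and
# attribute the residual deviation of R_U and R_Al to the simulated
# stopped fractions. This reading of the estimate is an assumption.

f_stopped = {"uranium": 0.709, "aluminium": 0.147}   # simulated fractions
R = {"uranium": 1.47, "aluminium": 1.33}             # grand average ratios
R_NORM = 1.42                                        # activation average

for foil, r in R.items():
    rel = r / R_NORM - 1.0    # residual over(+)/under(-) estimation
    print(f"{foil}: residual deviation {100 * rel:+.1f}% "
          f"(stops {100 * f_stopped[foil]:.1f}% of FPs)")
```

With these inputs the residuals come out near +3.5% for the uranium and about -6% for the aluminium, comparable to the 3% and 7% quoted in the text.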
There is no material between the uranium targets and the aluminium backings, while there is the helium gas between the targets and the titanium foil. Thus, a comparison between the ratios of fission products stopped in the aluminium backings and the titanium foil can be used to benchmark the stopping performance of the helium gas. The ratios from the aluminium backings and the titanium foil agree within their respective uncertainties, and a t-test shows the difference to be insignificant. Hence, we conclude that the stopping efficiency of the helium gas obtained from the simulation agrees with the experimental results.
The model of the ion guide presented in this paper was developed with version 10.2.3 of GEANT4, using the class G4MultipleScattering (MS) and the energy loss model G4IonParametrisedLossModel (PH). To investigate the impact of recent updates of the program, the simulations were repeated using different versions of GEANT4, and the results are presented in Table 4. This shows that version 10.2.3 yields fewer FPs stopped in the uranium and more in the aluminium compared to the newer versions. Version 10.2.3 also gives fewer FPs stopped in the helium gas compared to the other versions. Both of these results are consistent with the calculated stopping ranges of 95Zr obtained with the different versions of GEANT4 (see Fig. 2).
According to the R-values of the foils shown in Table 4, GEANT4 10.2.3 agrees best with the measurements. In Table 4, the values from the simulation with GEANT4 10.2.3 are closest to the average ratio from the neutron activation data of 1.42. Furthermore, the deviation between R_U and R_Al for the simulation with version 10.2.3 is smaller than that for the simulations with the other versions of GEANT4. Also, R_Al agrees with R_Ti within uncertainty for the simulation with GEANT4 10.2.3, but not for the others. Hence, we conclude that GEANT4 10.2.3 results in the best agreement between simulation and experiment.
The implementation of the class G4MultipleScattering (MS) and the energy loss model G4IonParametrisedLossModel (PH) has an impact on the R-values consistent with the change of stopping ranges shown in Fig. 2. Omitting them from the simulation results in much worse agreement between simulation and experiment. Thus, we conclude that both of these play an important role in the current simulation model and should be adopted in future models.
According to the simulation by GEANT4 10.2.3, less than 1% of the produced fission products are stopped in the helium gas.
Conclusions and outlook
In this work we have used three independent Monte Carlo codes, GEF, GEANT4 and MCNPX, with the aim of modelling the generation and collection of fission products in the IGISOL neutron-induced fission ion guide. The MCNPX calculation of the production of neutrons in the proton-to-neutron converter depends on the Be(p,xn) double-differential cross section. Due to the energy absorption of the protons in the beryllium target [6], this cross section has to be known as a function of proton energy. The neutron production depends heavily on the proton beam current and, to a lesser extent, on the size of the beam spot. With this in mind, the observed overestimation of the neutron flux by approximately 40% does not seem unreasonable.
In the second step of the simulation, GEF is used to generate fission products from the 238U(n,f) reaction at neutron energies from 0 to 30 MeV. However, the reliability of GEF depends on the availability of experimental data and, especially at the higher part of the neutron energy spectrum and for fission products with low yields, data are scarce. This is a possible explanation for what seems to be a significant and systematic decrease of the ratios for mass chains 127 and 131 in Fig. 9.
For this study the performance of different versions of GEANT4 (10.2.3, 10.3.3, 10.6.2, and 10.7.1) were investigated. Although similar results were obtained in all cases, the version which best reproduces the data is 10.2.3, resulting in an agreement between the simulations and the experiment well within 10%.
Considering the many steps involved, from the incoming proton beam until a fission product is stopped, and the fact that three different Monte Carlo codes have been used, the predictive power of the model is very good and sufficient for the purpose of modelling the ion guide.
The next step will be to use the model to optimize the design of a new ion guide to increase the amount of extracted fission products. In the measurement of neutron-induced fission yields [9], the amount of fission products extracted from the ion guide was too low to perform the intended measurement. To improve this, the geometry could be modified to allow an even shorter distance between the neutron production target and the fission target. The design could also be optimized to increase the collection efficiency of the helium gas. Due to vacuum requirements downstream, a higher helium pressure is not a feasible way to achieve this, and instead the volume of the ion guide has to be increased. This in turn will probably require an RF structure that provides an electric field guidance system, similar to the CARIBU gas catcher [32], to achieve a sufficiently efficient collection and extraction of the fission products from the guide. Such an RF structure will be adjusted to the size of the IGISOL ion guide and tested with a californium source before it is put to use in experiments.
Unregulated Social Work Practice in Botswana: A Risk to Professional Integrity and Clients’ Welfare
Through professional regulation, the aspirational goals of the Code of Ethics become a legal obligation with enforceable accountability for public protection. Professional regulation does not only protect the public but also confers integrity and respect on the profession. Social workers are licensed, and they know that malpractice may lead to the loss of their registration and licenses. The above is a reality in many countries, while in Botswana, 74 years after the first social welfare officer started to work in the welfare department, the country has not yet established any regulatory body. Even though regulation might not be a guarantee of ethical practice, it is better to have a framework that can be used to regulate practice in terms of monitoring and evaluation. The education and practice of social work are unregulated, and this has left a vacuum in which anybody whom the government deems to be eligible can be employed as a social worker. This statement is buttressed by the Children's Act, 2009, section 2, which says, 'a social worker is a person who holds a qualification in social work, or such other qualification as may be prescribed, and is employed as such by government or such other institution as may be approved under this Act and any other law'. This has brought challenges in dealing with the values, principles and standards of social work teaching and practice in Botswana [1]. The complaint is that social workers are unprofessional and do not adhere to their own code of conduct. From the complaints, the assumption is that social work, as a helping profession, has a code of conduct. The reality is that there is no code of conduct and there is no licensure in Botswana. This paper is a narrative of examples of cases where social workers were deemed not to be adhering to their professional ethics and not providing service to the most vulnerable at the time of need. These stories are used as a yardstick to argue for the establishment of an Act of Parliament which will establish a council of social work. The council will be a regulatory body for social work.
INTRODUCTION
The history of the social work profession in the world dates back to the 1800s in Buffalo, New York, in the United States of America. A serious turn in the profession came in 1915 at an educational conference where Abraham Flexner challenged the professional status of the field of social work [2]. Syers [2] summarises Flexner's conclusions in declaring that social work did not qualify to be classified as a profession. The above-mentioned author indicates that Flexner was convinced that social work lacked individual responsibility, educationally communicable techniques, a regulatory body, a body of knowledge and a national association. This led social workers of the time to introspect, review the programmes of the profession and come up with the frameworks that Flexner felt were not visible in the profession. The rise of books like Social Diagnosis by Mary Richmond, together with the work of Jane Addams, around 1920 was among the visible attempts to professionalize social work practice, which was born out of charity work in Buffalo [3].
Social work in Botswana is a young profession, dating back to around 1946 [4]. The profession started at the Ministry of Education, and training began around 1974 at the Botswana Agricultural College, which offered a certificate in social welfare and community development. The real growth of social work came with the establishment of a department of social work at the University of Botswana in 1985 [4]. The recognition of social work practice in Botswana has continually increased, with many policies and programs aimed at improving the quality of life of communities in Botswana; yet the quality of service provided to social work clients remains questionable, given the public criticisms, complaints and reports from clients [4]. Social work is a sensitive profession that deals with individuals, groups and communities that may be vulnerable due to different circumstances.
There are people living with disabilities, patients at hospitals with different conditions, broken families, and abused children and women. Furthermore, there are impoverished individuals and families, children at risk, and orphans and vulnerable children striving to attain equilibrium and restore social functioning. It is against this backdrop that unregulated practice may further put clients at risk of harm by practitioners. The sensitivity of the profession cannot be overstated, since it deals with human life, which requires caution to protect the inherent dignity and worth of the person, in accordance with the values and principles of social work enshrined in the code of ethics of the International Federation of Social Workers [5]. Therefore, there is a need for an urgent response to the plight of social work clients in Botswana, and for the establishment of a body which regulates practice, which currently does not exist.

EXAMPLES OF SOCIAL WORK MALPRACTICE IN BOTSWANA: "THERE IS NO SMOKE WITHOUT FIRE"

There is a general notion that "there is no smoke without fire", meaning that even behind rumours, chit-chat, stories and gossip there must be some underlying truth. Given the statements related to social workers that follow, there must be a reality of client suffering during service provision by social workers. Some of the experiences of clients with social workers, as collected from newspapers, meetings held by the President of the country and ministers, and shared in staff meetings, are as follows:
a. Violation of social work principles and values in an adoption case
There have been a number of controversial adoption cases in Botswana over the years. One such case is MAHGB-000291-14, where the father of the child was not consulted for his consent before the child was adopted. The father of the child challenged the constitutionality of Section 4 (2) (d) (i) of the Adoption of Children Act Cap 28:01, in so far as it does not require his consent for the adoption of his child, merely because the child was not born in wedlock. Furthermore, there are more cases of this nature in which the social worker was supposed to do an assessment and ended up not fully doing so. The two cases noted here are just examples of what is happening in the adoption field. Another adoption case in Botswana is the following: in February 2018, a Francistown High Court judge ordered that a 15-month-old child be returned to the biological mother following a controversial adoption that had been facilitated by a social worker (The Voice Newspaper, 16 February 2018). The judge ruled that the adoption process of the little girl was tainted with flaws and had been carried out in a haphazard and hasty manner, without considering the trauma and roller coaster of emotions the mother was going through after she was raped. This client was brutally raped on her way back from shopping at one of the local malls in March 2016. Between her home and the shopping centre lay a bush; she walked deep into the bush and a man sprang from the thicket, pounced on her and gagged her mouth shut to stop her from screaming for help. With a knife pressed hard against her neck, the stranger dragged her deeper into the bushes, away from the footpath, and raped her without a condom. The survivor stated, 'Dishevelled and ashamed, I walked home and told no one of the ordeal. I didn't report the rape to the police. I felt dirty and all I wanted was to clean myself.' She continued, 'I filled up the bathtub with water and soaked in the water, praying that I had not contracted any disease or fallen pregnant.' The intrusion of the rapist had, however, implanted the seeds of a permanent reminder of her rape. She had conceived and was pregnant. When she discovered she was pregnant, the first thought that occurred to her was to have an abortion. Four months pregnant, the woman approached Somerset Extension clinic to request a termination of the pregnancy. She was, however, turned down and referred to a social worker for help. Abortion was illegal in Botswana until 1991, when amendments were made under the Penal Code (Amendment) Bill. A pregnancy could be legally terminated within 16 weeks of conception under the following conditions: when the woman was raped, when the pregnancy was due to incest, to save the life of the mother, or in instances of impairment of the child [6]. In this case, the survivor of rape had not reported the case, so termination could not be carried out, and hence she was turned down at the clinic. It was during her consultation with the social worker that she was advised and asked to consider giving up the baby for adoption, an option she agreed to. In the subsequent counselling sessions, the expectant woman renounced her rights over the unborn child and applied to the Magistrate for the renunciation of those rights.
She gave birth to a baby girl, and a twist and change of heart gripped her when she heard the first cry of the little one. The cry lingered, echoed and triggered motherly love and affection for the little girl. Even though she was not allowed to take the child due to the legal obligation, she wanted her baby back, but she was reminded that she had given up her parental rights. Eight days later, she was discharged from hospital empty-handed. Meanwhile, when her parents heard the news of the birth of their grandchild, they travelled from their home village, but their expectations of seeing their daughter and the baby were crushed, as they were met with the sight of their daughter grieving the loss of the child. It was under this cloud of dismay that the family sought an explanation from the social services offices and demanded the baby back, only to be told that she had been adopted and that the process was irreversible. The family sought legal recourse and the matter landed in court.
Therefore, in the final deliberation of the matter, the judge, with the help of social work experts, spotted loopholes in the counselling sessions the woman had been taken through, indicating that there were no records in the social welfare officer's notes to show that the expectant woman had gone through sessions of trauma counselling. 'Trauma counselling was critical, firstly, to rid her of feelings of self-hatred and hatred for the product of rape. Therefore, when she made the decision to relinquish her parental rights, she was not in a position to think rationally,' continued the judge. 'Her status as a victim and the turmoil she was going through were not addressed. This was unfortunate.' The judge went on to state that the victim's consent to forfeit her rights as a parent and to give the child away had not been properly obtained.
He noted that an evaluation after birth should have been done to assess how the victim related to the baby before a final decision was made. Although the 15-month-old baby had been adopted by a fairly wealthy parent, with the ability and prospects to provide adequately, in sharp contrast to the humble one-room rented apartment the biological mother lives in, the judge observed that 'child care is a wider ambit. The circumstances of her birth are matters which will one day have to be dealt with. When this happens, the social safety network of parents and relatives will provide the last line of defence. The presence of her biological mother will provide a shock absorber for the news. It is in the best interest of the child that she be returned to her biological mother. This has long-term benefits for the child,' said the judge as he concluded his judgment.
b. "Social workers do not perform their duty efficiently. They do not help us"
Kang'ethe [7] carried out a study on the critical coping challenges facing caregivers of persons living with HIV and AIDS and other terminally ill persons in Kanye. The results showed that all the participants were dissatisfied with the social workers' services. Due to poor service delivery from social workers, it took too long for clients who were assessed for a food basket to get their assessment results and the food basket [7]. The residents mentioned that the department hardly assists destitute persons or orphans. A member of the Village Development Committee, Philip Matante Ward, said that the social services brag about assisting the needy, yet they do the opposite. This view is important in building the argument of this paper, but it is a view, not proof.
The Department of Social and Community Development (S&CD):
Botswana has been said to be among the least corrupt nations in Sub-Saharan Africa. Cases of fraud and malpractice are not as numerous as in other African countries. Despite that, it has been reported that some social workers have been defrauding Government through the local councils of Gaborone and the Southern district, after it was discovered that the social grants offices at both councils had been making payments to fictitious beneficiaries; this was reported in [9]. The above bears on the professional integrity, dignity and image of the profession. The calibre of professionals providing the sensitive services of social work is fundamental to promoting the trust clients need in order to seek assistance without hesitation. Such reports do not encourage anyone, regardless of their vulnerability, to associate themselves with the system. If the helping system is construed as above, then social problems will persist without any intervention, since clients will avoid such a system at all costs.
c. Shelving and neglect of clients' cases
Casework is one of the five traditionally defined social work methods. Traditionally, this approach has focused on those individuals who could not achieve a normal adjustment to life and needed outside attention [10]. There have been allegations from residents, especially in the Francistown area, that some social welfare officers at the department of S&CD neglect cases even where they can see the need to take the case up. Johnson and Huggins [11] indicated that the social casework method in social work has a rich history, with various iterations for working with individuals, families, and people suffering from mental illness, or working in schools. This point was previously made by Popple and Leighninger [12], who went further to describe what social casework does in the above-mentioned interaction. These authors note that social casework involves working with the client to: 1) assess and identify individual and family strengths and needs; 2) develop a case plan to provide appropriate supports and services; 3) implement the case plan using community resources; 4) coordinate and monitor the provision of services; and 5) evaluate client progress and the case plan to determine the continued need for services. In the above scenario, a competent social worker would have built rapport with the client, made an intervention plan, scheduled sessions with the client and provided feedback, and such a complaint would not have surfaced.
d. Unfriendly practitioners
Empirical research in Botswana that sought caregivers' opinions of the services of social workers indicated that social workers were not keen, were not friendly and did not appear to enjoy their work. This was attributed to high caseloads, besides other administrative and logistical challenges [13]. This cannot be attributed only to social workers but also to a system that does not favour or create a conducive environment for social workers to perform at their highest potential. Social work by its nature must create a conducive environment for clients' freedom to express themselves in a welcoming atmosphere. The attitude and appearance of a social worker play an important role in determining an effective working relationship with the client. Social workers who are not keen and who do not enjoy their work are susceptible to providing low-quality services, neglecting clients and harming them in the process. This is against the value of competence in social work, which promotes proficiency in practice.
IMPLICATIONS FOR SOCIAL WORK PRACTICE

a) Establishment of a Social Work Act
The above scenarios are only a few that this paper has picked; there are many similar complaints.
Some are genuine, while others are fabrications. The challenge in validating the above scenarios is that there are no reporting mechanisms to measure the extent to which service beneficiaries are at the mercy of practitioners. These accounts might remain anecdotal because research on social work is at its lowest, there is no reporting system, and there is no ministry concerned with social work, only different departments in different ministries dealing with social welfare. Despite all that, the scenarios picked above depict client suffering at the mercy of the helping system of social workers. The issues of the clientele of social workers may be addressed by establishing a regulatory body in the form of a social workers' professional council. A social workers' council should be established by a Social Work Act, which is currently not available in Botswana. After the council is established, its core mandate will be to regulate practice, service to clients and the calibre of professionals, to enforce the ethics, principles and values of social work, and to deal with violations thereof. The Council on Social Work Education, formed in 1952, played a pivotal role in social work development in the United States of America [14]. Botswana also needs an active council of social work to play the same role. The roles and activities of the council will differ based on different environments, but the principle will be the same: to achieve a fully regulated profession of social work in Botswana. The council of social work is needed to standardize the profession and establish requirements and qualifications, practice, values, principles and education. It would further its mandate through research, negotiation and consultation with relevant systems to establish a contextually relevant social work in the country.
The council would provide the standards of practice, which would include ethics. The word "ethics" refers to custom or habit; it emphasizes what is morally right and how things ought to be [15]. Dolgoff et al. [15] note that professional ethics are intended to help social workers recognize morally correct practice. This definition suggests that ethics are there to facilitate good professional conduct towards the improvement of the quality of life of the client. Ethical standards clarify the level of performance, expertise and expectations; they are an operationally oriented reiteration of ethics [16]. The standards expand on ethics by showing the responsibilities and how work ought to be done.
i. Attainment of a professional status
The question of whether social work is a profession or not should now be put to rest. It has been asked over the years, and it has been answered by progress in the profession in developing the mechanisms that distinguish a profession from a non-profession. Moving forward, there has been a consensus the world over, and the agreement is that social work is a profession. The International Federation of Social Workers [17] defines social work as a practice-based profession and an academic discipline that promotes social change and development, social cohesion, and the empowerment and liberation of people. Principles of social justice, human rights, collective responsibility and respect for diversities are central to social work. Underpinned by theories of social work, the social sciences, the humanities and indigenous knowledge, social work engages people and structures to address life challenges and enhance well-being. This definition may be amplified at national and/or regional levels. Moreover, [18] indicates that social work must critically review what it means by, along with the implications of, the profession's commitments. The profession needs to consider how theory, its academic discipline and social work interventions support these commitments [18].
Social work is a professional activity which calls for ethically professional conduct. An array of values and ethical principles informs the social work profession. The IFSW ethical code was recognized in 2014 by the International Federation of Social Workers and the International Association of Schools of Social Work in the global definition of social work, which is layered and encourages regional and national amplifications [19]. When social work loses the most important trait of a profession, it simply ceases to exist as a profession, and its rejection and criticism by society take preeminence.
The changing organization and practice of social work:
i. Fragmentation
Fragmentation regularly spoils professional identities and generates uncertainty amidst attempts to provide effective or reliable services [20]. The author continues that fragmented, disorganized, or reductive provisions often generate new risks for the recipients of services. Social work's plurality, according to Thomas [21], emerges from differences in how social workers conceive of and act on social issues. Furthermore, [22] indicates that such pluralism imbues social work with contrasting forms of action, thereby reducing the integration of social work through what Knorr-Cetina (1999) considers an epistemic culture in which knowledge emerges from science and its practice.
It is a well-known fact that in Botswana, several social science graduates, such as sociologists, psychologists, adult education and theology graduates, are employed as social workers in the public service [4]. This is a serious distortion of professional identity, since their adherence to ethical practice is doubtful and their education is irrelevant to social work practice. In order to maintain identity, ethics are needed to define social work practice. Given these fragmentations, the profession can hardly exist without ethics, lest social workers be confused with every other profession that arises around human care.
ii. The changing role of the social worker
Social workers now perform tasks such as resource management, assessment, and the monitoring and evaluation of poverty eradication projects, rather than the traditional casework, and this constitutes a threat to traditional conceptions of professional education [23]. The statement of ethical principles enables social workers to ensure the professional integrity of their practice. In addition, it is the basis for all social work codes of ethics for members of the International Federation of Social Workers [23]. Members of IFSW are required to support their social work members in upholding these ethical principles, their own code of ethics and the integrity of the profession.
iii. Contact with the client
In a profession where the contact is distinct, direct and personal, practice is open to possible abuse owing to its intimacy; a code of ethics therefore protects both professionals and service users [24]. Social workers work with various clients, with the possibility of engaging in unprofessional relationships, such as sexual relationships with these clients. The NASW code of ethics states that sexual activity or sexual contact with clients' relatives or other individuals with whom clients maintain a personal relationship has the potential to be harmful to the client and may make it difficult for the social worker and client to maintain appropriate professional boundaries [25].
If there is no regulation that instructs social workers not to enter into intimate relationships with their clients, clients will suffer sexual exploitation from workers. For instance, sexual favors may be given to some social workers in exchange for the client's required service. Social work clients, who are already disadvantaged, would be victims of exploitation under a loose helping system if ethics are not in place. The primary objective of the Code of Ethics is to make these implicit principles explicit for the protection of clients and other members of society [22]. Clients need to be able to trust professionals both to have enough expertise to do what they claim to be able to do, and not to deceive or abuse the service user.
iv. Duty towards the client
This means the obligations of the worker towards the client. The ethical standards indicate the responsibility of the social worker towards the client. The duty is said to be more clearly defined by ethics when there is a single user than when there are multiple users [24]. This assertion is very limited: it assumes that ethics are obvious where professionals are accountable to a large population, which is dangerous because such a system would be open to subjective practice and professionals would make countless errors. Therefore, explicit and well-defined ethics and ethical standards will help to define a common standard of practice. For a client to hold the social worker accountable, explicit ethics and ethical standards are needed, because the code provides ethical standards to which the general public can hold the social work profession accountable [23]. The ethics should be contextualized to the Botswana setting so that social workers and the general public can understand them better.
Professional values versus Personal values:
Social workers face tension between professional values and personal values [26]. Social workers and their clients hold different values, beliefs and philosophies. If there is no common guide in such a situation, a social worker might take advantage of his or her professional role and impose personal values on the client. A Christian social worker may persuade a Muslim client to convert to Christianity. A social worker who does not believe in legal abortion might simply tell a client who comes for counseling on abortion to refrain from abortion if there are no ethics to guide the working relationship. NASW [25], in its ethical standards, discourages a social worker from taking unfair advantage of any professional relationship or exploiting others to further personal or religious interests, and it emphasizes the right of a client to self-determination. So, in this case, a code helps to protect the client. Ethics and ethical standards help professionals to set aside personal values and assume neutrality to help clients.
b) Association of Social workers
The Botswana National Association of Social Workers (BONASW), originally formed as the Botswana Social Workers Association (BOSWA), exists as a unified association to enhance the image and standards of social work locally and globally [27]. The association was formed to bring together social work practitioners in different organizations to lobby jointly for the establishment of a regulatory body and to act as gatekeepers of the profession. It also intends to serve as a united voice for the social work profession to advance the interests and contributions of social work in its broader pursuit of human dignity and social justice. The Botswana National Association of Social Workers was established in 2001 and remained dormant until 2010, when over 100 social workers converged to resuscitate it; however, attendance at its meetings has continued to decline to date, hence its equivocal impact on issues of professional interest, inclusive of the council [28].
Generally, associations are meant to bring social workers together to share experiences and contribute to the shape of the profession. First, social workers must take responsibility for the destiny of their profession, demonstrate their passion for their noble profession and see the need to be organized under one roof to shape and defend the image of social work in Botswana. The association achieves the policy dimension of developmental social work practice, in which social workers implement, analyze, comment on, influence, and generally work towards making policies just and meaningful [29]. For instance, the issues of homosexual, transgender and bisexual persons, which remain a bone of contention in African countries inclusive of Botswana, should see social workers in the forefront, yet such minority groups continue to fight their inclusion battle without social workers.
c) The Department of Social Protection
The Department of Social Protection under the Ministry of Local Government and Rural Development is a policy-making body which also oversees the implementation of programs and policies by social workers and plays an advisory role in social work practice in Botswana. It also prides itself on promoting optimal functioning for all. Therefore, the atrocities suffered by clients at the hands of some social workers should be a matter of urgency to the department and should ignite a deliberate move to establish a social workers' council in order to safeguard the dignity and worth of clients in the provision of social services. To further assist the organization of social workers, the department should encourage practitioners to join the association by supporting it with continuing professional development programmes that will attract social workers to attend. It should also lobby government for the recognition of the social work profession and give social workers a stake at the table of policy crafting, so that they are part of the process and not only implementers.
CONCLUSION
In modern societies where traditional norms and values have either broken down or are fast breaking down, situations of what [29] refers to as "anomie" have become quite common. Though individual means of livelihood in many countries have generally improved, many people still face difficult existential conditions, for example in situations characterized by war, famine, poverty, crime, disease, and associated personal and familial traumas and maladjustments. Social workers (caseworkers) are required to mitigate the effects of these problems. Their role in providing support and a sense of belonging to maladjusted persons cannot be overestimated.
Using their professional skills and knowledge, social caseworkers help in assessing clients' needs and applying agency, community, and public welfare resources and programmes to address relevant social, health or economic problems. They help clients who become eligible for a variety of services designed to improve their economic, social and/or health functioning, thereby working toward improving the clients' quality of life or standard of living [10]. The ethical standard on competence emphasizes that social workers should exercise careful judgment and take responsible steps to ensure the competence of their work and to protect clients from harm [25]. Delays in services to clients on palliative care indicate diminished professional judgment and incompetence on the part of social workers, which ultimately contribute to clients dying while awaiting assistance.
The above account of social service delivery in Botswana sought to depict the role of ethics and values of social work in Botswana. It has captured a few cases of 'malpractice' in Botswana and how they have pulled the good name of social work into disrepute. Moreover, the paper examined the ethics of social work and their importance. The paper has argued that there is a need for a regulatory body, a council of social work, concerned with regulating how social workers deliver their services to clients. As indicated throughout the paper, there is a need for the urgent establishment of a social workers' professional council to protect clients and achieve client satisfaction. The government should engage social workers, social work educators and experts to drive this endeavor, lest the profession lose its integrity and relevancy and remain untrusted by its clientele. Nevertheless, these observations should not be construed as a demonstration of miserable social work in Botswana but as an impetus to calibrate, invigorate and standardize the noble profession.
CONSENT
As per international standard or university standard, respondents' written consent has been collected and preserved by the author(s).
ETHICAL APPROVAL
As per international standard or university standard written ethical approval has been collected and preserved by the author(s).
Response of Prolyl 4 Hydroxylases, Arabinogalactan Proteins and Homogalacturonans in Four Olive Cultivars under Long-Term Salinity Stress in Relation to Physiological and Morphological Changes
Salinity stress in olive (Olea europaea L.) induces responses at the morphological, physiological and molecular levels, affecting plant productivity. Four olive cultivars with differential tolerance to salt were grown under saline conditions in long barrels, allowing regular root growth to mimic field conditions. Arvanitolia and Lefkolia were previously reported as tolerant to salinity, while Koroneiki and Gaidourelia were characterized as sensitive, exhibiting a decrease in leaf length and leaf area index after 90 days of salinity. Prolyl 4-hydroxylases (P4Hs) hydroxylate cell wall glycoproteins such as arabinogalactan proteins (AGPs). The expression patterns of P4Hs and AGPs under saline conditions showed cultivar-dependent differences in leaves and roots. In the tolerant cultivars, no changes in OeP4H and OeAGP mRNAs were observed, while in the sensitive cultivars, the majority of OeP4Hs and OeAGPs were upregulated in leaves. Immunodetection showed that the AGP signal intensity and the cortical cell size, shape and intercellular spaces under saline conditions were similar to the control in Arvanitolia, while in Koroneiki, a weak AGP signal was associated with irregular cells and intercellular spaces, leading to aerenchyma formation after 45 days of NaCl treatment. Moreover, the acceleration of endodermal development and the formation of exodermal and cortical cells with thickened cell walls were observed, and an overall decrease in the abundance of cell wall homogalacturonans was detected in salt-treated roots. In conclusion, Arvanitolia and Lefkolia exhibited the highest adaptive capacity to salinity, indicating that their use as rootstocks might provide increased tolerance to irrigation with saline water.
Introduction
The olive tree is known to grow in adverse environments, ranging from dry conditions to heavy-rain areas [1]. However, the Mediterranean basin, which is the major olive cultivation region in the world, has been identified as one of the "Hot-Spots" in future climate change projections, posing a threat to olive cultivation [2].
Climate change in the Mediterranean basin will affect year-round precipitation, considering that the majority of olive trees grow under rainfed conditions [3]. In this case, low-quality saline water will be used for the irrigation of olive orchards, inducing salinity stress and limiting vegetative growth and productivity [4].
Plant Material
Two-year-old olive trees of Koroneiki, Arvanitolia, Lefkolia and Gaidourelia were placed in containers with a 150 L volume and 90 cm length, containing approximately 130 L of a sand-perlite mixture (50% sand and 50% perlite), and were fertilized with half-strength Hoagland solution (Figure S1) [46,47]. The olive trees were grown for 10 months until the rooting system was fully developed, as is shown for Koroneiki in Figure 1A. The containers were placed one meter apart from each other, while the cultivars were positioned randomly in four rows. Ten trees per cultivar were used for the experiments: five trees for the salinity condition and five for the control. Fertilization was performed either manually or through a hydroponic system via full-strength Hoagland nutrient solution. The NaCl concentration was set at 120 mM, and the application frequency was constant in every irrigation. The experiment was conducted for a period of 90 days starting in spring. At least three biological replicates from roots and leaves were collected at 0 days and after 45 and 90 days of NaCl treatment. Roots and fully developed leaves were ground into fine powder in liquid N2 and stored at −80 °C.
Morphological Analysis of Leaves
Fully developed leaves were collected from the middle parts of newly grown shoots around the tree at a height similar to the shoulder level of an average human. After scanning the leaves at a 600 dpi resolution, they were subjected to segmentation by using an algorithm [48] implemented in MATLAB (The Mathworks Inc., Natick, MA, USA) using the Image Processing Toolbox. This method was used for object contour extraction from binary images and identified various geometrical characteristics, which were assigned to different morphological traits.
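The MATLAB implementation itself is not reproduced in the paper, but the measurement pipeline it describes can be sketched. The following Python fragment is offered only as an illustrative analogue (scikit-image in place of the MATLAB Image Processing Toolbox): it labels leaf objects in a binarized 600 dpi scan and derives blade area, leaf length and maximum transversal diameter from ellipse-fit region properties; all function and variable names here are hypothetical.

```python
from skimage import io, measure

PX_PER_CM = 600 / 2.54  # pixels per cm at the 600 dpi scan resolution

def leaf_traits(binary_scan, px_per_cm=PX_PER_CM):
    """Geometric traits for each leaf object in a binarized scan."""
    labeled = measure.label(binary_scan)  # one integer label per leaf
    traits = []
    for region in measure.regionprops(labeled):
        traits.append({
            "blade_area_cm2": region.area / px_per_cm ** 2,
            "leaf_length_cm": region.major_axis_length / px_per_cm,
            "max_transversal_cm": region.minor_axis_length / px_per_cm,
        })
    return traits

# Example: leaf_traits(io.imread("leaves_600dpi.png", as_gray=True) > 0.5)
```

Note that the major and minor axis lengths come from an ellipse fitted to each leaf region, which approximates, rather than reproduces, the contour-based traits of the original algorithm [48].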
Chlorophyll Pigment and MDA (Malondialdehyde) Determination
Fully developed leaf tissues were collected at least in triplicate from the salinity treatment time course points of the four cultivars and were used to extract chlorophyll pigments according to the DMSO-based protocol [49]. Chlorophyll a (Chla), chlorophyll b (Chlb) and total chlorophyll (Chlt) were determined using Arnon's (1949) equations. Lipid peroxidation was determined by using the TBA (thiobarbituric acid) reaction, and then the MDA content was quantified [50,51].
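For orientation, the two determinations can be written out numerically. The sketch below uses Arnon's (1949) coefficients, which were derived for acetone extracts, and the commonly used 155 mM⁻¹ cm⁻¹ extinction coefficient for the TBA-MDA adduct; the DMSO protocol [49] and refs. [50,51] may apply slightly different constants, so this is a minimal illustration rather than the study's exact computation.

```python
def arnon_chlorophyll(a663, a645):
    """Chla, Chlb and total chlorophyll (mg/L) from absorbances at
    663 and 645 nm, using Arnon's (1949) coefficients."""
    chla = 12.7 * a663 - 2.69 * a645
    chlb = 22.9 * a645 - 4.68 * a663
    chlt = 20.2 * a645 + 8.02 * a663
    return chla, chlb, chlt

def mda_uM(a532, a600):
    """MDA concentration (umol/L) from the TBA assay, assuming the usual
    155 mM^-1 cm^-1 extinction coefficient and a 1 cm light path."""
    return (a532 - a600) / 155.0 * 1000.0  # mM -> uM
```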
Protein Prediction of AGPs and P4Hs and Phylogenetic Analysis of P4Hs
We developed an algorithm in MATLAB (The Mathworks Inc., Natick, MA, USA) to identify possible classical AGPs from the Olea europaea proteome using the bioinformatics method in [52]. Classical AGPs were initially discriminated for biased amino acid compositions that were at least 50%. The identified proteins were then further investigated with criteria regarding the existence of a signal peptide, as well as the presence and placement of repeats of the dipeptides AP, PA, SP and TP, as these sequences are frequently seen in known AGPs [52].
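The screening rules described above translate into a short filter. In the sketch below it is assumed that the "biased composition" refers to the Pro, Ala, Ser and Thr residues typical of classical AGPs, and that signal peptides are predicted externally (e.g., with SignalP) and passed in as a flag; the dipeptide-repeat threshold min_repeats is a hypothetical parameter, not a value stated in the paper.

```python
import re

DIPEPTIDE = re.compile(r"AP|PA|SP|TP")  # dipeptides frequent in known AGPs [52]

def looks_like_classical_agp(sequence, has_signal_peptide,
                             min_past=0.50, min_repeats=4):
    """Flag a one-letter protein sequence as a candidate classical AGP."""
    past_fraction = sum(sequence.count(aa) for aa in "PAST") / len(sequence)
    n_repeats = len(DIPEPTIDE.findall(sequence))  # non-overlapping matches
    return (has_signal_peptide
            and past_fraction >= min_past
            and n_repeats >= min_repeats)
```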
RNA Extraction and Real-Time RT-PCR
Total RNA was isolated from mature roots in a non-destructive way, considering that the trees were grown in 90 cm long containers, using the PureLink™ RNA Mini Kit (Thermo Fisher Scientific®, Waltham, MA, USA). Total RNA was isolated from mature leaves by using a phenol-chloroform protocol [61,62]. Total RNA was further purified with NucleoSpin® RNA Clean-up XS (Macherey-Nagel, Düren, Germany). Total RNA was incubated with DNase I, RNase-free enzyme (Thermo Scientific, Waltham, MA, USA) and then fractionated by electrophoresis on an RNA-denaturing, GelRed® (Biotium, Fremont, CA, USA)-stained 1.5% agarose gel. Approximately 1-2 µg of total RNA was reverse-transcribed into first-strand cDNA with SuperScript™ II RT (Invitrogen, Waltham, MA, USA). Real-time RT-PCR was conducted on the CFX Connect™ Real-Time PCR Detection System (Bio-Rad®, Hercules, CA, USA) with SYBR Select Master Mix (Applied Biosystems®, Waltham, MA, USA). The data were analyzed by using the 2^−ΔΔCt method [63,64]. The olive Act7a Actin cDNA (identifier: OLEEUCl025648|Contig2 OLEAEST TC) was used as a reference gene [65]. The primers for the gene expression studies are presented in Table S1. Three biological replicates were used for each sample.
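The 2^−ΔΔCt normalization is compact enough to state explicitly. The function below shows the standard calculation against the reference gene; the Ct values in the usage comment are invented for illustration.

```python
def fold_change_ddct(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression (2^-ddCt) of a target gene versus the control
    sample, normalized to a reference gene such as the olive Act7a actin."""
    d_ct_treated = ct_target - ct_ref            # normalize treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # normalize control sample
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Example (invented Ct values):
# fold_change_ddct(24.1, 20.0, 25.6, 20.1)  # ~2.6-fold upregulation
```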
Western Blotting
Total proteins were extracted according to Woodson [66] with some modifications. Approximately 0.2 g of root and leaf tissue was ground in liquid nitrogen, and then extraction buffer (65 mM Tris-HCl, pH 6.8, 2% (w/v) SDS, 700 mM β-mercaptoethanol, 2 mM EDTA, 1:10 volume protease inhibitor and 1 mM PMSF) was added at a ratio of 1:3 volume. After vortexing, the mixture was boiled at 99 °C for 5 min and centrifuged for 10 min at 4 °C. Approximately 20 µg protein samples, after Bradford assay quantification, were separated by SDS-PAGE in a 10% polyacrylamide gel (30% acrylamide/bisacrylamide) and transferred to a hydrophobic PVDF membrane (Millipore Immobilon-P, Burlington, MA, USA). The membrane was blocked with 5% non-fat dry milk dissolved in 1× TBST buffer for 1 h at room temperature. The LM2 and JIM13 antibodies (Plant Probes, Leeds, UK) were used to determine epitope-bound AGPs. The β-actin monoclonal antibody (ANTIBODIES, Germany) was used as a loading control. The secondary antibodies were an anti-rat monoclonal antibody (Agrisera, Geneva, Switzerland) for LM2 and JIM13 and an anti-mouse monoclonal antibody (Agrisera, Switzerland) for β-actin. A chemiluminescent solution (SuperSignal West Pico Chemiluminescent Substrate, Thermo Scientific, Waltham, MA, USA) was used for detection. After incubation, the membranes were exposed to X-ray film for 1-2 min.
Root Fixation
Whole root parts, 1 cm in length, were fixed for 1.5 h in 4% (w/v) paraformaldehyde in PEM buffer (50 mM piperazine-1,4-bis(2-ethanesulfonic acid), 5 mM ethyleneglycol-O,O′-bis(2-aminoethyl)-N,N,N′,N′-tetraacetic acid, 5 mM MgSO4·7H2O, pH 6.8) containing 2% (v/v) dimethyl sulfoxide and 0.1% (v/v) Triton X-100. The specimens were dehydrated in a graded ethanol series (10-90%) diluted in distilled water and finally three times in absolute ethanol for 30 min (each step) on ice. The material was post-fixed with 0.25% (w/v) osmium tetroxide added to the 30% ethanol step overnight at 4 °C during the dehydration process. The material was infiltrated with LR White (LRW; Sigma, Darmstadt, Germany) acrylic resin diluted in ethanol in 10% steps (10% LRW in ethanol, followed by 20%, and gradually higher LRW concentrations until the total replacement of ethanol with 100% LRW; 1 h in each step) at 4 °C. When pure resin was finally applied, the samples were embedded in gelatin capsules filled with LRW resin and polymerized at 60 °C for 48 h.
Root Sectioning by Ultramicrotome and Toluidine Blue O Staining
After they were embedded in LRW, the samples were sectioned using an ULTROTOME III TYPE 8801A ultramicrotome (LKB, Stockholm, Sweden) equipped with a glass knife. Semithin root sections (0.5-2 µm) were placed on a microscope slide and stained with 0.5% (w/v) toluidine blue O to observe the morphology and the structure of the roots.
Callose and Lignin Localization
Callose was localized in root semithin sections (0.5-2 µm) stained with 0.05% (w/v) aniline blue (Sigma, C.I. 42725) in 0.07 M K2HPO4 buffer, pH 8.5. Sections remained in aniline blue solution during observation with a Zeiss Axioplan microscope (Zeiss, Oberkochen, Germany) equipped with a micrometric scale, a differential interference contrast (DIC) system and an Axiocam MRc5 digital camera. Lignin was observed under the optical microscope after staining with 3% phloroglucinol.
Cell Wall Epitope Immunolocalization
For cell wall epitope immunolocalization, the protocol described by Giannoutsou et al. [67] was applied. In detail, root semithin sections were blocked in 5% bovine serum albumin (BSA) for about 1.5-2 h. Then, the samples were thoroughly washed with pH 7 phosphate-buffered saline (PBS) (3 × 10 min), followed by an overnight incubation with the appropriate primary antibody diluted 1:40 in PBS (pH 7) at room temperature. After being rinsed with PBS (pH 7, 3 × 10 min), the samples were incubated with the appropriate secondary antibody diluted 1:40 in PBS (pH 7) for 3 h at 37 °C. Finally, the samples were once again rinsed three times with PBS (pH 7) for 10 min and mounted with a mixture of glycerol/PBS (2:1, v/v) containing 0.5% (w/v) p-phenylenediamine (anti-fade solution).
For the immunostaining of HGs, the protocol described by Giannoutsou et al. [67] was applied. LM-20, LM-19 and LM-18, as well as JIM-7 and JIM-5 (Plant Probes, Leeds, UK), were used as primary antibodies, and FITC-conjugated anti-rat IgG (Sigma, Darmstadt, Germany) was used as the secondary antibody. All antibodies were diluted 1:40 in PBS at pH 7. During the immunolabeling procedure, the sections were washed with PBS. LM-20 is a monoclonal antibody that recognizes the HG domain of pectic polysaccharides. The antibody requires methyl-esters for the recognition of HGs and has no known cross-reactivity with other polymers. It does not bind to unesterified HGs and was applied for the detection of fully methyl-esterified pectin. JIM-7 recognizes partially methyl-esterified epitopes of HGs having a high degree of methyl esterification. LM-20 was recommended as a cross-check in place of JIM-7 due to its higher specificity for fully methyl-esterified HGs. JIM-5 is capable of binding even to unesterified HGs, something that JIM-7 is incapable of. The LM-19 monoclonal antibody was also applied for the detection of demethyl-esterified HGs but appears to prefer unesterified HGs. This antibody was also used as a cross-check in place of JIM-5 due to its ability to bind more efficiently to unesterified HGs. Lastly, the LM-18 monoclonal antibody binds to partially methyl-esterified HGs (Plant Probes, Leeds, UK) (Table S4).
For the immunostaining of arabinogalactan proteins, JIM13 (Plant Probes, Leeds, UK) was used as the primary antibody, and FITC-conjugated anti-rat IgG (Sigma, Darmstadt, Germany) was the secondary antibody. All antibodies were diluted 1:40 in PBS with a pH of 7. During the immunolabeling procedure with the antibodies, the sections were washed with PBS (pH 7).
Leaf Morphological Characteristics in Response to Salinity Stress
By using the measuring tools in ImageJ (NIH, Bethesda, MD, USA) [68], we measured the length of the depicted root at 86.783 cm, while the convex hull area was estimated at 2825.017 cm2 (Figure 1B). The salt treatment comprised 120 mM NaCl for a 90-day time course, while root and leaf tissues were sampled after 45 and 90 days (Figure 1).
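As an illustration of the convex hull measurement, the area can be recovered directly from the pixel coordinates of a segmented root, as in the sketch below. This is not the actual ImageJ workflow used in the study, and the binarization step is assumed to have been done beforehand.

```python
import numpy as np
from scipy.spatial import ConvexHull

def root_convex_hull_area_cm2(binary_root, px_per_cm):
    """Convex hull area (cm^2) of a segmented root system.

    binary_root -- 2-D boolean array, True where root pixels lie
    px_per_cm   -- scan resolution, e.g., 600 / 2.54 for a 600 dpi scan
    """
    ys, xs = np.nonzero(binary_root)              # root pixel coordinates
    hull = ConvexHull(np.column_stack([xs, ys]))
    # For 2-D points, ConvexHull.volume is the enclosed area;
    # ConvexHull.area would give the hull perimeter instead.
    return hull.volume / px_per_cm ** 2
```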
Significant variation in morphological responses was observed among cultivars. Koroneiki showed smaller leaves ( Figure 1C) and necrosis at the edge of fully developed leaves, accompanied by severe leaf drop after 90 days, as was previously reported [69]. The leaf area index of NaCl-treated Koroneiki decreased by approximately 30% after 45 days and an additional 10% by the end of 90 days ( Figure 1D).
The leaf area index of Arvanitolia decreased by almost 20% after 45 days of salinity and fully recovered to pre-treatment levels after 90 days (Figure 1D). Decreases in leaf height of almost 20% and 8% were observed in Koroneiki and Gaidourelia, respectively (Figure 1D).
The maximum transversal leaf diameter (the greatest horizontal diameter of the leaf blade) in Arvanitolia decreased after 45 days, while it had recovered to pre-treatment levels by 90 days. However, in Koroneiki, a salt-sensitive cultivar, the maximum transversal leaf diameter decreased by 10% and 15% after 45 and 90 days, respectively ( Figure 1D).
Leaf Chlorophyll and Malondialdehyde (MDA) Contents under Saline Conditions
The leaf Chla and Chlb contents were determined after 45 and 90 days of salinity ( Figure 2A). In salt-sensitive Koroneiki, Chla and total chlorophyll (Chlt) decreased after 45 days, while no differences in Chlb were observed ( Figure 2A). However, in Arvanitolia, Chla, Chlb and Chlt increased, while in Lefkolia, only Chlb increased after 45 days ( Figure 2A). No alterations in the contents of Chla, Chlb or Chlt were observed in Gaidourelia ( Figure 2A).
The content of Chla was higher than that of Chlb in all cultivars in the control and salinity treatments (Figure 2A). However, the Chla-to-Chlb ratio showed significant variation between salt-tolerant and salt-sensitive cultivars (Table 1). In Koroneiki, the Chla/Chlb ratio decreased from 2.5 to 1.6 and from 2.3 to 1.1 after 45 and 90 days, respectively (Table 1). Approximately similar reductions in the Chla/Chlb ratio were also observed for Gaidourelia (Table 1). The salt-tolerant cultivars, Arvanitolia and Lefkolia, exhibited slightly higher Chla/Chlb ratios after 90 days of salinity (Table 1). The lipid peroxidation of membranes was also determined by the content of malondialdehyde (MDA) accumulation in the leaf tissue of all four cultivars (Figure 2B). The MDA content significantly increased in Koroneiki and Gaidourelia after 45 and 90 days of salinity, respectively (Figure 2B). The salt-tolerant cultivars, Arvanitolia and Lefkolia, did not show any changes in the MDA accumulation rate, possibly indicating a lack of oxidative stress (Figure 2B).
Olive P4H Gene Family
A phylogenetic tree comprising 16 olive genes was constructed based on 19 previously classified Arabidopsis genes (Figure 3; personal communication). Each cluster comprised an Arabidopsis gene, which was considered the basis, as well as olive, tomato and grape genes with the highest identity at the amino acid level. Multiple alignment was performed, and all the important motifs and essential amino acids for P4H activity are highlighted (Figure S2).
The deduced olive, tomato and grape P4H amino acid sequences were grouped into clusters based on their percent identity with Arabidopsis P4Hs (Figure 3). The basis for each cluster was an Arabidopsis polypeptide. The 16 putative OeP4H polypeptides were clustered into six groups (Figure 3). Several deduced olive P4H polypeptides also showed high percent identity among them, ranging from 76 to 84%, and were considered olive P4H-like polypeptides. OeP4H3, OeP4H10, OeP4H1 and OeP4H9 showed 78%, 76%, 83%, and 84% and 79% identity with the OeP4H3-like, OeP4H10-like, OeP4H1-like, OeP4H9-like1 and OeP4H9-like2 polypeptides, respectively.
The most divergent group in terms of percent identity at the amino acid level comprised four P4Hs: OeP4H2, OeP4H6, OeP4H7 and OeP4H12. Another group also included four, OeP4H3, OeP4H3-like, OeP4H10 and OeP4H10-like, with a high percentage of identity at the amino acid level (Figure 3). OeP4H9, OeP4H9-like1 and OeP4H9-like2 were members of a distinct group (Figure 3). One cluster comprised only one olive P4H, OeP4H14, while another two clusters included two olive P4Hs each: OeP4H15 and OeP4H15-like in one cluster and OeP4H1 and OeP4H1-like in the other (Figure 3).
The gene structure and domain organization were conserved in olive P4Hs according to an in silico analysis (Tables S1 and S2). The positions of the exons for each gene and the positions of the protein domains in the deduced amino acid sequence of each P4H polypeptide were conserved. The P4Hc (IPR006620, SM00702) and Fe2OG-Oxy (IPR005123, PF13640) domains are located at the C-terminal region of the predicted OeP4H protein sequences (Table S2).
P4Hs are type II membrane-anchored proteins localized in the ER and Golgi. Almost all of the OeP4Hs comprise transmembrane helices or an ER retention signal at the N-terminal or C-terminal region or both (Tables S1 and S2). The ER retention signal might indicate permanent localization in the ER, but the presence of a cytoplasmic tail facilitates transport to the Golgi via the secretory pathway. The cytoplasmic tail comprises 5-20 amino acids (Supplementary Table S2) and is located at the N-terminus of the protein. The cytoplasmic tail consists of positively charged amino acids similar to the RXR motif or the dibasic signal found in mammalian and yeast endoplasmic reticulum (ER) and Golgi proteins. Similar cytoplasmic domains have been found to be responsible for the transfer of prolyl 4-hydroxylases to the Golgi apparatus in plants. Mutant GFP prolyl 4-hydroxylase proteins in which the basic amino acids of the cytoplasmic tail were substituted with non-charged hydrophilic amino acids were found to be located in the ER in tobacco BY-2 cells, unlike the control proteins, which were observed in both the ER and Golgi apparatus [70].
Signal peptide and N-glycosylation prediction tools indicated that 7 out of the 16 OeP4Hs were predicted to have a signal peptide as well as N-glycosylation sites, which indicated that these proteins might undergo post-translational modifications. Based on the 3D structure of OeP4H ( Figure S3), 46 residues comprising the signal peptide and the transmembrane domain were missing.
Gene Expression of P4Hs and AGPs in Roots and Leaves in Response to Salinity Stress
Despite the involvement of plant P4Hs in abiotic stresses, no information on their response to salinity is available for any plant species, including olive. Among the 16 putative olive P4Hs, only 7 showed detectable expression levels in either roots or leaves ( Figure 4).
Among all cultivars, the root transcript levels of most OeP4Hs were stable under saline conditions in the tolerant Arvanitolia (Figure 4). Expression peaks for OeP4H2 and OeP4H1-like were observed at 45 days, while OeP4H7 was downregulated after 90 days in roots as well as in leaves (Figure 4). OeP4H10-like was downregulated in leaves after 90 days (Figure 4).
Salinity-tolerant Lefkolia exhibited expression peaks for OeP4H1, OeP4H2, OeP4H6 and OeP4H9 in roots after 45 days, while OeP4H6 and OeP4H9 were upregulated and OeP4H7 was downregulated at 90 days ( Figure 4). In leaves, the transcript abundance of OeP4H9, OeP4H6, OeP4H2 and OeP4H10-like decreased at 90 days, while that of OeP4H7 increased ( Figure 4). Moreover, the expression of OeP4H1-like was upregulated at both time points in leaves, and that of OeP4H1 increased at 45 days and decreased at 90 days ( Figure 5).
In the salt-sensitive cultivar Koroneiki, the expression patterns of OeP4H2 and OeP4H9 in roots increased at 90 days, while OeP4H1-like, OeP4H10-like, OeP4H7 and OeP4H1 were downregulated ( Figure 4). In leaves, OeP4H1, OeP4H2, OeP4H9 and OeP4H10-like were upregulated at both time points, while OeP4H6 and OeP4H1-like were upregulated only after 45 days of salinity stress (Figure 4).
In Gaidourelia, a salt-sensitive cultivar, OeP4H1, OeP4H6 and OeP4H9 showed expression peaks at 45 days in roots, and OeP4H10-like had an expression peak only at 90 days (Figures 4 and 5). The transcripts of both OeP4H1 and OeP4H1-like decreased after 90 days (Figures 4 and 5). In leaves, the OeP4H1 and OeP4H6 expression levels were upregulated after 90 days, while those of OeP4H9 and OeP4H10-like increased after both 45 and 90 days and after 45 days only, respectively (Figure 5).
The expression levels of two AGPs, OeAGP4-like and OeAGP10-like, were determined in roots and leaves during the salinity treatment time course (Figure 5). In Arvanitolia, OeAGP4-like and OeAGP10-like exhibited no changes in expression in roots, while in leaves, OeAGP4-like was down- and upregulated after 45 and 90 days, respectively (Figure 5). Lefkolia showed an increase in transcript abundance for OeAGP10-like in roots and the upregulation of both AGPs in leaves throughout the salinity treatment time course (Figure 5). In Koroneiki, the transcript levels of OeAGP4-like and OeAGP10-like were downregulated in roots. In leaves, a decrease in expression was observed for OeAGP4-like and OeAGP10-like after 45 days, which was followed by upregulation after 90 days only for OeAGP10-like (Figure 5). For Gaidourelia, OeAGP4-like showed a significant increase in transcript levels after 90 days in roots, while OeAGP10-like was up- and downregulated after 45 and 90 days, respectively (Figure 5). In leaves, OeAGP4-like transcripts decreased after 45 days, while those of OeAGP10-like were down- and upregulated after 45 and 90 days, respectively (Figure 5). No consistent trends in AGP expression among the four cultivars were observed, despite minor changes in transcript abundance throughout the salinity treatment time course.
Protein Levels of AGP-Bound Epitopes in Roots and Leaves in Response to Salinity Stress
The AGP-bound epitopes were detected by Western blot analysis in roots in response to salinity ( Figure 6). LM2-bound AGPs in Koroneiki, Gaidourelia and Lefkolia showed stable protein levels after 45 and 90 days of exposure to salinity stress, while, in Arvanitolia, an increase was observed mainly after 90 days of treatment ( Figure 6). The JIM13-bound AGPs also exhibited stable accumulation levels in Koroneiki, Arvanitolia and Lefkolia under saline conditions, while a marginally lower signal was observed in Gaidourelia at both time points (Figure 6).
Cell Morphology and AGP and Pectin Immunolocalization in Koroneiki Roots
The effect of salinity on the anatomy and cell morphology in Koroneiki was determined by analyzing root cross-sections ( Figure 7A-K). The outer cell layer constitutes the rhizodermis (1 in Figure 7D,F), while multiple parenchyma cell layers form the cortex (2b in Figure 7B,F), in which the exodermis is the first layer (2a in Figure 7D,F,I) and the endodermis is the innermost layer (2c in Figure 7B,G-I). The vascular tissue is surrounded by the pericycle (3c in Figure 7C,G-I), while a distinct separation of xylem (3a in Figure 7C,G,I) and phloem elements (3b in Figure 7C,G,I) was visible.
The cortical cells of the control root were round and arranged in homocentric circles ( Figure 7A-D). The cortical cells proximal to the endodermis were smaller and compactly arranged without intercellular spaces, while those near the exodermis were larger with small and equally distributed intercellular spaces ( Figure 7B-D). Differences in the morphology of specific cell types in root sections were observed under saline conditions. In the salt-treated cortex, the cells were arranged in a non-orderly way, and the intercellular spaces were variable, ranging in area from small to large, indicating the formation of aerenchyma, which is usually observed under conditions of oxygen deficiency, such as submergence and waterlogging (arrows in Figure 7I-K). The diameters of cells ranged from 10 µm to more than 20 µm ( Figure 7F-K), while the shapes were circular, oval, elongated and, in some cases, polygonal ( Figure 7F-K). The endodermal cells were compactly arranged without intercellular spaces, despite their size heterogeneity (arrows in Figure 7L). The phloem elements were distinct between the xylem rays (3b in Figure 7L,M).
In the control roots, after aniline blue staining, the xylem of the vascular cylinder fluoresced (3a in Figure 7L), as did the sclerenchymatic tissue surrounding the sieve tubes and companion cells of the phloem (3b in Figure 7L). In the control roots, the cell walls of the endodermis were not thickened, with only a weak fluorescent signal at the anticlinal cell walls (arrows in Figure 7L). In contrast, in the salt-treated roots, the thickening in the endodermal cells appeared not only at the anticlinal but also at the periclinal cell walls (arrows in Figure 7M). Changes were also observed in the exodermis of the control and salt-treated samples. The cells of the exodermis were larger than those observed in the control sample (compare 2a in Figure 7O to 2a in Figure 7N), and their cell walls fluoresced intensely after staining with aniline blue (2a in Figure 7O).
JIM13-bound AGPs were localized throughout the root cross-sections ( Figure 7P,Q), while a weaker fluorescent signal was observed under saline conditions, especially in the cortex ( Figure 7R,S). Fully methyl-esterified HGs detected by the LM20 antibody were present in the cortical cells, as well as in the vascular cylinder in control root cross-sections ( Figure 8A,C). A distinct signal was observed in the younger phloem cell walls near the cambium ( Figure 8C), while the anticlinal cell walls of the endodermis exhibited a lack of an HG localization signal, possibly due to the presence of the Casparian strip formed at the anticlinal cell wall (arrow in Figure 8C). In the salt-treated roots, the cortical cells showed a weaker signal ( Figure 8D), as did the periclinal endodermal cell walls, but not the anticlinal ones (arrow in Figure 8E).
The JIM7 antibody, which can bind to partly demethyl-esterified HGs that display a high degree of methyl esterification, showed a lower fluorescent signal in the root sections of the salinity treatment in comparison to the control ( Figure 8F-H). In the salt-treated samples, deformations in the shape of cortical cells appeared ( Figure 8G,H), as well as large intercellular spaces (arrows in Figure 8G,H).
JIM5-HG epitopes were detected throughout almost the entire salt-treated and control root sections ( Figure 8I-K). Demethyl-esterified homogalacturonans were detected in younger phloem cell walls closer to the cambium, as well as in the cortical cell walls ( Figure 8I). Demethyl-esterified HGs were not detected in the anticlinal endodermal cell walls (arrows in Figure 8I). Although JIM5 fluoresced evenly in the cortical cells of the control sample ( Figure 8J), in the salt-treated root samples, the cells displayed some acute signaling spots at the junction sites between two neighboring cells ( Figure 8K). LM18 immunolocalization resulted in similar patterns in control and salt-treated roots ( Figure 8L-Q). Cortical cell deformities were observed in salt-treated root cross-sections ( Figure 8M,P,Q) compared to control ones ( Figure 8L,O). The vascular cylinder phloem cells exhibited fluorescent signals in control and salt-treated roots ( Figure 8N,Q).
Cell Morphology and AGP and HG Immunolocalization in Lefkolia Roots
The control cortical cells of Lefkolia roots are similar in shape, arrangement (Figure 9A-C) and size distribution (2b and 3c in Figure 9B,C) to those of Koroneiki. The salt-treated cortical cells were compactly arranged, but their shape was irregular and variable, while intercellular spaces also varied in size (Figure 9D-F). The salt-treated Lefkolia endodermal cells were distinct from the cortex and pericycle (3c in Figure 9F), because they were regularly arranged in homocentric circles, unlike in Koroneiki. The xylem and phloem elements were easily recognizable (3a and 3b in Figure 9F). Although salt-treated Lefkolia roots exhibited diversity in cortical cell morphology, no cell wall deformations were observed, while the cell shape was more regular than in Koroneiki. Moreover, the intercellular spaces that formed were larger in Koroneiki. Overall, the salt-induced cell morphology changes in Lefkolia were minor, indicating a better adaptation capacity.
Aniline blue revealed a slight thickening of control exodermal cells (Figure 9G), while cells of the endodermis fluoresced at their anticlinal cell walls (Figure 9G). In the salt-treated samples, thickening occurred not only in the cell walls of the exodermis but also in 2-3 rows of cortical cells below the exodermis (2b in Figure 9H). The anticlinal cell walls of the endodermis of the salt-treated samples were intensely thickened (2c in Figure 9I).
The JIM13-bound AGPs displayed a strong fluorescence signal only in the rhizodermis of the control roots (Figure 9J) but not in salt-treated roots (Figure 9K). Intense AGP signaling was detected in salt-treated cortical cells (Figure 9L). A strong AGP signal was observed in the control xylem but not in the phloem (Figure 9J), while an AGP signal was detected in both the salt-treated root phloem and xylem (Figure 9L).
The control root exhibited an intense LM20-HG epitope signal in phloem cell walls (Figure 10A), while, in salt-treated roots, the signal was detected mostly in the sclerenchymatic fibers (Figure 10B). Fully methyl-esterified homogalacturonans were present throughout the cortex in control and salt-treated roots (Figures 10A and 10B, respectively). No signal was observed in the endodermal anticlinal cell walls of the control and salt-treated samples (arrows in Figures 10C and 10D, respectively). The absence of fully methyl-esterified HGs at the anticlinal cell walls of the endodermis is consistent with the endodermis of Koroneiki, indicating the presence of the Casparian strip at the anticlinal cell walls (arrows in Figures 10C and 10D, respectively).
JIM7-HG epitopes were detected in the entire section of control and salt-treated roots (Figure 10E,F). JIM7-HG epitopes were not detected in the cell walls of the rhizodermis in either control or salt-treated samples (Figures 10E and 10F, respectively). The cortical and endodermal cells of the salt-treated samples fluoresced intensely (Figure 10H), while the vascular cylinders of both control and salt-treated samples displayed a plethora of demethyl-esterified HGs with a high degree of methyl esterification (Figure 10G,H). Signals were detected in the sclerenchymatic fibers above the phloem and in young phloem cells near the cambium (Figure 10H). Partially methyl-esterified homogalacturonans were also observed in the anticlinal cell walls of the endodermis under saline conditions (arrows in Figure 10H).

LM19- and LM18-HG epitopes were detected in control and salt-treated roots (Figure 10I-T). An unesterified homogalacturonan signal was present in the cells of the rhizodermis of the control sample but not in the salt-treated roots of Lefkolia (Figure 10I,J). Unesterified homogalacturonans were observed in the entire cortex as well as in the vascular cylinder (Figure 10I-N). LM19 epitopes were not present at the anticlinal cell walls of the control and salt-treated endodermis (arrows in Figure 10K,L). Intense signals were also observed in the sclerenchymatic fibers of the phloem in the vascular cylinder in control and salt-treated samples (arrows in Figure 10M,N), whereas younger phloem cells near the cambium showed weaker fluorescence (Figure 10M,N). Partially methyl-esterified homogalacturonans were detected throughout the entire control root structure (Figure 10O). Strong signals of partially methyl-esterified homogalacturonans, detected by the LM18 antibody, were exhibited in the phloem of control and salt-treated samples (Figure 10O,P), while no LM18 signal was detected at the cell walls of exodermal cells (Figure 10O,P). Demethyl-esterified HGs detected by the LM18 antibody were present in the cells of the rhizodermis of the control sample but not in the salt-treated roots of Lefkolia (Figure 10O,P). The lack of an LM18-HG signal in the anticlinal cell walls of the endodermis was observed again (arrows in Figure 10Q,R,S), indicating the presence of a modified cell wall at the Casparian strip site. In the salt-treated samples, no signal of the LM18-HG epitope was detected at the anticlinal cell walls of the exodermis (arrows in Figure 10T).
Cell Morphology and AGP and HG Immunolocalization in Arvanitolia Roots
The control and salt-treated cortical cells were uniformly arranged around the root vascular cylinder (Figure 11A-F). The intercellular spaces were equally distributed between cortical cells, while the endodermal cells (3c in Figure 11C,F) surrounded the vascular cylinder, where xylem rays and phloem cells are visible (3a and 3b in Figure 11C,F). Salt treatment did not impact either the cell morphology or the root structure. No large intercellular spaces, irregular cell walls or cell wall deformities were observed. The cortical cell size and shape were regular and similar to those of the control. Interestingly, the salt-treated rhizodermal cell shape was irregular and different from that observed in the control sample (compare 1 in Figure 11D to 1 in Figure 11B).
In the control roots of Arvanitolia, staining with aniline blue revealed a strong signal at the xylem of the central cylinder (Figure 11G), while the phloem did not fluoresce. Intense signals were observed at the anticlinal and, in some cells, periclinal cell walls of the endodermis (arrows in Figure 11G,H). In the control samples, Arvanitolia exhibited strongly thickened exodermal cell walls of the cortex (Figure 11I), while in salt-treated samples, not only the exodermal cells but also 2-3 layers of cortical cells were thickened, as was observed in Lefkolia samples (compare Figure 11J to Figure 9H).
The AGP-bound epitopes exhibited a strong signal throughout the entire control root ( Figure 11K). Intense signals were detected in the rhizodermis and the entire root vascular cylinder (phloem and xylem) and in cortical cells ( Figure 11K). Under saline conditions, the overall fluorescent signal displayed a pattern of distribution that was similar to that in the control roots ( Figure 11L).
In the control roots of Arvanitolia, fully methyl-esterified LM20-HG epitopes were detected in the cells of the cortex and vascular cylinder of the root (Figure 12A,C). A weaker signal was observed in the entire root structure under saline conditions (Figure 12B,D). The cortex showed an overall lower fluorescent signal (Figure 12B), while the salt-treated sample exhibited a weak signal in the rhizodermis and the exodermis and an even weaker signal in cortical cells (compare Figure 12B to Figure 12A). As far as the JIM7-HG epitope is concerned, intense signaling was detected throughout the entire root structure. The rhizodermis and cortical cells displayed intense fluorescent signals in both the control and salt-treated samples (Figure 12E,F), while the anticlinal cell walls of the exodermis lacked JIM7-HG epitopes (arrows in Figure 12E,F). However, the endodermal anticlinal cell walls displayed a fluorescent signal (Figure 12E,F), as observed in Lefkolia roots too. LM19-HG epitopes showed similar signal intensities in control and salt-treated roots of the Arvanitolia variety. The signal was hardly detected in the cells of the exodermis in the control and the salt-treated samples (arrows in Figure 12G,H), whereas phloem and xylem elements displayed weak fluorescent signals under salt treatment (Figure 12H). The control, though, demonstrated signals in the rhizodermis and phloem (Figure 12G).
JIM5-HG epitopes showed intense fluorescent signals in the cells of the rhizodermis of control and salt-treated roots ( Figure 12I,J). In the cortical cells of the control root samples, the JIM5-HG epitope was evenly distributed at the cell walls, while in the salt-treated root samples, the JIM5-HG epitope was specifically present at the junction sites of the intercellular spaces between adjacent cells (compare Figure 12J to Figure 12I). The absence of partially methyl-esterified homogalacturonans was observed both in the anticlinal cell walls of the exodermis (arrows in Figure 12I,J) and in the endodermis of the control and salt-treated root samples (asterisks in Figure 12I,J). Weak fluorescent signals in phloem and xylem elements were detected in control and salt-treated roots ( Figure 12I,J).
The cell morphology and root structure were the least affected by salt treatment, primarily in Arvanitolia and then in Lefkolia, suggesting that tolerance to salinity is associated with root morphology. Moreover, Koroneiki, a cultivar known to be sensitive to salt stress, was severely affected not only in its root structure but also in its cell wall composition.
Discussion
Two-year-old olive trees of four cultivars were grown in containers 90 cm in height in order to mimic the growth of the rooting system in conditions similar to those in the field. The root length of a representative Koroneiki tree was 86.7 cm, which is similar to the average root length of an olive tree growing in the orchard. In this way, the investigation of molecular adaptation under NaCl stress provided a better estimation of the cultivar response capacity in the field.
The effect of salinity on olive trees has been thoroughly investigated and has been found to result mainly in plant growth inhibition [16,17]. The morphological characterization of the four cultivars suggested phenotypic differences in the upper part of the tree, such as significant reductions in leaf area, in accordance with previous reports [15,17]. Lefkolia showed minor symptoms and a leaf area identical to that of control trees, indicating high tolerance to salinity. Leaf tip necrosis and leaf drop were observed in Koroneiki, while Gaidourelia showed a significant decrease in leaf height by the end of 90 days, an indication of sensitivity to salinity, in accordance with previous reports [14,17]. In the salt-sensitive Koroneiki, chlorophyll a and total chlorophyll decreased under salt stress, while in Arvanitolia, a salt-tolerant cultivar, chlorophyll contents increased, including chlorophyll b. The salt-induced alterations in leaf chlorophyll might be attributed to the conversion of Chlb to Chla, thus resulting in higher Chla [71].
Root endodermal cells usually do not progress beyond the primary stage of Casparian strip formation during endodermal development. Although the endodermis remains in the primary stage in most eudicots, salinity accelerates endodermal development, and the formation of suberin lamellae frequently extends closer to the root tip. The cell walls of Koroneiki root endodermal cells are thickened and clearly distinguished by forming a fluorescent cylinder surrounding the vascular tissue. In Arvanitolia and Lefkolia, the acceleration of endodermal development and the formation of thickened exodermal cells were observed. Arvanitolia and Lefkolia exhibited 1-2 and 4-5 rows of cortical cells with thickened cell walls, respectively. Moreover, Lefkolia roots exhibited diversity in cortical cell morphology under saline conditions but no deformations in their shape or cell walls after 45 days of salinity. The intercellular spaces between root cortical cells in Lefkolia varied in size but were not as wide as in Koroneiki under saline conditions. Overall, significant differences were observed between control and salt-treated root sections in Lefkolia.
Aerenchyma formation was observed in the Koroneiki root cortex, but not in Lefkolia or Arvanitolia, after 45 days of salinity. Similar observations were reported in soybean plants and wheat seedlings [72,73]. Combined salinity and waterlogging stress resulted in cortical aerenchyma development, which improved salt resistance and sodium ion exclusion [74], while soybean plants exhibited lysigenous aerenchyma formation, particularly at high NaCl concentrations, for air space creation and the prevention of toxic ion uptake [72].
The cortical cells in contact with the exodermis fluoresced after aniline blue staining in Lefkolia and Arvanitolia. Salinity induced the deposition of callose in cortical cell walls in sorghum [75] and increased the callose-degrading enzymes in barley roots [76]. Osmotic stress is known to induce callose deposition in root cell walls [77]. The aniline blue staining of root exodermal cells is similar to results in salt-treated cotton seedling roots [78]. Moreover, salinity increased the deposition of suberin and lignin not only in endodermal but also in exodermal cells, converting them into cells impermeable to water [79].
The physiological significance of P4Hs under abiotic stress has not been thoroughly investigated, despite their demonstrated involvement in several growth and developmental programs [44,45,80,81]. Under saline conditions, the majority of olive P4Hs were downregulated in leaves in tolerant Arvanitolia and Lefkolia (Figure 4), while, in the two sensitive cultivars, Koroneiki and Gaidourelia, they were mainly upregulated after 90 days, indicating a putative inverse relation between high P4H expression levels and tolerance. Moreover, Arvanitolia showed minimal down-or upregulation in roots and leaves, suggesting that stable P4H expression might indicate higher tolerance. Overall, no distinct patterns of P4H expression were observed in roots among the four cultivars. All cultivars, except Koroneiki, showed an expression peak for OeP4H1 after 45 days of stress, while OeP4H2 increased in all four cultivars. Interestingly, OeP4H7 was downregulated in all cultivars, with the exception of Gaidourelia, in which it was upregulated throughout the time course. OeP4H1, OeP4H6, OeP4H9 and OeP4H10-like showed opposite expression patterns in the leaves of salt-tolerant compared to salt-sensitive cultivars. Downregulation in salt-tolerant plants and upregulation in salt-sensitive ones were observed, indicating involvement in cultivar resilience.
In tolerant cultivars, stable AGP expression was detected in roots, while in leaves, the upregulation of both AGPs was observed. In the roots of Populus (Populus tremuloides), the expression of 18 FLAs was detected under saline conditions, while 6 of them were significantly induced [82]. Moreover, among rice AGPs, only one was induced by salt in 7-day-old seedlings, while three and seven AGPs were significantly up-and downregulated by both salt and drought, respectively [83]. This is in agreement with the only two olive AGPs that showed detectable expression under saline conditions ( Figure 5). In addition, in tomato roots, the expression of five AGPs was strongly repressed under saline conditions [84]. A reduction in AGP epitopes in the cytoplasm, plasma membrane and tonoplast in tobacco BY-2 cells under salt treatment was detected due to the downregulation of 17 AGPs' transcripts, while AGP accumulation was observed in culture media, suggesting that AGPs might function as sodium carriers through vesicle trafficking [31]. However, the massive upregulation of AGPs was observed in tobacco BY-2 cells under NaCl stress [30].
Minor changes in leaf and root OeP4H expression were observed for Lefkolia and Arvanitolia under saline conditions, while sensitive Koroneiki and Gaidourelia exhibited upregulation in leaves and no specific expression trend in roots. These data suggest that the leaves and roots activated distinct mechanisms to respond to salinity. In Arvanitolia, minor changes were detected in the expression levels of OeAGPs and in the immunolocalization levels of JIM13-bound AGPs in roots under saline conditions. Moreover, the JIM13-bound fluorescent signal showed identical distribution patterns in control and salt-treated root sections after a 45-day treatment. These results indicate similar levels of AGP content in salt-treated and control plants, which might be associated with similar cortical cell size, shape and intercellular spaces. In Lefkolia, a stronger signal of JIM13-bound epitopes in root cortical cells under saline conditions was accompanied by an increase in OeAGP10-like expression in roots. However, the JIM13 epitopes showed a stronger signal in cortical cells but also a lower signal in rhizodermal cells under saline conditions. Therefore, the higher expression of OeAGP10-like in salt-treated roots might be attributed to the higher number of cortical cells that upregulated this mRNA compared to rhizodermal cells. In sensitive Koroneiki, the downregulation of both OeAGP4-like and OeAGP10-like after 45 days of salinity was accompanied by a lower signal of JIM13-bound epitopes. The fluorescent signal was weaker under saline conditions, mainly in the cortex, where the cells were arranged in a non-orderly way and their shape, size and intercellular spaces were variable. These data might indicate an association between AGP content and cortical cell structure and morphology, taking into consideration the known involvement of AGPs in cell size, enlargement and expansion [36,41].
In Arvanitolia, LM2 epitopes were induced in roots under saline conditions, while JIM13 epitopes were marginally downregulated in Gaidourelia roots under stress according to Western blot analysis. These results might indicate the induction of AGPs under saline conditions, particularly in tolerant cultivars, suggesting the association of AGPs with stress resilience. The immunolocalization of AGPs showed salinity-induced weak and strong signals in the cortices of Koroneiki and Lefkolia roots, respectively. In Arvanitolia under saline conditions, strong signals were detected in the root phloem and xylem. These data indicate that variation in AGP expression in the root cortex and stele is not consistent among cultivars. Due to this cell-type-dependent variation in AGP expression, Western-blot-based AGP content might not be considered comparable to immunodetection-based AGP levels in salinity and control roots.
Co-expression patterns of OeAGP10-like and OeP4H1 in the roots of Gaidourelia and Koroneiki and in the leaves of Lefkolia were observed. This co-expression pattern in three cultivars and two plant organs might suggest OeP4H1 specificity for the proline hydroxylation of OeAGP10-like. However, gene regulatory co-expression networks are required to identify co-expression patterns among olive P4Hs and substrate proteins such as AGPs, extensins and hormone peptides, which are known to be involved in abiotic stress responses [85].
Salinity causes the softening of the cell wall, and this decrease in stiffness was attributed to high Na+, which disrupts ionic interactions such as the egg-box structures of pectins, in which Ca2+ ions and de-esterified homogalacturonans are major structural components [86]. These are considered load-bearing structures, and their disruption by higher Na+ levels under salinity stress results in this softening of the cell wall. The sensing of the disruption of these structures was attributed to the FERONIA (FER) receptor, which in turn initiates Ca2+ transient induction to restore defects and maintain cell wall integrity under salinity stress [86].
Uronic acids in terminal positions in AGPs are considered essential for Ca2+ binding, while AGPs bind Ca2+ more strongly than pectins [38]. There is also a stoichiometric relation between Ca2+ and the carboxyl groups of AGPs, as previously demonstrated [38], while the arabinogalactan carbohydrate in AGPs was shown to bind and release Ca2+ at the cell surface [87]. Arabidopsis mutants of the arabinogalactan β-glucuronyltransferases, which add glucuronic acids to AGPs, exhibited a decreased capacity to bind Ca2+, indicating a role as a putative Ca2+ capacitor [87]. Moreover, it was suggested that structural variations in the AGP structure, such as the O-methylation of monosaccharides, might affect the binding preference between Ca2+ and Na+ ions [38].
The role of AGPs might be considered important since Ca2+ binding by the uronic acids of AGPs possibly regulates Ca2+ levels at the plasma membrane. In this context, the content and the subcellular localization of AGPs in olive roots might regulate Ca2+ fluxes under NaCl stress and, as a result, the response and tolerance to salinity. Salinity prevented symplastic xylem loading of Ca2+ in the roots, resulting in the growth inhibition of leaves sensitive to salt [79]. The leaf acquires its final size due to cell division and cell elongation. Cell division, which regulates leaf initiation, was not affected by salt stress in sugar beet, but leaf expansion was a salt-sensitive process [88], depending on the Ca2+ status [89].
Alternatively, AGPs might function as carriers of Na+ and, after binding to the carboxyl groups of demethyl-esterified pectins, might be excluded from the vacuoles through vesicles as a mechanism to maintain Na+ contents at low levels within the cells [31]. The accumulation of AGPs in the xylem sap of Brassica has also been observed under salinity stress [33], which might suggest a role in Na+ ion translocation to the upper part of the plant.
Demethyl esterification of homogalacturonans (HGs) is important for pectin remodeling, which is needed for cell elongation [90]. Previous studies suggested that low pectin levels might result in the inhibition of root growth under saline conditions [91], while the abundance of methyl-esterified HGs slightly decreased in olive root cortical cell walls under saline conditions.
In Koroneiki, decreases in the signal intensity of AGPs and methyl-esterified and demethyl-esterified HGs were observed under saline conditions, while in salt-stress-tolerant Lefkolia and Arvanitolia, the signal strength was similar to the control in salt-treated roots, with the exception of AGP upregulation in leaves. These results indicate an association between tolerance to salinity and the levels of AGPs and demethyl-and methyl-esterified HGs in roots. Recently, it was demonstrated that Rhamnogalacturonans-I (RGIs) were covalently linked to Arabinogalactan proteins (AGPs) in cultured Arabidopsis cell walls, indicating the strong interaction of pectins with AGPs [92]. Moreover, AGPs are considered to be part of tight complexes with cell wall polysaccharides through covalent and noncovalent linkages [93].
Moreover, an Arabidopsis FLA16 mutant, which showed a short-stem phenotype, exhibited lower levels of cellulase synthase gene expression, indicating the involvement of FLAs in cell wall biosynthesis and possibly the repair of cell wall defects due to salinity [94]. The Arabidopsis sos5/fla4 mutant showed swollen root tip cells under salt stress due to cell expansion defects [34]. Alterations in the cell wall and middle lamellae were also observed, while a synergistic role with ABA as an ABA signaling component was suggested [95]. In addition, Arabidopsis mutants of two AGP-specific galactosyltransferases (GALT2 and GALT4) and two cell-wall-associated leucine-rich repeat receptor-like kinase (FEI kinase) mutants showed phenotypes similar to sos5/fla4, indicating participation in the same signaling pathway [96].
In this context, the lower AGP content in the root cortex of Koroneiki might be involved in its higher sensitivity to NaCl stress. Moreover, specific gene expression patterns of P4Hs and AGPs were observed not only in roots but also in leaves, which might be related to the translocation of Na+ ions to the upper part of the tree in sensitive olive cultivars. Aerenchyma formation in the roots of Koroneiki under salinity treatment suggests that long-term salt stress might induce a hypoxia-related response in olive trees. The physiological significance of P4Hs and AGPs, as well as their involvement in signaling pathways under saline conditions, needs to be further investigated in olive trees.
Further research is required on the role of Ca2+ levels in the response of olive trees to salt stress, as well as studies on additional components of the salinity-sensing machinery, such as SOS5/FLAs, AGP-specific galactosyltransferases and FERONIA, in relation to the Na+ contents in roots and leaves and their putative translocation mechanisms.
Supplementary Materials: The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/cells12111466/s1. Figure S1: Olive trees of Koroneiki, Lefkolia, Arvanitolia and Gaidourelia; Figure S2: CLUSTAL multiple sequence alignment of putative Olea europaea P4H (OeP4H) amino acid residues, highlighting the catalytic domain with its distinct functional domains and ER signal; Figure S3: Tertiary structure of OeP4H1 protein; Figure S4: Western blot analysis of LM2- and JIM13-bound AGPs in roots of Arvanitolia, Lefkolia, Koroneiki and Gaidourelia olive cultivars under a salinity time course of the second biological replicate; Figure S5: Western blot analysis of LM2- and JIM13-bound AGPs in roots of Arvanitolia, Lefkolia, Koroneiki and Gaidourelia olive cultivars under a salinity time course of the third biological replicate; Table S1: Primary protein sequence analysis and protein domain prediction; Table S2: Protein sequence analysis and protein localization prediction; Table S3: Table of primers used for the qPCR analysis; Table S4: Table of
Recombinant Adiponectin Does Not Lower Plasma Glucose in Animal Models of Type 2 Diabetes
Aims/Hypothesis Several studies have shown that adiponectin can lower blood glucose in diabetic mice. The aim of this study was to establish an effective adiponectin production process and to evaluate the anti-diabetic potential of the different adiponectin forms in diabetic mice and sand rats. Methods Human high molecular weight, mouse low molecular weight and mouse plus human globular adiponectin forms were expressed and purified from mammalian cells or yeast. The purified protein was administered at 10–30 mg/kg i.p. b.i.d. to diabetic db/db mice for 2 weeks. Furthermore, high molecular weight human and globular mouse adiponectin batches were administered at 5–15 mg/kg i.p. b.i.d. to diabetic sand rats for 12 days. Results Surprisingly, none of our batches had any effect on blood glucose, HbA1c, plasma lipids or body weight in diabetic db/db mice or sand rats. In vitro biological, biochemical and biophysical data suggest that the protein was correctly folded and biologically active. Conclusions/Interpretation Recombinant adiponectin is ineffective at lowering blood glucose in diabetic db/db mice or sand rats.
Introduction
Adiponectin is a 30 kDa protein (244-247 amino acids, dependent on species) secreted exclusively by adipose tissue [1]. It is homologous to complement factor C1q and contains a C-terminal globular head and an N-terminal collagen-like domain [1]. At least 3 different oligomeric adiponectin forms can be found in plasma: trimers, hexamers and high molecular weight (HMW) adiponectin [2][3][4]. Hexamers consist of two trimers that are linked together by a single disulfide bridge [3] whereas HMW adiponectin predominantly consists of 6 trimers [5]. The crystal structure of the trimeric globular form resembles the structure of TNF-α [6] in spite of relatively low homology between the two proteins.
The plasma concentration of adiponectin is usually in the range of 1-20 mg/l, which is high for a hormone. The adiponectin plasma concentration correlates inversely with diabetes, obesity and cardiovascular disease (CVD) [7][8][9][10]. In contrast, adiponectin levels seem to be increased in chronic inflammatory conditions where the adipose tissue mass is not increased, e.g., systemic lupus erythematosus, rheumatoid arthritis, inflammatory bowel disease, and cystic fibrosis [11]. Although a low (or high, in the case of chronic inflammatory conditions) plasma adiponectin concentration appears to be a good marker for a number of pathophysiological conditions, the precise function and mechanism of action of adiponectin and its different oligomeric forms has proved elusive. Leptin-deficient ob/ob mice with transgenic overexpression of adiponectin in adipose tissue have normal levels of plasma glucose, insulin, non-esterified fatty acids and triglycerides even though the animals are significantly more obese than their non-transgenic ob/ob littermates [12]. Several groups have reported blood glucose lowering and/or insulin-sensitizing effects of recombinant full-length or globular adiponectin. The anti-hyperglycemic effect varies from study to study, probably because each study utilizes a unique combination of animal model (ob/ob, KKAy, FVB, C57BL/6J, streptozotocin-treated and high-fat-fed mice), adiponectin form (full-length and globular from mouse or man), expression host (E. coli, mammalian cells, P. pastoris) and dosing regimen [3,[13][14][15][16][17][18][19]. The reported effects of adiponectin injections or transgenic overexpression on body weight in mice are also variable, and anything from weight loss to weight gain has been reported [12,14,[19][20][21][22]. Furthermore, overexpression of adiponectin has been shown to reduce atherosclerosis in Apolipoprotein E-deficient mice [19,23].
On a cellular level, it has been demonstrated that adiponectin potentiates the inhibitory effect of insulin on glucose production from primary hepatocytes [13]. Moreover, adiponectin has been reported to stimulate glucose uptake in muscle and fat cells, as well as β-oxidation in muscle cells, in an AMP kinase-dependent manner [24][25][26]. Other cellular actions of adiponectin include anti-inflammatory effects on macrophages/monocytic cells [27][28][29] and endothelial cells [30,31].
The purpose of the current study was to develop an effective expression and purification method for recombinant adiponectin and to evaluate the anti-diabetic potential of the different adiponectin forms in animal models of type 2 diabetes.
The original signal peptides were replaced by the human CD33 signal peptide.
Expression
Stable cell lines were selected using methionine sulfoximine (MSX, Lonza Biologics). A cell line expressing high levels of high-molecular-weight adiponectin was chosen for protein production. HEK293-6E cells expressing murine adiponectin were cultured in FreeStyle™ medium (Gibco) for 5 days. Culture supernatants were harvested for protein purification.
Mouse globular adiponectin with an amino-terminal FLAG tag, DYKDDDDK (FLAG-mgAd), was cloned into the POT plasmid and expressed in the Saccharomyces cerevisiae strain ME1719 as previously described [32]. The coding sequence, corresponding to amino acids 109-247 of accession number Q60994 (SWISS-Prot Database), was synthesized by GENEART GmbH (Regensburg, Germany). An asparagine to glutamine substitution was introduced in position 233 to prevent N-glycosylation.
Purification
The culture supernatant from the CHOK1SV cells was diluted with 1 mM CaCl2 in buffer A (20 mM Tris-HCl, pH 7.5). The protein was captured on a Q Sepharose FF column (GE Healthcare, Uppsala, Sweden) and eluted with 400 mM NaCl, 1 mM CaCl2 in buffer A. Peak fractions were pooled and further purified on a Source 15Q (GE Healthcare) column using a gradient of NaCl in 1 mM CaCl2/buffer A. The HMW form of adiponectin was isolated on a Superdex 200 HR 26/60 using 5 mM CaCl2, 150 mM NaCl in buffer A as eluent.
The culture supernatant of HEK293-6E cells expressing murine adiponectin was adjusted to 1 M ammonium sulphate in buffer A. The protein was captured on a Phenyl Sepharose Fast Flow column (GE Healthcare) and eluted with buffer A. Peak fractions were pooled and diluted 5-fold with buffer A. The diluted pool was further purified on a Q Sepharose HP (GE Healthcare) column using a gradient of NaCl in buffer A. The wild-type protein was further purified on an SP Sepharose HP column (GE Healthcare) using a gradient of NaCl in 10 mM citric acid pH 4.0.
The supernatant of S. cerevisiae expressing FLAG-mgAd was adjusted to pH 7.5, applied to a Q Sepharose FF column and eluted with a gradient of NaCl in 1 mM CaCl2/buffer A. Peak fractions were pooled and further purified by Superdex 200 gel filtration.
P. pastoris supernatant containing hgAd was desalted, adjusted to 1 M ammonium sulphate in buffer A, loaded onto a Butyl Sepharose FF column (GE Healthcare) and eluted with buffer A. The pool was desalted against the same buffer and further purified on a Source 30Q column (GE Healthcare). The product was eluted by a gradient of NaCl in buffer A, concentrated by ultrafiltration and further purified by Superdex 200 gel filtration. Alternatively, the protein was purified as previously described [Liu et al. 2007], followed by gel filtration on a Superdex 75 HR 26/60 column (GE Healthcare) using 5 mM CaCl2, 150 mM NaCl in buffer A as eluent.
Endotoxin Assay
All preparations of adiponectin were tested for endotoxins by a kinetic turbidimetric assay using Limulus Amebocyte Lysate (Charles River Laboratories, Wilmington, MA, USA) and contained less than 5 endotoxin units/mg.
Analytical Size-exclusion Chromatography
Analytical SEC was carried out using a 300 × 7.8 mm BioSep-SEC-S3000 column (Phenomenex, Torrance, CA, USA). The column was equilibrated in 0.2 M Tris-HCl pH 7.8. Proteins were eluted isocratically at a flow rate of 1 ml/min. Eluted proteins were detected by UV absorption at 280 nm. A standard Waters Alliance 2695 separation unit equipped with a Waters 2487 dual UV absorbance detector (Waters, Milford, USA) was used for all separations.
Dynamic Light Scattering
Samples (4 mg/ml) were centrifuged at 13,000 × g for 10 minutes and 25 µl dispensed to a 384-well plate. The plate was centrifuged at 1000 × g and analysed in a DynaPro™ Plate Reader Plus (Wyatt Technology Corporation, Santa Barbara, CA, USA). Each well was measured 20 times with 10-second acquisitions at 25 °C. The hydrodynamic radius (Rh) of the sample was obtained from the measured translational diffusion coefficient using the Stokes-Einstein relation from cumulants analysis and presented as Rh,average. Analysis was performed with the DYNAMICS software from Wyatt Technology Corporation.
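For orientation, the conversion the analysis software performs here is the Stokes-Einstein relation itself. A minimal Python sketch, assuming water viscosity at 25 °C; the diffusion coefficient below is illustrative, not a measured value from this study:

```python
import math

def hydrodynamic_radius(D, T=298.15, eta=0.89e-3):
    """Stokes-Einstein: R_h = kT / (6 * pi * eta * D).

    D   : translational diffusion coefficient in m^2/s
    T   : temperature in kelvin (25 C, as in the plate-reader run)
    eta : solvent viscosity in Pa*s (~0.89 mPa*s for water at 25 C, assumed)
    Returns R_h in nanometres.
    """
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return k_B * T / (6 * math.pi * eta * D) * 1e9

# Illustrative value: D ~ 4.0e-11 m^2/s gives R_h ~ 6.1 nm
print(f"R_h = {hydrodynamic_radius(4.0e-11):.2f} nm")
```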
AF4-MALS
Asymmetric-Flow Field-Flow Fractionation Multi-Angle Light Scattering (AF4-MALS) was performed using an Agilent 1200 liquid chromatography system (Agilent Technologies, Palo Alto, CA). In addition, the system was equipped with an Eclipse 3 Separation System, a DAWN HELEOS MALS detector and an Optilab rEX refractive index detector (Wyatt Technology Corporation). A Short Channel with a spacer height (wide) of 350 µm and polyethersulfone (PES) membranes with a cut-off of 10 kDa was used. The channel flow rate was set to 1 ml/min and the cross flow rate was constant at 1.3 or 2 ml/min. 20 mM Tris-HCl pH 7.4 including 100 mM NaCl and 1 mM CaCl2 was used as buffer. Weight-average molar mass (Mw) was determined using the ASTRA V software (Wyatt Technology) with a refractive index increment (dn/dc) of 0.185 ml/g.
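The weight-average molar mass reported by such software is defined slice-by-slice across the eluting peak, Mw = Σ(ciMi)/Σ(ci). A small sketch of that average with made-up slice data (the concentrations and masses below are illustrative only):

```python
def weight_average_molar_mass(slices):
    """Mw = sum(c_i * M_i) / sum(c_i) over eluting slices.

    slices: iterable of (c_i, M_i) pairs, where c_i is the slice
    concentration (from the RI detector) and M_i the slice molar
    mass (from the MALS detector).
    """
    numerator = sum(c * M for c, M in slices)
    denominator = sum(c for c, _ in slices)
    return numerator / denominator

# Made-up slice data (mg/ml, kDa) for illustration only
peak = [(0.02, 430.0), (0.10, 450.0), (0.05, 460.0)]
print(f"Mw ~ {weight_average_molar_mass(peak):.0f} kDa")
```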
1D-SDS-PAGE
Samples for 1D-SDS-PAGE were diluted and mixed with SDS sample buffer without heating. A 4-12% Bis-Tris gel with MES running buffer and a 3-7% Tris-Acetate gel with Tris-Acetate running buffer (Novex, Invitrogen, Carlsbad, USA) were used for separation. After electrophoresis, the gels were stained with Instant Blue (Expedeon, Harston, UK) according to the manufacturer's instructions.
AMPK Activity in Primary Hepatocytes
Hepatocytes were isolated from anaesthetized male Sprague Dawley rats using a two-step collagenase perfusion technique. The cells were plated onto collagen-coated plates in basal medium (Medium 199, 5.5 mM glucose, 0.1 µM dexamethasone, 100 units/ml penicillin, 100 µg/ml streptomycin, 2 mM L-glutamine) with the addition of 1 nM insulin and 4% FCS. The medium was replaced with basal medium containing 1 nM insulin and 4% FCS 1-2 hours after initial plating in order to remove dead cells. Cells were stimulated the following day in basal medium with or without adiponectin for 60 minutes and lysed by adding ice-cold cell extraction buffer (Biosource) and incubating for 30 min on ice. AMPK activity was measured as phosphorylated ACC by ELISA. In brief, lysate was added to streptavidin-coated 96-well plates (Pierce) and incubated overnight at 4 °C. Plates were washed (TBS-T), anti-pS79-ACC 1:3000 in TBS (Upstate) was added, and the plates were incubated for 3 hours at room temperature. The plates were then washed (TBS-T), anti-rabbit-IgG-HRP 1:3000 in TBS (ImmunoPure cat# 31460) was added, and the plates were incubated for 1 hour at room temperature. HRP activity was visualised using chromogen and stop buffer from Biosource, followed by detection in an ELISA reader.
MCP-1 Release from THP-1 Cells
THP-1 cells (ATCC) were cultured as described by the supplier. Cells were plated in 96-well plates (50,000 THP-1 cells/well) and stimulated with adiponectin (10 pM-1 µM) for two hours prior to a 24 h palmitate (100 µM) challenge. Medium was collected and human MCP-1 was detected using a LincoPlex kit (Millipore, Billerica, MA).
Animals
Male db/db mice (Taconic Europe) weighing from 45 to 50 g were randomized according to non-fasted blood glucose levels, glycated hemoglobin (HbA1c), and body weight. Mice were fed ad libitum (Altromin 1324 standard diet, Brogaarden, Denmark) and dosed from weeks 11-12. Male sand rats (Psammomys obesus) from Harlan Laboratories, Jerusalem, Israel had ad libitum access to a low-energy diet (3084-111507, 2.5 kcal/g, Harlan Teklad). They were fed a high-energy diet (HE diet, Formulab Diet 5008, 3.5 kcal/g, Lab Diet) for a test period of 10 days. Animals that responded to the HE diet with mean blood glucose >10 mM were fed the HE diet for an additional 3 weeks to induce stable diabetes. The diabetic animals were randomly assigned to three treatment groups.
All animals were housed at 23 °C under standard conditions in a 12:12 h light/darkness cycle. Principles of laboratory animal care were followed and study approval was obtained from the Animal Experiments Inspectorate, Danish Ministry of Justice (Dyreforsøgstilsynet, Slotsholmsgade 10, 1216 København K, Denmark).
In vivo Studies
The db/db mice were dosed intraperitoneally (ip) once (metformin) or twice (adiponectin or vehicle) daily at 10 a.m. and 8 p.m. for 14 days. HbA1c was measured on day 0, 6 and 13. Blood glucose was measured at 7 a.m., 10 a.m., 1 p.m., 3 p.m., 6 p.m., and 9 p.m. on day 0, 6 and 13. Plasma concentrations of free fatty acids and triglycerides were determined at the end of the study.
Nineteen-week-old diabetic sand rats were dosed i.p. twice daily, at 8 a.m. and 6 p.m., and received either vehicle, mgAd, or hAd. All sand rats treated with adiponectin were kept on the HE diet throughout the study. Body weight and blood glucose, taken in the unfasted state before the morning dose, were measured before the start of the study and four times during the dosing period. HbA1c was measured before the start and at the end of the study period.
Analysis of Samples from in vivo Studies
Blood glucose was measured in 10 µl whole blood samples taken from the tip of the tail by puncturing the capillary bed with a lancet, using a 10 µl heparinised capillary tube to sample the blood. The capillary tube was then shaken into 500 µl glucose/lactate system solution and measured in a Biosen autoanalyser (EKF Diagnostics GmbH, Magdeburg, Germany) according to the manufacturer's instructions.
HbA1c was measured in a 5 µl whole blood sample taken from the tip of the tail by puncturing the capillary bed with a lancet, using a heparinised capillary tube to sample the blood. The capillary tube was then shaken into 500 µl Hitachi hemolyzing reagent and measured in a Hitachi 912 autoanalyser (Roche A/S Diagnostics, Mannheim, Germany) according to the manufacturer's instructions.
Plasma triglycerides and free fatty acids were also measured using the Hitachi 912 autoanalyser according to the manufacturer's instructions.
Measurement of Recombinant Adiponectin in Mouse Plasma
Human adiponectin in mouse plasma was measured by ELISA. Taking advantage of the multimeric nature of adiponectin, the same monoclonal antibody, MAB1065 (R&D Systems), was used as both capture and detection antibody. MAB1065 is directed against the globular part of human adiponectin and does not cross-react with the murine counterpart.
Generation and Characterization of Recombinant Adiponectin Constructs
The recombinant adiponectin batches generated for this study include the carboxy-terminal globular domain, as well as low-molecular-weight (LMW) and high-molecular-weight (HMW) forms. Murine and human adiponectin forms were expressed in a variety of microbial and mammalian host systems. Table 1 provides an overview of the constructs that were tested in vivo.
All batches underwent extensive biochemical and biophysical characterization to ensure both purity and integrity of the samples after the purification procedure, but also to verify that the proteins behaved as expected and as previously described for similar constructs. The experimentally determined properties of the various forms are summarized in Figure 1 and Table 1. Full-length human adiponectin (hAd) was expressed especially well in mammalian CHOK1SV cells, and an expression level of several grams per litre of conditioned media was obtained. Purification yielded a homogeneous preparation of HMW adiponectin. Asymmetric-Flow Field-Flow Fractionation Multi-Angle Light Scattering (AF4-MALS) measurements showed a molecular mass of 448 kDa, which is in good correspondence with an 18-mer. Moreover, we could detect the previously described hydroxyprolyl and glycosylated hydroxylysyl modifications of the collagenous domain in our preparation.
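As a rough plausibility check of the 18-mer assignment (not part of the original analysis), dividing the measured mass by an assumed effective monomer mass of roughly 25 kDa gives a subunit count close to 18; the ~30 kDa figure quoted in the introduction spans species and glycoform variation, so the monomer mass used here is an assumption:

```python
MW_MEASURED_KDA = 448.0   # AF4-MALS result for the HMW preparation
MONOMER_KDA = 25.0        # assumed effective monomer mass (illustrative)

subunits = MW_MEASURED_KDA / MONOMER_KDA
print(f"~{subunits:.1f} subunits -> consistent with an 18-mer (6 trimers)")
```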
Full length murine adiponectin (mAd) was produced transiently in HEK293T cells. This expression system predominantly gave trimeric and hexameric mAd. As no size fractionation was applied, this mixture of oligomeric forms was preserved throughout purification. The C39S version of the same construct (C39S mAd) was expressed exclusively as a trimer.
Murine globular adiponectin (mgAd) was either produced by trypsin cleavage of mAd or C39S mAd, or expressed with an amino-terminal FLAG-tag in Saccharomyces cerevisiae (FLAG-mgAd, with a mutation of asparagine 233 to glutamine to prevent unwanted N-glycosylation). Moreover, we generated tagless human globular adiponectin (hgAd) by expression in Pichia pastoris, as previously described [17].
It should be noted that all of our preparations have been monitored for endotoxin content. The majority of batches had endotoxin levels well below 1 EU/mg and no batch was above 5 EU/mg.
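A quick worst-case arithmetic check (not from the paper) multiplies this batch ceiling by the highest dose used to bound the endotoxin exposure per injection:

```python
ENDOTOXIN_CEILING_EU_PER_MG = 5.0   # highest batch level reported; most were < 1 EU/mg
DOSE_MG_PER_KG = 30.0               # highest i.p. dose used in the db/db studies

worst_case = ENDOTOXIN_CEILING_EU_PER_MG * DOSE_MG_PER_KG
print(f"Worst-case endotoxin exposure: {worst_case:.0f} EU/kg per injection")
```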
In vitro Biological Evaluation
Full-length adiponectin has previously been shown to activate AMP kinase in hepatocytes in vitro [33]. To evaluate the biological activity of our recombinant adiponectin batches, we accordingly decided to test the effect on AMP kinase activity in primary rat hepatocytes. Surprisingly, none of our batches led to an increase in phosphorylated ACC in hepatocytes in vitro (data shown for purified HMW, trimeric and hexameric hAd in Figure 2). It should be mentioned that some low-purity adiponectin batches caused an increase in phosphorylated ACC in hepatocytes in vitro. However, this effect turned out to be due to contamination, e.g., with glycerol from certain spin filters (data not shown). Since our recombinant adiponectin batches have previously been shown to possess immuno-modulatory properties [27], we decided to test the effect of hAd on LPS- and palmitate-stimulated production of MCP-1 in THP-1 cells. Figure 3 shows a dose-dependent inhibition of MCP-1 production by two different batches of hAd.
In vivo Evaluation in db/db and ob/ob Mice
The pharmacokinetic properties of hAd were studied in detail in Sprague Dawley rats (Figure 4). The half-life of hAd was approximately 6 hours and the maximum plasma concentration (approximately 20 mg/l in the 13.4 mg/kg group) was reached after 5-6 hours. Plasma exposure in db/db mice was assessed by single time point measurements due to the limited amount of blood that can be drawn from a mouse. Plasma exposure 6 hours after a single i.p. injection of 30 mg/kg hAd was approximately 100 mg/l, and the plasma exposure of hgAd in db/db mice 3 hours after a single i.p. injection of 10 mg/kg hgAd was approximately 70 mg/l. The antibody employed for the measurement of hAd and hgAd was specific for human adiponectin and did not cross-react with endogenous rat or mouse adiponectin.
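Assuming simple first-order elimination (an assumption; the study reports only the ~6 h half-life), the half-life can be recovered from any two post-peak plasma samples. The time points and concentrations below are illustrative, not the study's raw data:

```python
import math

def elimination_half_life(t1, c1, t2, c2):
    """t_1/2 = ln(2) * (t2 - t1) / ln(c1 / c2) for first-order decay."""
    return math.log(2) * (t2 - t1) / math.log(c1 / c2)

# Illustrative post-peak samples: 20 mg/l at 6 h, 10 mg/l at 12 h
print(f"t1/2 = {elimination_half_life(6, 20.0, 12, 10.0):.1f} h")
```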
Initially, we tested the acute effect of full-length adiponectin produced in E. coli and mammalian cells (CHO and HEK293 cells) on blood glucose in diabetic ob/ob and db/db mice. Briefly, two batches of E. coli-produced adiponectin and four batches of mammalian-cell-produced adiponectin were injected i.p. at 30-40 mg/kg into diabetic mice (n = 9-13 per group). However, none of the six batches was capable of lowering blood glucose (data not shown). The adiponectin doses chosen for this experiment, as well as for the experiments mentioned below, were based on the highest doses employed in previously published studies [13,17].
The failure of the six adiponectin batches to reduce blood glucose acutely in diabetic ob/ob and db/db mice prompted us to conduct sub-chronic experiments in diabetic db/db mice. The data from one of these experiments are shown in Figure 5 as an example. As can be seen, hAd injected i.p. b.i.d. was unable to reduce blood glucose on day 0, 6 and 13. Moreover, hAd did not affect HbA1c or body weight. Metformin (positive control) was injected i.p. q.d. at 8 p.m. and lowered blood glucose significantly at day 0, 6 and 13. The lack of blood glucose measurements between 9 p.m. and 6 a.m. the following morning leads to an underestimation of the blood glucose lowering effect of metformin.
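One common way to condense such daytime profiles, although the study reports the individual time points rather than an integrated value, is a trapezoidal area under the curve; the glucose values below are made up for illustration, and the missing 9 p.m. to 7 a.m. window is exactly why such a summary, like the profile itself, underestimates overnight effects:

```python
def trapezoid_auc(times_h, glucose_mM):
    """Trapezoidal AUC (mM*h) over the measured daytime window."""
    auc = 0.0
    for i in range(1, len(times_h)):
        dt = times_h[i] - times_h[i - 1]
        auc += 0.5 * (glucose_mM[i] + glucose_mM[i - 1]) * dt
    return auc

# Sampling times 7 a.m.-9 p.m. as in the study; glucose values made up
times = [7, 10, 13, 15, 18, 21]
glucose = [22.0, 24.5, 23.0, 21.5, 24.0, 23.5]
print(f"AUC(7-21 h) = {trapezoid_auc(times, glucose):.0f} mM*h")
```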
The data from sub-chronic testing in diabetic db/db mice are summarized in Table 2. As can be seen from the table, neither C39S mAd, hAd, mAd, mouse globular adiponectin (mgAd), human globular adiponectin (hgAd), nor FLAG-tagged mouse globular adiponectin (FLAG-mgAd) had any effect on HbA1c, blood glucose on day 0, 6 and 13, body weight, plasma triglycerides (TG) or plasma free fatty acids (FFA) after 2 weeks of i.p. dosing.
In vivo Evaluation in Sand Rats
The hyperphagic and hyperglycaemic characteristics of both the ob/ob and db/db mouse models are caused by a deficient leptin system (i.e., a lack of leptin or the leptin receptor). To test whether the lack of a blood glucose lowering effect of adiponectin was specific for db/db (and ob/ob) mice and potentially dependent on a normal regulation of leptin, we decided to test two adiponectin batches in diabetic Psammomys obesus (sand rats). As shown in Table 3, neither mouse globular adiponectin (mgAd) nor wild-type human adiponectin (hAd) had any effect on HbA1c, blood glucose on day 0 and 12, or body weight after 13 days of i.p. dosing.
Discussion
In the present study, we show that high expression levels (>1 g/l) of secreted soluble full-length adiponectin can be obtained when CHOK1SV cells are used as an expression host. The CHOK1SV cells are capable of producing all three adiponectin forms (trimeric, hexameric and HMW adiponectin), but the relative production of each form varies from clone to clone. Transiently transfected HEK293 cells predominantly produce trimeric and hexameric adiponectin. Our extensive biochemical and biophysical characterization has revealed that the batches used in this study were homogeneous, of the expected size, both in terms of the single amino acid chains and the degree of oligomerization, and contained the previously described post-translational modifications.
[Figure 5 caption: Blood glucose profiles were taken at day 0 (A), day 6 (B) and day 13 (C) of the study at 7 a.m., 10 a.m., 1 p.m., 3 p.m., 6 p.m., and 9 p.m. HbA1c (D) was measured on the same days and additionally 7 and 14 days before the start of dosing. Body weight (E) was monitored daily. The data are representative of all studies performed with db/db mice; the data for all other forms of adiponectin are summarized in Table 2.]
We also show that CHO-cell-produced human adiponectin (hAd) is functional in vitro, as it dose-dependently inhibits palmitate-stimulated MCP-1 release in THP-1 cells. Moreover, our hAd has previously been shown to promote an anti-inflammatory phenotype in human and mouse macrophages [27]. Furthermore, it should be mentioned that another lab has recently observed convincing immuno-modulatory properties of our recombinant adiponectin (Parth Narendran, personal communication). Taken together, these findings suggest that the described in vivo studies were carried out with recombinant adiponectin that was biochemically intact, correctly folded, and biologically active. Even though the literature suggests that full-length adiponectin can stimulate AMP kinase activity in hepatocytes [33], we did not see any effect of our full-length batches. However, it is our experience that contaminating factors (e.g., glycerol) in protein preparations can easily increase the level of pACC in cells, which can make AMP kinase activity a slightly tricky endpoint for the assessment of cellular effects.
Collectively, we have conducted six acute experiments in diabetic db/db or ob/ob mice and six two-week sub-chronic experiments in diabetic db/db mice. The animals were dosed with mouse or human full-length or globular adiponectin batches. A proportion of the full-length batches consisted primarily of the trimeric and/or hexameric form (mAd and C39S mAd), whereas other batches consisted mainly of HMW adiponectin (hAd). Plasma exposure of human full-length adiponectin reached 5-10 times the level reported for healthy mice. However, we failed to detect any blood glucose lowering effect in any of the experiments. This observation leads us to surmise that exogenously administered recombinant adiponectin is ineffective in lowering plasma glucose in leptin-deficient mouse models of type 2 diabetes. This conclusion is supported in part by a recent study in which adenovirus-mediated overexpression of adiponectin was relatively ineffective in improving glucose clearance in ob/ob mice in spite of a threefold increase in the plasma adiponectin level [34]. In contrast, two earlier studies have shown an anti-diabetic effect of adiponectin in ob/ob mice. In the first of these, full-length mouse adiponectin produced in HEK293 cells lowered blood glucose acutely [13]. However, only four animals per treatment group were used in that study. From our experience, it is difficult to reach a definitive conclusion in this model when such a small number of mice per group is employed. In the second ob/ob study, transgenic overexpression of adiponectin in adipose tissue prevented the development of diabetes and hyperlipidemia [12]. However, this was a prevention study in which animals were exposed to a high adiponectin level from birth, while our findings are based on treatment of animals with established diabetes. Accordingly, the conclusion drawn from the latter study does not necessarily conflict with our findings. It is also possible that adiponectin is required to be processed by adipocytes, in a currently unknown manner, in order for it to be metabolically active.
To test the anti-diabetic effect of adiponectin in a non-leptin deficient model, we decided to dose adiponectin i.p. b.i.d. to sand rats. Once more, we failed to detect any blood glucose lowering effect.
Thus, we conclude that our recombinant adiponectin preparations are ineffective in lowering blood glucose in animal models of type 2 diabetes. This conclusion is supported to some extent by the fact that numerous pharmaceutical and biotech companies (e.g., Protemix, Merck KGaA (Merck Serono), Maxygen) that have worked on adiponectin over the past decade have been unable to progress their research projects beyond the pre-clinical stages.
Finally, a note of caution should be made with regard to endotoxin contamination of protein preparations that are injected into diabetic animals. Examination of the literature reveals that approximately half of the published studies in which adiponectin was administered to diabetic rodents fail to mention whether or not endotoxin levels were monitored, or whether measures were taken to avoid or remove endotoxin contamination. At the same time, endotoxins have been reported to lower blood glucose in a number of mouse strains [35]. From our experience, endotoxins are almost invariably introduced upon handling and purification of proteins if no specific measures are taken to prevent it. This is also true for samples from endotoxin-free expression systems such as mammalian cells and yeast. Common sources include bacterial contamination in columns, containers and purification equipment. It is crucial to use disposable material wherever possible and to sanitize all other equipment thoroughly, e.g., by use of concentrated base. It should also be noted that we have not been able to remove existing endotoxin contamination from preparations of adiponectin expressed in mammalian systems, possibly due to the affinity of lipopolysaccharides for the glycosylations. Accordingly, researchers in diabetes should be encouraged to report endotoxin levels when they purify proteins for the purpose of injection into animals.
[Table 3 note: All adiponectin batches as well as the vehicle were dosed twice daily (b.i.d.). All blood glucose measurements were performed in the morning (8 a.m.). Changes in blood glucose concentration and body weight (Δ) are given as the difference between day 0 and day 12.]
In this instance, it should be emphasized that all adiponectin batches employed in the current study had very low levels of endotoxins.
In conclusion, our work suggests that adiponectin is ineffective with regard to lowering blood glucose in mice and sand rats. This raises the question: what is the primary biological function of adiponectin? So far, the data from testing of our adiponectin batches by other labs has generated support in favour of the conclusion that adiponectin is an immunomodulatory molecule [27] (Parth Narendran, personal communication).
Arterial pulse wave velocity, inflammatory markers, pathological GH and IGF states, cardiovascular and cerebrovascular disease.
Blood pressure (BP) measurements provide information regarding risk factors associated with cardiovascular disease, but only in a specific artery. Arterial stiffness (AS) can be determined by measurement of arterial pulse wave velocity (APWV). Separate from any role as a surrogate marker, AS is an important determinant of pulse pressure, left ventricular function and coronary artery perfusion pressure. Proximal elastic arteries and peripheral muscular arteries respond differently to aging and to medication. Endogenous human growth hormone (hGH), secreted by the anterior pituitary, peaks during early adulthood, declining at 14% per decade. Levels of insulin-like growth factor-I (IGF-I) are at their peak during late adolescence and decline throughout adulthood, mirroring GH. Arterial endothelial dysfunction, an accepted cause of increased APWV in GH deficiency (GHD), is reversed by recombinant human (rh) GH therapy, favorably influencing the risk for atherogenesis. APWV, a noninvasive method for measuring atherosclerotic and hypertensive vascular changes, increases with age and atherosclerosis, leading to increased systolic blood pressure and increased left ventricular hypertrophy. Aerobic exercise training increases arterial compliance and reduces systolic blood pressure. Whole-body arterial compliance is lower in strength-trained individuals. Homocysteine and C-reactive protein are two inflammatory markers directly linked with arterial endothelial dysfunction. Reviews of GH in the somatopause have not been favorable, and side effects of treatment have marred its use except in classical GHD. Is it possible that we should be assessing the combined effects of therapy with rhGH and rhIGF-I? Only multiple intervention studies will provide the answer.
Introduction
Arterial distensibility and arterial compliance decrease with age and atherosclerosis, leading to increased systolic blood pressure (SBP) and a widening of the pulse pressure (PP), resulting in left ventricular hypertrophy, a risk factor for cardiovascular disease (CVD) (Laogun and Gosling 1982; Riley et al 1986).
Early changes in vascular properties precede the development of coronary artery disease (CAD) and hypertensive heart disease (Cohn et al 1995). Arterial pulse wave velocity (APWV) and compliance alterations occur early in the course of essential hypertension in relation to other CVD risk factors (Brinton et al 1996). Assessing the physical properties of the vasculature requires a measurement of the distending force (i.e., BP), using pressure transducers at relevant sites (Asmar et al 1995). The form of the pulse wave through the walls of the blood vessels is affected by the mechanical properties of the vessels. Pulse-wave augmentation results from reflections of the pulse wave at distal sites (such as bifurcations), such that the forward wave and reflected wave superimpose, producing the typical arterial pulse wave, which differs at different sites (O'Rourke 1999). The PWV and the amplitude of reflected waves are increased in stiffer arteries, and are reduced with increased arterial compliance (O'Rourke et al 2001). These features are dependent on a number of hemodynamic variables, including heart rate and PP (Wilkinson et al 2002a). Therefore, measurements of augmentation are generally corrected for PP and expressed as an "augmentation index" (AIx).
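A minimal sketch of the underlying arithmetic, taking AIx as the augmented pressure (systolic peak minus the inflection point marking arrival of the reflected wave) expressed as a percentage of pulse pressure; the pressures used are illustrative:

```python
def augmentation_index(p_systolic, p_inflection, p_diastolic):
    """AIx (%) = augmented pressure / pulse pressure * 100."""
    augmented = p_systolic - p_inflection
    pulse_pressure = p_systolic - p_diastolic
    return 100.0 * augmented / pulse_pressure

# Illustrative central pressures in mmHg
print(f"AIx = {augmentation_index(130.0, 118.0, 80.0):.0f} %")
```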
These measurements are also affected by vasoactive drugs (O'Rourke 1992; Kelly et al 2001; Wilkinson et al 2001a) and by insulin (Westerbacka et al 1999), and are sensitive to inhibition of nitric oxide synthase (Kinlay et al 2001; Stewart et al 2003). The application of these measurements in concert with vasodilators has been proposed as an alternative approach for the measurement of vascular function (Wilkinson et al 2002b).
Arterial pulse wave velocity
Measurement of arterial wave propagation as an index of vascular stiffness and vascular health dates back to the early part of the last century (Bramwell and Hill 1922).
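Bramwell and Hill's contribution was to tie wave speed directly to distensibility, PWV = 1/sqrt(ρD), where D = (ΔV/V)/ΔP; a sketch with an illustrative aortic distensibility value (not taken from any cited study):

```python
import math

def bramwell_hill_pwv(distensibility_per_Pa, rho=1050.0):
    """PWV = 1 / sqrt(rho * D), with D = (dV/V)/dP in 1/Pa and rho in kg/m^3."""
    return 1.0 / math.sqrt(rho * distensibility_per_Pa)

# Illustrative aortic distensibility ~3.5e-3 per mmHg, converted to per Pa
D = 3.5e-3 / 133.322
print(f"PWV = {bramwell_hill_pwv(D):.1f} m/s")
```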
The arrival of the pulse wave at two different arterial sites is timed, and from estimates of the length of blood vessels between these sites (obtained by measuring the distances at overlying skin sites), the velocity of this wave is calculated in m/s (O'Rourke 1999; Lehmann 1999; Wilkinson et al 1999). Subtleties come in correctly estimating the intervening distances. This provides a reliable measure of arterial stiffness: at a given blood pressure, the stiffer the vessel, the less time it takes for the pulse wave to travel the length of the vessel. The APWV, especially of the aorta, has emerged as an important independent predictor of cardiovascular events. APWV increases with stiffness and is defined by the Moens-Korteweg equation, PWV = √(Eh/2ρR), where E is Young's modulus of the arterial wall, h is wall thickness, R is arterial radius at the end of diastole, and ρ is blood density (Lehmann 1999).
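Evaluating the Moens-Korteweg relation with plausible aortic values shows how velocities of this magnitude arise; all inputs below are illustrative:

```python
import math

def moens_korteweg_pwv(E, h, R, rho=1050.0):
    """PWV = sqrt(E * h / (2 * rho * R)).

    E   : Young's modulus of the wall (Pa)
    h   : wall thickness (m)
    R   : end-diastolic radius (m)
    rho : blood density (kg/m^3)
    """
    return math.sqrt(E * h / (2 * rho * R))

# Illustrative aortic values: E = 0.5 MPa, h = 1.5 mm, R = 12 mm
print(f"PWV = {moens_korteweg_pwv(0.5e6, 1.5e-3, 12e-3):.1f} m/s")
```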
The time delay between the arrival of a predefined part of the pulse wave, such as the foot, at two points is obtained either by simultaneous measurement, or by gating to the peak of the R-wave of the ECG. The distance travelled by the pulse wave is measured over the body surface, and APWV is then calculated as distance divided by time (m/s); the result depends on anatomical variation. The abdominal aorta tends to become more tortuous with age, potentially leading to an underestimation of APWV (Wenn and Newman 1990). APWs can also be detected by using Doppler ultrasound (Sutton-Tyrrell et al 2001) or applanation tonometry (Wilkinson et al 1998b), where the pressure within a small micromanometer flattened against an artery equates to the pressure within the artery. Increases in distending pressure increase APWV (Bramwell and Hill 1922). Therefore, account should be taken of the level of BP in studies that use APWV as a marker of cardiovascular risk.
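The measurement itself reduces to surface-measured path length over transit time; with R-wave gating, the transit time is simply the difference between the two foot delays. A sketch with illustrative numbers:

```python
def foot_to_foot_pwv(path_length_m, t_proximal_s, t_distal_s):
    """PWV = surface-measured path length / pulse transit time.

    t_proximal_s, t_distal_s: arrival times of the wave foot at the
    two sites, e.g. gated to the ECG R-wave peak.
    """
    return path_length_m / (t_distal_s - t_proximal_s)

# Illustrative: 0.50 m carotid-femoral path, feet arriving at 80 ms and 145 ms
print(f"PWV = {foot_to_foot_pwv(0.50, 0.080, 0.145):.1f} m/s")
```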
An increase in heart rate (HR) of 40 beats per minute was shown to increase APWV by >1 m.s−1 (Lantelme et al 2002). Acute increases in HR also markedly lower arterial distensibility, occurring in both large and middle-size muscular arteries within the range of "normal" HR values (Giannattasio et al 2003). Raised APWV occurs with a range of established cardiovascular risk factors (Lehmann et al 1998) including age (Bramwell et al 1923; Vaitkevicius et al 1993), hypercholesterolemia (Lehmann et al 1992a), type II diabetes mellitus (DM) (Lehmann et al 1992b), and sedentary lifestyle (Vaitkevicius et al 1993).

In hypertension, carotid-femoral APWV is an independent predictor of both cardiovascular and all-cause mortality (Laurent et al 2001). The odds ratio for a 5 m.s−1 increment in PWV (a relatively large change) was 1.34 for all-cause mortality and 1.51 for cardiovascular mortality. In contrast, PP was independently related to all-cause mortality but only marginally related to cardiovascular mortality, indicating that specific assessment of arterial stiffness, with APWV, may be of greater value in the evaluation of risk. In that study, APWV ranged from 9 to 13 m.s−1, whereas recently quoted values of carotid-femoral PWV in healthy individuals with average ages of 24 to 62 years ranged from around 6 to 10 m.s−1 (O'Rourke et al 2002). In hypertensive subjects without a history of overt cardiovascular disease, APWV also predicts the occurrence of cardiovascular events independently of classic risk factors (Boutouyrie et al 2002). Aortic APWV >13 m.s−1 is a particularly strong predictor of cardiovascular mortality in hypertension (Blacher et al 1999a). Carotid-femoral APWV increases at a significantly faster rate in treated hypertensives than in normotensive controls, although where BP was well controlled APWV progression was attenuated (Benetos et al 2002).
Aortic APWV, assessed by using Doppler flow recordings, independently predicts mortality in patients with end-stage renal failure (ESRF), a population with a particularly high rate of cardiovascular disease (Blacher et al 1999b; Safar et al 2002). The benefit associated with BP control in ESRF, by the use of anti-hypertensives, was independently related to change in aortic APWV, such that a reduction in APWV of 1 m.s−1 was associated with a relative risk of 0.71 for all-cause mortality (Guerin et al 2001).
The mechanisms of arterial stiffness
Windkessel theory treats the circulation as a central elastic reservoir (the large arteries), into which the heart pumps, and from which blood travels to the tissues through relatively nonelastic conduits (Oliver and Webb 2003). The elasticity of the proximal large arteries is the result of the high elastin to collagen ratio in their walls, which progressively declines toward the periphery. The increase in arterial stiffness that occurs with age (Hallock and Benson 1937) is the result of progressive elastic fibre degeneration (Avolio et al 1998). The aorta and its major branches are large arteries, which can be differentiated from the more muscular conduit arteries, such as the brachial and radial arteries. The elasticity of a given arterial segment is not constant but instead depends on its distending pressure (Hallock and Benson 1937; Greenfield and Patel 1962). As distending pressure increases, there is greater recruitment of relatively inelastic collagen fibres and, consequently, a reduction in elasticity (Bank et al 1996). In addition to collagen and elastin, the endothelium (Wilkinson et al 2002a) and arterial wall smooth muscle bulk and tone (Bank et al 1999) also influence elasticity.

A number of genetic influences on arterial stiffness have also been identified. Polymorphic variation in the fibrillin-1 (Medley et al 2002), angiotensin II type-1 receptor, and endothelin receptor (Lajemi et al 2001a) genes is related to stiffness. The angiotensin-converting enzyme (ACE) I/D polymorphism has been associated with stiffness (Balkestein et al 2001), but not consistently (Lajemi et al 2001b).

As a consequence of differing elastic qualities and wave reflection, the shape of the arterial waveform varies throughout the arterial tree. In healthy, relatively young subjects, whereas mean arterial pressure (MAP) declines in the peripheral circulation, SBP and PP are amplified (Kroeker and Wood 1955). This amplification is exaggerated during exercise (Rowell et al 1968) but reduces with increasing age (Wilkinson et al 2001). APWV in the brachial artery has been shown to increase with age, indicative of worsening arterial compliance (Avolio et al 1983). However, brachial artery PWV (baPWV) has been reported to change less with age than APWV in the aorta or lower limb arteries. In contrast, the assessment of distensibility using APWV is indirect and potentially affected by changes in blood flow and MAP that reflect changes in more distal resistance vessels (Nichols and O'Rourke 1998). Atherosclerosis, hypertension, and diabetes produce macro- and micro-vascular changes that are reflected in vascular function and physical properties before the development of overt clinical disease (Berenson et al 1992).

Endothelial cells continuously release nitric oxide (NO), which is synthesized by the endothelial isoform of NO synthase (NOS-III) (Moncada et al 1991; Förstermann et al 1993). NOS-III activity is stimulated by chemical agonists such as serotonin, acetylcholine (ACh), and bradykinin (Furchgott 1984; Cocks et al 1985; Newby and Henderson 1990) and by flow-induced shear forces on the endothelial cell wall (Pohl et al 1986; Busse and Pohl 1993; Meredith et al 1996). Endothelium-derived NO diffuses toward the underlying vascular smooth muscle, producing relaxation.
The vascular endothelium, therefore, plays a central role in the modulation of arterial smooth muscle tone and thus influences large-artery distensibility and the mechanical performance of the cardiovascular system (Ramsey et al 1995). By improving arterial elasticity, endothelium-derived NO reduces arterial wave reflection and reduces left ventricular work and the pulse pressure within the aorta. In addition, exogenous acetylcholine and glyceryl trinitrate (GTN) both increase arterial distensibility, the former mainly through NO production. This may help explain why conditions that exhibit endothelial dysfunction are also associated with increased arterial stiffness. Therefore, reversal of endothelial dysfunction, or drugs that are large-artery vasorelaxants, may be effective in reducing large-artery stiffness in humans, and thus cardiovascular risk. GTN reduces brachial artery stiffness and decreases wave reflection (Yaginuma et al 1986). Moreover, drugs that stimulate endothelial NO release, such as ACh, also reduce muscular artery stiffness in vivo (Ramsey et al 1995; Joannides et al 1997).
Specific studies in exercise
Changes in APWV in dynamic exercise in normal individuals may provide a reference point against which to compare the effect of disease states, and the basis for evaluation of their component mechanisms (Naka et al 2003). Aerobic exercise training increases arterial compliance and reduces SBP, and in cross-sectional studies, aerobically trained athletes have a higher arterial compliance than sedentary individuals (Mohiaddin et al 1989; Kingwell et al 1995; Vaitkevicius et al 1993). Hammer throwers recorded significantly higher compliance in the radial artery of the dominant arm relative to both the contra-lateral arm and to an inactive control group (Giannattasio et al 1992). This could be explained by the dynamic nature of the action.
A single bout of cycling exercise increased whole body arterial compliance by mechanisms suggesting vasodilation. In exercising muscles, factors including local increases in temperature, carbon dioxide, acidity, adenosine, NO, and magnesium and potassium ions may all contribute to local vasodilation (Kingwell et al 1997a). Four weeks of cycle training in sedentary individuals significantly increased forearm blood flow and blood viscosity, suggesting an increased basal production of nitric oxide from the forearm. The elevated shear stress in this vascular bed may contribute to endothelial adaptation (Kingwell et al 1997b).

Both the proximal aorta and the leg arteries were significantly stiffer and contributed to significantly higher aortic characteristic impedance in strength-trained athletes (Bertovic et al 1999). However, large-artery stiffening associated with isolated systolic hypertension (ISH) is resistant to modification through short-term aerobic training (Ferrier et al 2001). In subjects 70 to 100 years old, aortic APWV was a strong, independent predictor of cardiovascular death, whereas systolic blood pressure and pulse pressure were not (Meaume et al 2001). GTN, an exogenous NO donor, significantly increased brachial artery area and compliance and significantly decreased pulse wave velocity (Kinlay et al 2001). In contrast to the beneficial effect of regular aerobic exercise, resistance training does not exert beneficial influences on arterial wall buffering functions (Miyachi et al 2003); indeed, several months of resistance training "reduced" central arterial compliance in healthy men (Miyachi et al 2004).

The 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase inhibitors (statins) significantly reduced total and low-density lipoprotein cholesterol and triglyceride levels, increased high-density lipoprotein cholesterol, and significantly lowered APWV (Ferrier et al 2002). Atorvastatin significantly reduced arterial stiffness in patients with rheumatoid arthritis, possibly through an anti-inflammatory action (Van Doornum et al 2004). Age, blood pressure, body mass index (BMI), triglycerides, blood glucose, and uric acid were shown to be significant variables for baPWV in both genders (Tomiyama et al 2003). Central aortic stiffness would appear to be the initial main site for promotion of the development of coronary atherosclerosis and ischemic heart disease (McLeod et al 2004).
Physiology of growth hormone (GH)
The ability of the somatotroph cells in the anterior pituitary to synthesize and secrete the polypeptide human growth hormone (hGH) is determined by a gene called the Prophet of Pit-1 (PROP1). When GH is translated, 70%-80% is secreted as a 191-amino-acid, 4-helix bundle protein and 20%-30% as a less abundant 176-amino-acid form (Baumann 1991) (Figure 1). Hypothalamic-releasing and hypothalamic-inhibiting hormones, acting via the hypophysial portal system, control the secretion of GH, which is secreted into the circulation (Melmed 2006).
In healthy persons, the GH level is usually <0.2 μg.L−1 throughout most of the day. There are approximately 10-12 intermittent bursts in a 24-hour period, mostly at night, when the level can rise to 30 μg.L−1 (Melmed 2006). GH secretion declines by approximately 14% per decade from the age of 20 years (Iranmanesh et al 1991).
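A small worked example of the quoted decline is sketched below, assuming (for illustration only) that the 14% per decade figure compounds from a baseline at age 20:

```python
def relative_gh_secretion(age_years, baseline_age=20.0, decline_per_decade=0.14):
    """Fraction of the age-20 secretion remaining, compounding 14% loss per decade."""
    decades = (age_years - baseline_age) / 10.0
    return (1.0 - decline_per_decade) ** decades

for age in (20, 40, 60, 80):
    print(age, round(relative_gh_secretion(age), 2))
# 20 1.0 | 40 0.74 | 60 0.55 | 80 0.4: roughly half the young-adult secretion by age 60
```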
GH action is mediated by a GH receptor, which is expressed mainly in the liver and is composed of dimers that change conformation when occupied by a GH ligand (Brown et al 2005).
Cleavage of the GH receptor provides a circulating GH binding protein (GHBP), prolonging the half-life and mediating the transport of GH. Growth hormone activates the growth hormone receptor, to which the intracellular Janus kinase 2 (JAK2) tyrosine kinase binds. Both the receptor and JAK2 protein are phosphorylated, and signal transducers and activators of transcription (STAT) proteins bind to this complex. STAT proteins are then phosphorylated and translocated to the nucleus, which initiates transcription of growth hormone target proteins (Argetsinger et al 1993).
Intracellular GH signalling is suppressed by suppressors of cytokine signalling. GH induces the synthesis of IGF-binding proteins (IGFBP), and their proteases regulate the access of ligands to the IGF-I receptor, affecting its action. Levels of IGF-I are at their peak during late adolescence and decline throughout adulthood, mirroring GH, and are determined by sex and genetic factors (Milani et al 2004). IGF-I levels reflect the secretory activity of growth hormone and are a marker for identification of GH deficiency (GHD) or excess (Mauras and Haymond 2005). The production of IGF-I is suppressed in malnourished patients, as well as in certain disease states, such as liver disease, hypothyroidism, or poorly controlled diabetes.
In conjunction with GH, IGF-I has varying differential effects on protein, glucose, lipid, and calcium metabolism and therefore body composition (Mauras et al 2000). Direct effects result from the interaction of GH with its specific receptors on target cells. In the adipocyte, GH stimulates the cell to break down triglyceride and suppresses its ability to take up and accumulate circulating lipids. Indirect effects are mediated primarily by IGF-I.
Little is known about the expression of skeletal muscle-specific isoforms of the IGF-I gene in response to exercise in humans, or about the influence of age and physical training status. A single bout of isometric exercise stimulated the expression of mRNA for the IGF-I splice variants IGF-IEa and IGF-IEc (mechano growth factor [MGF]) within 2.5 hours, an effect that lasts for at least 2 days after exercise (Greig et al 2006).
GH deficiency (GHD)
The therapeutic indications for recombinant human growth hormone (rhGH) in the UK are controlled by the National Institute for Clinical Excellence (NICE 2003), which has very strict guidelines and has recommended treatment with rhGH for children with:
• Growth disturbance in short children born small for gestational age
• Proven GH deficiency
• Gonadal dysgenesis (Turner's syndrome)
• Prader-Willi syndrome
• Chronic renal insufficiency before puberty (renal function decreased to less than 50%).

Treatment should be initiated and monitored by a pediatrician with expertise in managing GH disorders; treatment can be continued under a shared-care protocol by a general practitioner. Treatment should be discontinued if the response is poor (ie, an increase in growth velocity of less than 50% from baseline) in the first year of therapy. In children with chronic renal insufficiency, treatment should be stopped after renal transplantation and not restarted for at least a year.

NICE (2003) has recommended rhGH in adults only if the following three criteria are fulfilled:
• Severe GH deficiency, established by an appropriate method
• Impaired quality of life, measured by means of a specific questionnaire
• Already receiving treatment for another pituitary hormone deficiency.

Treatment should be discontinued if the quality of life has not improved sufficiently by nine months. Severe GHD developing after linear growth is complete but before the age of 25 years should be treated with rhGH; treatment should continue until adult peak bone mass has been achieved. Treatment for adult-onset GH (A-OGH) deficiency should be stopped only when the patient and the patient's physician consider it appropriate. Treatment with somatropin should be initiated and managed by a physician with expertise in GH disorders; maintenance treatment can be prescribed in the community under a shared-care protocol (British National Formulary 2008).

A-OGH deficient individuals are overweight, with reduced lean body mass (LBM) (Salomon et al 1989; Amato et al 1993; Beshyah et al 1995) and increased fat mass (FM), especially abdominal adiposity (Salomon et al 1989; Bengtsson et al 1993; Amato et al 1993; Beshyah et al 1995; Snel et al 1995). They have reduced total body water (Black et al 1972) and reduced bone mass (Kaufman et al 1992; O'Halloran et al 1993; Holmes et al 1994). There is also reduced strength and exercise capacity (Cuneo et al 1990, 1991a, 1991b), reduced cardiac performance, and an altered substrate metabolism (Binnerts et al 1992; Fowelin et al 1993; Russell-Jones et al 1993; O'Neal et al 1994; Hew et al 1996). This leads to an abnormal lipid profile (Cuneo et al 1993; Rosen et al 1993; De Boer et al 1994; Attanasio et al 1997), which can predispose to the development of cardiovascular disease. A-OGH deficiency also reduces psychological well-being and quality of life (QoL) (Stabler et al 1992; Rosen et al 1994). rhGH is currently used successfully to treat this deficiency.
GH excess (acromegaly)
GH excess results in the clinical condition known as acromegaly. It presents as a consequence of a pituitary tumor and is characterized by a multitude of signs and symptoms. Pituitary tumors account for approximately 15% of primary intracranial tumors (Melmed 2006). Acromegalics have an increased risk of DM, hypertension, and premature mortality due to CVD (Bengtsson et al 1993).
The most common side effects following rhGH administration arise from sodium and water retention. Weight gain, dependent edema, a sensation of tightness in the hands and feet, or carpal tunnel syndrome can frequently occur within days (Hoffman et al 1996). Arthralgia (joint pain), involving small or large joints, can occur, but there is usually no evidence of effusion, inflammation, or X-ray changes (Salomon et al 1989). Muscle pains can also occur. GH administration is documented to result in hyperinsulinemia (Hussain et al 1993), which may increase the risk of cardiovascular complications. GH-induced hypertension (Salomon et al 1989) and atrial fibrillation (Bengtsson et al 1993) have both been reported, but are rare. There have also been reports of cerebral side effects, such as encephalocele (Salomon et al 1989), headache with tinnitus (Bengtsson et al 1993), and benign intracranial hypertension (Malozowski et al 1993).

Cessation of GH therapy is associated with regression of side effects in most cases (Malozowski et al 1993).
APWV in pathological GH states
The potential mechanisms accounting for abnormal APWV in these GH states may include a direct IGF-I-mediated effect via increased production of NO. Qualitative alterations in lipoproteins have been described in GHD adults (O'Neal et al 1996), resulting in the generation of an atherogenic lipoprotein phenotype, which would contribute to endothelial dysfunction.
Growth hormone deficiency (GHD)
Increased oxidative stress exists in GHD adults, which may be a factor in atherogenesis and which is reduced by GH therapy (Evans et al 2000). Endothelial dysfunction exists in GHD adults (Evans et al 1999) and is reversible with GH replacement (Pfeifer et al 1999). An impaired endothelium-dependent dilatation (EDD) response was documented in GHD adults, which significantly improved after GH treatment.
Patients with GHD, with increased risk of vascular disease, have impaired endothelial function and increased AIx compared with controls. Replacement of GH resulted in improvement of both endothelial function and AIx, without changing BP (Smith et al 2002).
Replacement of GH for 3 months corrected endothelial dysfunction in patients with chronic heart failure (Napoli et al 2002).
Renal failure induces GH resistance at the receptor and post-receptor level, with concomitant endothelial dysfunction, which can be overcome by replacement of GH (Lilien et al 2004).
Growth hormone excess
Acromegaly is associated with changes in the central arterial pressure waveform, suggesting large artery stiffening. This may have important implications for cardiac morphology and performance as well as increasing the susceptibility to atheromatous disease.
Large artery stiffness is reduced in "cured" acromegaly (GH <2.5 mU.L−1) and partially reversed after pharmacological treatment of active disease (Smith et al 2003).
GH and inflammatory markers of CVD
Human peripheral blood T cells, B cells, natural killer (NK) cells, and monocytes express IGF-I receptors (Wit et al 1993). Administration of either GH or IGF-I can reverse the immunodeficiency of Snell dwarf mice (Van Buul-Offers et al 1986). GH replacement induced a significant overall increase in the percent specific lysis of K562 tumor target cells in healthy adults (Crist and Kraner 1990). NK activity was significantly increased throughout the six-week period of administration. In vitro studies using human lymphocytes indicate that GH is important for the development of the immune system (Wit et al 1993). However, pre-operative administration of GH did not alter C-reactive protein (CRP), serum amyloid A (SAA), or interleukin-6 (IL-6, an inflammatory cytokine) release (Mealy et al 1998).

Homocysteine (HCY) concentration has been established as an independent risk factor for atherosclerosis (Eichinger et al 1998; Stehouwer and Jakobs 1998). CRP and IL-6 levels and central fat decreased significantly in rhGH recipients with GHD after 18 months; lipoprotein(a) and glucose levels significantly increased, without affecting lipid levels (Sesmilo et al 2000). HCY impairs vascular endothelial function through significant reduction of NO production, which appears to potentiate oxidative stress and atherogenic development (van Guldener and Stehouwer 2000). However, HCY levels were not significantly elevated in GHD adults, and HCY was considered unlikely to be a major risk factor for vascular disease if there are no other risk factors present (Abdu et al 2001).

Pegvisomant (a GH receptor antagonist) did not induce significant acute changes in the major risk markers for CVD in apparently healthy abdominally obese men (Muller et al 2001). This suggested that the secondary metabolic changes, eg, inflammatory factors, which develop as a result of long-standing GHD, are of primary importance in the pathogenesis of atherosclerosis in patients with GHD. Patients with active acromegaly have significantly lower CRP and significantly higher insulin levels than healthy controls, and administration of pegvisomant significantly increased CRP to normal levels (Sesmilo et al 2002). GH secretory status may thus be an important determinant of serum CRP levels, but the mechanism and significance of this finding are as yet unknown.

Inflammatory markers are predictive of atherosclerosis and cardiovascular events (Ridker et al 2002; Danesh et al 2004). The metabolic syndrome (MS) is correlated with elevated CRP and is a predictor of coronary heart disease and DM (Sattar et al 2003). IL-6 concentrations were significantly increased in GHD compared to BMI-matched and nonobese controls, respectively (Leonsson et al 2003). CRP was significantly increased in patients compared to nonobese controls, but not significantly different compared to BMI-matched controls. Age, LDL-cholesterol, and IL-6 were positively correlated, and IGF-I was negatively correlated, with arterial intima-media thickness (IMT) in the patient group, but only age and IL-6 were independently related to IMT.
Potential mechanisms
Oxidative stress represents a mechanism leading to the destruction of neuronal and vascular cells.
Oxidative stress occurs as a result of the production of free radicals or reactive oxygen species (ROS). ROS include the superoxide anion, hydrogen peroxide, NO, and peroxynitrite. The production of ROS, such as peroxynitrite and NO, can lead to cell injury through cell membrane lipid destruction and cleavage of DNA (Vincent and Maiese 1999). Production of excess ROS can result in the peroxidation of docosahexaenoic acid (DHA), a precursor of neuroprotective docosanoids (Mukherjee et al 2004). DHA is a fatty acid released from membrane phospholipids and is derived from dietary essential fatty acids. It is involved in memory formation, excitable membrane function, photoreceptor cell biogenesis and function, and neuronal signaling. DHA may have a role in modulating IGF-I binding in retinal cells (Yorek et al 1989). Neuroprotectin D1 (NPD1) is a DHA-derived mediator that protects the central nervous system (brain and retina) against cell injury-induced oxidative stress in cerebral ischemia-reperfusion. It up-regulates the anti-apoptotic Bcl-2 family proteins Bcl-2 and Bcl-xL and decreases pro-apoptotic Bax and Bad expression (Bazan 2005). IGF-I also blocks Bcl-2 interacting mediator of cell death (Bim) induction and intrinsic death signalling in cerebellar granule neurons (Linseman et al 2002).
Dorsal root ganglia (DRG) neurons express IGF-I receptors (IGF-IR), and IGF-I activates the phosphatidylinositol 3-kinase (PI3K)/Akt pathway. High glucose exposure induces apoptosis, which is inhibited by IGF-I through the PI3K/Akt pathway. IGF-I stimulation of the PI3K/Akt pathway phosphorylates three known Akt effectors: the survival transcription factor cyclic AMP response element binding protein (CREB) and the pro-apoptotic effector proteins glycogen synthase kinase-3beta (GSK-3beta) and forkhead (FKHR). IGF-I regulates survival at the nuclear level through accumulation of phospho-Akt in DRG neuronal nuclei, increased CREB-mediated transcription, and nuclear exclusion of FKHR. High glucose levels increase expression of the pro-apoptotic Bcl protein Bim (a transcriptional target of FKHR). High glucose also induces loss of the initiator caspase-9 and increases caspase-3 cleavage, effects blocked by IGF-I, suggesting that IGF-I prevents apoptosis in DRG neurons by regulating PI3K/Akt pathway effectors, including GSK-3beta, CREB, and FKHR, and by blocking caspase activation (Leinninger et al 2004).
The unique role of IGF-IR in maintaining the balance of death and survival in foetal brown adipocytes has been demonstrated in IGF-IR deficiency (Valverde et al 2004).
A vascular protective role for IGF-I has been suggested because of its ability to stimulate NO production from endothelial and vascular smooth muscle cells. IGF-I probably plays a role in aging, atherosclerosis and cerebrovascular disease, cognitive decline, and dementia. In cross-sectional studies, low IGF-I levels have been associated with an unfavorable profile of CVD risk factors, such as atherosclerosis, abnormal lipoprotein levels, and hypertension, while in prospective studies, lower IGF-I levels predict future development of ischemic heart disease. The fall in the levels of GH (Iranmanesh et al 1991) and IGF-I (Milani et al 2004) with aging correlates with cognitive decline, and it has been suggested that IGF-I plays a role in the development of dementia. IGF-I is highly expressed within the brain and is essential for normal brain development. IGF-I has anti-apoptotic and neuroprotective effects and promotes projection neuron growth, dendritic arborization, and synaptogenesis (Ceda et al 2005).
Conclusion
Collectively, these data are consistent with a causal link between the age-related decline in GH and IGF-I levels and cardiovascular and cerebrovascular disease in senescence. Research into the benefits of replacement hormone therapy is still in its infancy. It was only three decades ago that rhGH became available, and significant progress into the somatopause and related pathologies has occurred since. Could the future propose the concomitant use of rhGH and rhIGF, as has been used in certain refractory cases of diabetes and GH resistance (Mauras and Haymond 2005)? The reviews of rhGH replacement in obesity have not been revolutionary (Liu et al 2007). It might be expedient to research the combination of rhGH and rhIGF in the variety of physiological GHD states to determine any beneficial effects. After all, it was not until 1999 that hypothyroidism was identified as being more appropriately treated with tri-iodothyronine (T3) and tetra-iodothyronine (T4) than with T4 alone (Bunevicius et al 1999).
Disclosure
The authors report no conflicts of interest in this work.
Clinical outcomes: to be a surrogate or not to be ...?
Introduction
Clinical trials remain the bedrock of the introduction of new therapies for patients with cancer. Over the past 30 years there have been enormous improvements in outcomes for patients with breast cancer, based largely (but not exclusively) on the widespread implementation of the results of randomized trials. Widespread use of screening mammography, breast conservation, adjuvant hormonal therapy, adjuvant chemotherapy and, most recently, adjuvant trastuzumab have all been based on the results of well designed clinical trials. All of these interventions have been shown to either improve survival or, in the case of breast conservation, maintain survival despite less radical surgery. For most women with early breast cancer, it is the avoidance of the death sentence they feel hangs over them when they are first diagnosed with cancer, that is the most important reason why they undergo these treatments.
Clinical research in breast cancer remains as active as ever, with newer interventions being tested in ever larger and/or more complex trial designs. Many studies may not be designed to test questions about overall survival, with recent studies also addressing tolerability, issues of limited resources and, increasingly, means to target treatments to the subgroups of patients who really benefit from the specific therapy. The goal of studies that aim to optimize treatment may not be the same for the researcher and the patient. However, most would agree that it would be ideal if we could reduce the diagnosis of breast cancer to one that had the same implications as being diagnosed with a 'touch of blood pressure', namely the concept that although a few patients may still suffer unwanted consequences of the diagnosis, for the vast majority the implication is the necessity to undergo relatively nontoxic treatment that effectively prevents recurrence. To this end, many trials are now designed to address different primary end-points from the traditional one, still much beloved of the US Food and Drug Administration, of overall survival (OS).
Surrogate end-points: fit for purpose?
Even when improvements in OS are the ultimate aspiration of a study, it is common for reasons of timeliness to make another end-point the primary determinant of success. For adjuvant trials, the use of disease-free survival (DFS) is an accepted surrogate because, to date, mature follow-up of both individual trials and their meta-analyses has consistently confirmed that improvements in DFS subsequently translate into firm improvements in OS. For advanced disease studies, time to disease progression (TTP) is frequently used, but in fact this much less commonly precedes clear improvements in OS, although a few notable exceptions exist (the use of trastuzumab and, in some studies, taxanes). The reasons for this divergence are not entirely clear, because in both adjuvant and advanced disease studies the possibility exists of a loss of effect of the earlier use of a novel intervention as a consequence of its use after relapse/progression. Furthermore, most successful adjuvant interventions are designed on the basis of a positive improvement in TTP in an advanced disease study, and survival gains in early disease can often be seen despite the lack of gains in OS in a comparable advanced disease study. However, this may not be as big an issue as it first appears because it highlights an important question: when should a trial be designed simply to produce the data required to justify the testing of an intervention in early disease, and when should it be designed to benefit the patient population in whom it is actually being tested?
In advanced disease, although an improvement in survival remains the over-arching wish of most patients, when this is not likely there nevertheless remain important improvements that are highly clinically relevant. There are good data that improvements in quality of life and/or reduction in symptoms correlate with tumour response, so that where patients are very symptomatic, demonstration of an increased response rate is a worthwhile gain, provided that this does not come at the cost of major toxicity. In contrast, for many women with more indolent, relatively asymptomatic disease, absence of progression may be the primary goal. It is interesting to note, therefore, that patients with stable disease for at least 6 months on hormonal therapy often have similar OS to those whose disease actually shrinks on therapy, justifying the use of TTP and/or clinical benefit as the primary end-point for many such studies.
Conventional drug development model
For cytotoxics, the phase I-II-III development sequence (as shown in Figure 1) was in fact based on the experience that the maximum tolerated dose (MTD) in phase I often turned out to be an effective and tolerated dose in early stage disease. However, for cytotoxics the identification of a dose on the basis that it caused an acceptable level of cytotoxicity in normal tissues, which could then cause a desirable level of cytotoxicity in malignant tissues, is perhaps not a great surprise! All patients contribute toward the toxicity end-point in a conventionally designed phase I study, and so it is a relatively efficient way to reach the dose level likely to deliver efficacy if it exists. In contrast (Figure 2), the way in which we might determine a maximum biologically effective dose could be more difficult.
Therefore, in a translational phase I or dose-finding study, in which there is a good surrogate normal tissue with little variation in sensitivity (the biological equivalent of the bone marrow or gastrointestinal mucosa for a classical cytotoxic), this may not be a problem. When the surrogate end-point used is in the tumour, however, we have the problem that we will have to increase the number of patients at each dose level by a factor related to the proportion of patients with sensitive tumours. For example, if only half of the patients have sensitive disease, then in a cohort of three patients (the classic phase I design) there is a one in eight (0.125) chance that no biological effect will be seen at one level simply because all three tumours were totally resistant; across three active dose levels the chance that we will have one cohort with no responses becomes 0.29, or almost one in three! If one then considers the risk that a biologically maximally effective dose is identified because no responses are seen at a higher dose level, it becomes apparent that this can happen not infrequently. Hence, if only half of the patients have sensitive disease, then one will need at least five patients per dose cohort to have less than a 5% chance of seeing no response at a biologically active dose, a figure that rises as the proportion of responders falls.
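The cohort arithmetic above can be checked with a few lines of Python; the 0.5 sensitivity fraction is the worked example from the text, and the sketch assumes patients respond independently:

```python
def p_silent_cohort(cohort_size, p_sensitive):
    """Chance that every patient in a cohort happens to have a resistant tumour."""
    return (1.0 - p_sensitive) ** cohort_size

p = 0.5
print(p_silent_cohort(3, p))             # 0.125: one in eight per 3-patient cohort
print(3 * 0.125 * (1 - 0.125) ** 2)      # ~0.29: exactly one silent cohort across three levels
n = 1
while p_silent_cohort(n, p) >= 0.05:     # smallest cohort with <5% chance of no response
    n += 1
print(n)                                 # 5, matching the figure quoted above
```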
Presurgery systemic therapy
Perhaps the area of clinical research where there is most interest in surrogates is in the use of systemic therapy before surgery. Essentially two models exist: the short 'preoperative' course, designed not to deliver systemic benefit but only short-term biological changes in the breast cancer; and the longer 'neoadjuvant' or 'primary systemic' therapy, in which clinical changes in the primary tumour are the goal, using drugs that deliver systemic benefits. It became clear over many years that when patients are treated with 3 to 6 months of chemotherapy, those patients who have no residual invasive disease in the primary tumour and/or ipsilateral axillary lymph nodes have the best long-term outcome. It was anticipated, therefore, that where additional therapy given before surgery increased the proportion of patients achieving such pathological complete responses (pCR), this would lead to gains in long-term outcome. To date, that has not been confirmed, in particular in the National Surgical Adjuvant Breast and Bowel Project (NSABP) B-27 trial, in which the addition of docetaxel before surgery doubled the proportion of patients achieving pCR but made no significant difference to the distant DFS. However, it remains true that on the one hand the addition of taxanes and/or trastuzumab to neoadjuvant chemotherapy does increase the proportion of patients achieving pCR, and on the other that the addition of those same agents to postoperative adjuvant therapy does lead to improved OS. Therefore, pCR would appear to be a surrogate predictor of a more effective adjuvant therapy; what it does not yet appear to do is to identify the precise patients who will benefit from the therapy!

[Figure 1. Traditional drug development: defining maximum tolerated dose (MTD) using advanced disease as a model.]
[Figure 2. Biological drug development: defining biologically effective dose (BED) using advanced disease as a model.]

For patients given short-term exposure to a drug before surgery, there is interest in understanding what biological changes in that context mean for long-term clinical benefit. To date, no study has assessed the prognostic or predictive implications of a specific biological change induced by a short-term exposure to therapy before routinely timed surgery. However, we do have data on the biological changes seen after 2 weeks of exposure to tamoxifen and/or anastrozole, in patients who then continued on therapy, had surgery after a further 10 weeks of therapy, and then were encouraged to continue on the same hormonal therapy in the adjuvant setting. In this setting, it was clear that those patients whose tumours had the lowest rate of proliferation at 2 weeks had the lowest rate of recurrence. These data would appear to indicate that where a tumour has a very low proliferation rate (either intrinsically or, more probably, because hormonal therapy has reduced it), there is a low rate of relapse, at least over the first few years. It does need to be borne in mind that where patients are given adjuvant tamoxifen, at least in the older trials that make up the Oxford Overview, the majority of relapses occur after the 5 years of therapy, so that the low proliferating tumours might just be the ones that take longer to relapse.
However, it seems reasonable to take the view that reduction in proliferation after 2 weeks of hormonal therapy is a surrogate for identifying a lower rate of relapse during the first few years after surgery, and there is a high chance that therapies that are better at doing this (as was shown in the above study for the aromatase inhibitor anastrozole) will be better at preventing relapse during the first few years (as ATAC [Arimidex, Tamoxifen, Alone or in Combination] has shown).
Conclusion
For most patients, and for most trials in early disease, a treatment that cures more people, or one that cures just as many with fewer unwanted side effects, is the desired goal and no surrogate can really replace this. Use of DFS as the first end-point, because it has consistently been validated as predating improvements in OS, is perfectly acceptable, as long as studies with novel interventions continue to collect the follow-up data to demonstrate that this linkage applies to newer biological interventions just as it does for conventional treatments.
However, for many studies there is a good clinical justification for using other end-points that meet the needs of the patient population being studied; end-points such as TTP, response rate, and clinical benefit rate are therefore not just to be seen as surrogates but as the most appropriate end-point for that trial.
There remains the question as to when a surrogate is a valid end-point for subsequent improvements in DFS and/or OS. Two obvious candidates in the field of preoperative or neoadjuvant therapy are falls in proliferation in patients treated with primary (short-term or long-term) endocrine therapy, or the proportion of patients achieving pCRs to neoadjuvant chemotherapy. Both appear to be reliable, at least most of the time, but neither has yet been shown to have an ideal level of discrimination between agents that can and cannot lead to changes in ultimate outcomes. This lack of clear linkage could in fact be because some of the agents tested in these studies have themselves not delivered a large enough improvement in efficacy to confirm the link with DFS and/or OS.
Surrogates must be proven to be able to take the place of the real end-point in question, and to date, this linkage is only really established for some treatments. For new biological agents this is even less certain, although the success of trastuzumab suggests that we can have confidence that a model developed for untargeted cytotoxics, and loosely targeted hormonal therapies, may work out for many newer agents. However, in my view, this cannot be taken for granted, and the relevant longer-term follow-up is necessary in a generation of trials of newer agents with a size such that primary end-points are met within a short time of closure to accrual.
Institutions or Contingencies? A Cross-Country Analysis of Management Tool Use by Public Sector Executives
Management tools are often argued to ameliorate public service performance. Indeed, evidence has emerged to support positive outcomes related to the use of management tools in a variety of public sector settings. Despite these positive outcomes, there is wide variation in the extent to which public organizations use management tools. Drawing on normative isomorphism and contingency theory, this article investigates the determinants of both organization-oriented and client-oriented management tool use by top public sector executives. The hypotheses are tested using data from a large-N survey of 4,533 central government executives in 18 European countries. Country and sector fixed-effects ordinary least squares regression models indicate that contingency theory matters more than normative isomorphism. Public executives working in organizations that are bigger and have goal clarity and executive status are more likely to use management tools. The only normative pressure that has a positive impact on management tool use is whether public sector executives have a top hierarchical position.
The New Public Management (NPM) movement of the 1980s generated an influx of private sector management tools into public organizations (Hood 1991; Osborne 2006). Instruments such as strategic planning, performance appraisal, and management by objectives became the core of the public manager's toolbox. Because the private sector was considered a "role model" in efficiency and effectiveness, it was argued that these tools could also generate a more efficient and effective government (Diefenbach 2009). In recent years, insights from service management have complemented these more organization-oriented management tools with client-oriented management tools such as client surveys and quality management systems. These tools, it has been argued, can help public organizations become more responsive to the needs of their clients, thus answering the call for a more service-dominant approach to public management (Osborne, Radnor, and Nasi 2013).
At the heart of management tools' popularity is the assumption that these tools contribute to public service performance-be it indicators of efficiency and effectiveness or indicators of responsiveness (Andrews and Van de Walle 2013;Walker and Andrews 2015). Indeed, empirical evidence has emerged to support positive outcomes resulting from the use of organization-oriented and client-oriented management tools by public organizations (e.g., Audenaert et al. 2016;Poister, Pasha, and Edwards 2013), and recent meta-analyses have confirmed the significant and positive association, on balance, between several management tools and public service performance (Gerrish 2016;Walker and Andrews 2015). At the same time, critics have argued that adopting management tools does not always make sense and does not contribute to productivity or efficiency. Andrews (2010), for instance, reviewed studies on the introduction of performance management in public organizations and found that this made little difference to organizational efficiency.
Despite the overall popularity of management tools in public organizations as well as the evidence supporting (mainly) positive outcomes, widespread heterogeneity of tool use has been observed throughout the public sector, and little evidence has been uncovered aimed at explaining that heterogeneity (George and Desmidt 2014;Poister, Pitts, and Edwards 2010). Several authors have looked at management tool use across public organizations (e.g., Berry 1994;Streib 1989, 2005) and found significant individual, organizational, and sectoral differences in the extent to which management tools are used. Nevertheless, few studies have delved into the determinants of these differences (George and Desmidt 2014). Empirical studies have typically looked at management tools as something public organizations "have," a macro process that helps explain performance, whereas little is known about tools as something that public sector practitioners "do" (Bryson, Crosby, and Bryson 2009;George, Desmidt et al. 2017). More insights are required into the people actually using management tools in their daily practice. Hence, this article investigates the determinants of management tool use in the public sector and asks, Why are some public sector executives more prone to use management tools than others?
Drawing on normative isomorphism (Powell and DiMaggio 1991), we hypothesize that public sector executives who have significant private sector experience, who have a management degree, who hold high hierarchical positions, and who work in organizations that have unclear goals are more likely to use management tools as a mechanism to enhance their legitimacy. We complement these hypotheses with insights from contingency theory (Donaldson 2001) and argue that public sector executives working in bigger organizations with clear goals and executive status are more prone to use management tools because of their organizational context. These hypotheses are tested using a cross-sectional, large-N survey of 4,533 public sector executives from 18 European countries. Two country and sector fixed-effects ordinary least squares (OLS) regression models are constructed: one focused on organization-oriented management tools as the dependent variable and one focused on client-oriented management tools as the dependent variable.
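A hedged sketch of such a model in Python with statsmodels is shown below; the file name, variable names, and coding are illustrative placeholders rather than the authors' actual data or measures:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey extract: one row per executive, with a tool-use scale and covariates
df = pd.read_csv("executive_survey.csv")

# Country and sector fixed effects enter as categorical dummies via C(...)
formula = (
    "org_tool_use ~ private_experience + mgmt_degree + top_level + goal_ambiguity"
    " + org_size + goal_clarity + agency_status + C(country) + C(sector)"
)
model = smf.ols(formula, data=df).fit(cov_type="HC1")  # robust standard errors
print(model.summary())
```

The same specification would be re-estimated with a client-oriented tool-use scale as the dependent variable to mirror the article's second model.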
This article contributes to the public management literature in that it is one of the first large-N empirical studies to investigate the role of normative isomorphism and contingency theory in explaining management tool use in public organizations and to use a sample of central government executives from 18 European countries. Insights into the determinants of management tool use can help explain the current heterogeneous situation in the public sector and provide policy makers with evidence on where to prioritize efforts when devising and implementing public management reforms. We also identify the applicability of normative isomorphism and contingency theory as theoretical frameworks within public management and assess which of the two frameworks has the strongest explanatory value in the specific case of management tool use by public sector executives.
Moreover, by using data from 18 European countries, the predominant Anglophone focus of the public management literature is complemented by a broader European perspective. This is no trivial matter, as recent insights have emphasized the importance of context and culture when investigating core questions within public management (Meier, Rutherford, and Avellaneda 2017; O'Toole and Meier 2015). We account for this contextual reality by using evidence from 18 European countries and by identifying, through a country and sector fixed-effects model, which findings on management tool use hold across these European countries and sectors. Indeed, there are several differences between the European Union (EU) and U.S. public sectors, where most research has been focused, including the more extensive regulatory environment within the EU as well as the stronger influence of labor unions (Löfstedt and Vogel 2001). Additionally, public organizations within the EU experience less autonomy, stronger political control, and more regulated labor markets than their U.S. counterparts (Meier, Rutherford, and Avellaneda 2017). These contextual differences thus make the EU a particularly interesting setting to complement current U.S.-based studies.
We first present our hypotheses based on normative isomorphism and contingency theory. Next, we elaborate on our methods and data. We present the results from two country and sector fixedeffects OLS regression models and discuss the implications of our findings for public management theory and practice.
Theory and Hypotheses
There is an established research tradition that describes the use and diffusion of management tools in the public sector. Studies have looked at management tools in general (e.g., Poister and Streib 1994;Rivenbark and Kelly 2003) as well as specific tools such as management accounting (e.g., Lapsley and Wright 2004), performance measurement (e.g., Poister and Streib 1999;Torres, Pina, and Yetano 2011), management by objectives (e.g., Poister and Streib 1995), and strategic planning (e.g., Berry and Wechsler 1995;Poister and Streib 2005). These studies have predominantly focused on U.S. and U.K. local government, with some notable exceptions. For instance, Botner (1985) investigated management tool use at the U.S. state level. Damonte, Dunlop, and Radaelli (2014) looked at the use of policy control tools in 17 European countries. Van Dooren (2005) centered on performance measurement at the Flemish regional government level, and Jeannot and Guillemot (2013) focused on management tool use in French central government departments.
Typically, these studies describe the presence of management tools and potential differences across groups, whereas explaining these differences has received limited attention. More recent work has attempted to explain variation in and effects of the use of management tools, generally couched under the conceptual umbrella of innovation diffusion (e.g., Walker, Damanpour, and Devece 2010;Hansen 2011) or normative and mimetic isomorphism (e.g., Pina, Torres, and Yetano 2009). Our article builds on this research tradition using two complementary theoretical approaches to explain variation in management tool use by public sector executives. The first is normative isomorphism, and the second is contingency theory. Similar studies on the use of management tools have employed exactly this theoretical juxtaposition as well, and findings either support their interconnectedness (e.g., Carvalho, Gomes, and Fernandes 2012) or mainly support contingency theory (e.g., Laegreid, Roness, and Rubecksen 2007). Figure 1 presents the conceptual model, which will be discussed in the following sections.
Normative Isomorphism and Management Tool Use
The first theoretical approach is normative isomorphism. The managerialization of public sector executives is a process of professionalization and standard setting. This means that members of the profession will use practices associated with the management profession such as organization-oriented and client-oriented management tools. This professionalization is "the collective struggle of members of an occupation to define the conditions and methods of their work" (Powell and DiMaggio 1991, 152).
The shift in public sector executives' role from being administrators to becoming public managers means that members of this managerial occupation come to identify themselves not just as members of the administrative elite but also as members of the managerial profession, adhering to commonly accepted standards within that group. To garner legitimacy, they want to present themselves as new public managers. This involves embracing organization-oriented and client-oriented management tools as the method to perform managerial work. This identification with the managerial profession has its source in a shared formal education and in the existence of professional networks for the formulation and diffusion of norms (Ashworth, Boyne, and Delbridge 2009; Powell and DiMaggio 1991). Increasingly, also in the public sector, such networks and standard setting become transnational and international (Djelic and Sahlin-Andersson 2006). Teodoro (2014) looked at whether belonging to a specific profession makes managers manage differently. He argued that normative isomorphism helps explain "how professions shape executive management" (2014, 983). However, in a study on management tool use in Norwegian state-level public organizations, Laegreid, Roness, and Rubecksen (2007) found that normative isomorphism was not a major explanation for such use. This may be explained by the insights of Ashworth, Boyne, and Delbridge (2009), who, in a study among 101 English public sector organizations, found that normative isomorphism had a strong effect on organizational strategies and cultures but a weak effect on structures and processes. Management tools fit the second category.
There are two reasons why public organizations are a good area in which to study normative isomorphism's impact on managers. First, a study by Frumkin and Galaskiewicz (2004) found that government organizations are more susceptible than private organizations to normative isomorphic pressures, thereby confirming one of the hypotheses put forward by Powell and DiMaggio (1991). Second, we follow the recommendation of Teodoro (2014, 1000) to search "for evidence of normative isomorphism," especially in types of organizations in which executives come from different kinds of professions. This is particularly the case for European central government administrations, in which public sector executives show a trend toward managerialism (Van Thiel, Steijn, and Allix 2007) yet still have not converged around a shared (public) managerial identity (Meyer et al 2014).
Public sector executives in European countries have a wide variety of professional backgrounds. Unlike some other industries, central government organizations are not dominated by one particular professional group at the managerial level. Some central government administrations are largely populated by traditional career bureaucrats, who have had legal training and spent most of their careers in the public sector. Others are populated by a new brand of managers who sometimes have private sector experience and management education. There are major country differences, with German top executives having enjoyed predominantly legal training and French top executives coming from the "Grandes Écoles," but beyond this, one finds executives from a multitude of professional backgrounds and with a diverse set of educational backgrounds, ranging from law, political science, and economics to natural science, engineering, and medicine (Hammerschmid et al. 2016; Thijs, Hammerschmid, and Palaric 2018).
Figure 1 Model Predicting Management Tool Use by Public Sector Executives
These new public managers distinguish themselves from traditional bureaucrats by having more private sector work experience and an educational background in management studies (Meyer and Hammerschmid 2006; Van Thiel, Steijn, and Allix 2007). Management education is expected to have a positive impact on management tool use (Jarzabkowski et al. 2013), and having a background in the private sector is also expected to contribute to the professionalization of the managerial role. Public sector executives with a managerial degree and private sector experience can be expected, based on normative isomorphism, to more strongly associate with the profession of a manager-thus indicating that they are particularly prone to the use of management tools (Fernández Gutiérrez and Van de Walle 2018). This leads to the following two hypotheses: Hypothesis 1: Public sector executives with extensive private sector experience are more likely to use management tools.
Hypothesis 2: Public sector executives with a management degree are more likely to use management tools.
In many studies on normative isomorphism, scholars have looked at professional association membership as a key channel for such pressures. Examples are studies looking at membership in professional associations, as in the case of accountants (Christensen and Parker 2010), or membership in some association that groups organizational peers (Frumkin and Galaskiewicz 2004). Top public sector management in Europe, however, is a diverse and fragmented group, making association membership less suitable for the operationalization of normative pressures. However, top public officials, especially those working within the same policy field, meet regularly at certain forums. An alternative operationalization is to consider membership in a select group of top public sector executives (i.e., those at the top hierarchical level of their organization) as a proxy for such associational membership. Top public sector executives belong to a small group of peers, and they may feel pressure to use modern management tools in order to demonstrate their position as a modern top executive within the peer group. In addition, they are a visible group, subject to scrutiny by the public and by politicians, especially when they sit at the top of the hierarchy, making it necessary for them to use management tools for legitimacy purposes. Moreover, it can be assumed that top public sector executives are exposed to international peers through Organisation for Economic Co-operation and Development (OECD) and EU meetings, where they also pick up new practices (Pal 2012). We therefore hypothesize the following: Hypothesis 3: Public sector executives at the highest hierarchical level of their organization are more likely to use management tools.

Powell and DiMaggio (1991) hypothesized that goal ambiguity is also a reason why managers might experience normative pressures because ambiguous goals imply a necessity to adjust to perceived norms as an indicator of legitimacy. The reason is that organizations whose goals are not very clear depend on factors other than output for their legitimacy, whereas for organizations with clear goals, it is easier to demonstrate their relevance and legitimacy. It is expected that public sector executives in organizations with high goal ambiguity will be subjected to a higher degree of normative isomorphism and will use all kinds of modern, almost normative, management tools to demonstrate their legitimacy. In relation to this, Teodoro (2014) argued that public organizations have particularly ambiguous and competing goals. Hence, the power of normative isomorphism will be especially strong because the managerial profession, and its use of management tools, provides norms on how to run a public organization (Teodoro 2014). To summarize, when an organization has unclear goals, it is hard to show that it is actually doing a good job. Hence, management tools can help demonstrate to the outside world that the organization is a real, legitimate organization that uses the same tools that other professional organizations use. For this reason, we hypothesize the following: Hypothesis 4a: Public sector executives who perceive their organization to have high goal ambiguity are more likely to use management tools.
Contingency Theory and Management Tool Use
In their study of management tool use in Norwegian state-level public organizations, Laegreid, Roness, and Rubecksen (2007) concluded that neo-institutionalist approaches are insufficient to explain tool use. Instead, they suggested that tool use is largely driven by the "functional applicability of the tool" (Laegreid, Roness, and Rubecksen 2007, 407), and therefore contingency-based explanations may be of more use. Contingency theory argues that organizational structure and management behavior are contingent on the technical task and environment of the organization (Donaldson 2001). For instance, early contingency scholars found differences in the organizational structure of local government authorities depending on contingencies such as size, environment, interdependence, and change (Hinings, Greenwood, and Ranson 1975).
Contingency studies have shown that organizations that are characterized by standardization and formalization can be expected to more easily use management tools (Pugh et al. 1968). Burns and Stalker (1961) drew attention to one important aspect of standardization and formalization within organizations: goal clarity. When goals are clear, this leads to measurable outputs related to those goals-which, in turn, makes it easier to use management tools geared toward goal formulation and implementation. For instance, Van Dooren (2005) found that having measurable outputs in the organization was related to a higher uptake of performance measurement tools. Contingency theory would thus predict a higher use of management tools in organizations with goal clarity. This implies that the theoretical predictions on goal ambiguity/clarity emanating from contingency theory are exactly the opposite of those emanating from normative isomorphism. To summarize, clear goals imply easier to measure and manage processes and outputs which, in turn, make it easier to use management tools because of the strong fit between how management tools work (i.e., based on goals, indicators, etc.) and what is happening in the organization. We hypothesize the following: Hypothesis 4b: Public sector executives who perceive their organization to have high goal clarity are more likely to use management tools.
A more prosaic contingency factor is organizational size. On the one hand, organizational size increases the need to use management tools to control the organization and to know what is happening within the organization and in its environment (Van Dooren 2005). On the other hand, organizational size is also indicative of the administrative and financial capacity to actually implement and use management tools. Earlier studies of the use of management tools have indeed demonstrated the importance of organizational size (e.g., Botner 1985; Laegreid, Roness, and Rubecksen 2007; Poister and McGowan 1984)-resulting in the following hypothesis: Hypothesis 5: Public sector executives who work in bigger organizations are more likely to use management tools.
Third, task matters. For instance, in their study of Norwegian state-level public organizations, Laegreid, Roness, and Rubecksen (2007) found that service-providing organizations are more likely to use quality management tools. Public organizations with executive status, as delivery organizations, are typically involved in routine, standardized, repetitive tasks and work directly with clients. As a result, we can expect these organizations to be more likely to use management tools compared with ministries. Moreover, public organizations with executive status have been particularly influenced by NPM's emphasis on being more responsive to clients as well as efficient and effective-resulting in a natural need for organization-oriented and client-oriented management tools (Laegreid, Roness, and Rubecksen 2007). This results in our final hypothesis: Hypothesis 6: Public sector executives who work in public organizations with executive status are more likely to use management tools.
Data
This article relies on data from the COCOPS (Coordinating for Cohesion in the Public Service of the Future) Top Public Executive Survey, a population survey of top public sector executives in central government in European countries collected as part of a large collaborative European research project (http://www.cocops.eu) (see Hammerschmid et al. 2016). It is a population survey because it targeted the entire population of central government managers, including the regional level in federal countries, at the highest hierarchical levels, following a detailed mapping of government structures, top positions, and their incumbents by national research teams in each of the 18 participating European countries (Germany, France, Spain, Italy, Estonia, Norway, the United Kingdom, The Netherlands, Hungary, Austria, Portugal, Lithuania, Ireland, Sweden, Denmark, Finland, Iceland, and Croatia).
The total N is 6,824, or an average response rate per country of 31.50 percent. Response rates vary per country, ranging from 51 percent in Iceland and just above 40 percent in Finland and Sweden down to just under 18 percent in Spain and Italy. Data were collected using a standardized questionnaire in the local language based on an English-language master questionnaire, using a combination of online questionnaires and paper-based questionnaires where local practice dictated such an approach or where initial response rates were too low. The COCOPS survey included questions on the use of NPM-driven, organization-oriented management tools such as strategic planning as well as service-driven, client-oriented management tools such as client surveys. The survey, method description, and data set are available in open access through the GESIS Social Science Data Archive (https://dbk.gesis.org/dbksearch/).
In our analyses, we decided to not include respondents who had missing values on any of our variables. This is particularly the case for respondents who indicated they were not able to assess the extent of management tool use for certain management instruments; for that reason, they were removed from the data set. This reduced our data set to a total N of 4,533 public sector executives for our model predicting organization-oriented management tool use and 4,489 public sector executives for our model predicting clientoriented management tool use. We tested for nonresponse bias by conducting a time-trend extrapolation test comparing early and late respondents in the different country samples. In this test, the replies of two groups of early and late respondents were compared to assess whether replies significantly differed over time; no significant differences emerged. Moreover, sampling issues were avoided by surveying our entire population as opposed to extracting a sample framework (Lee, Benoit-Bryan, and Johnson 2012).
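A minimal sketch of such a wave (time-trend extrapolation) analysis in Python is given below; the column names (response_day, tool_use, country) are illustrative assumptions about the data set, not the survey's actual variable names:

```python
# Sketch of a time-trend extrapolation (wave analysis) nonresponse test:
# compare early and late respondents on the outcome within each country.
# Column names (response_day, tool_use, country) are illustrative assumptions.
import pandas as pd
from scipy import stats

def wave_analysis(df: pd.DataFrame, outcome: str = "tool_use") -> pd.DataFrame:
    rows = []
    for country, grp in df.groupby("country"):
        cutoff = grp["response_day"].median()
        early = grp.loc[grp["response_day"] <= cutoff, outcome]
        late = grp.loc[grp["response_day"] > cutoff, outcome]
        t, p = stats.ttest_ind(early, late, equal_var=False)  # Welch's t-test
        rows.append({"country": country, "t": t, "p": p})
    return pd.DataFrame(rows)  # large p-values: no sign of nonresponse bias
```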
Because we are dealing with administrative elites who can be easily identified, privacy considerations were important, and therefore no administrative data on the respondents' organization could be linked to the survey answers. All data about the respondents' organization were thus provided by the respondents themselves. The strict privacy considerations meant that only limited representativeness checks could be performed on the data, but a number of such checks at the country level and on respondents' gender showed no major biases (Hammerschmid, Van de Walle, and Stimac 2013).
Dependent Variables
The dependent variables in this study are the use of management tools as reported by public sector executives. For a series of management tools, respondents were asked, "To what extent are the following instruments used in your organization?" This was measured on a seven-point scale, ranging from "not at all" to "to a large extent." Respondents were also explicitly offered a "cannot assess" option, because not all public sector executives may be fully aware of the type of management tools used throughout their organization. This resulted in missing data for specific financial management tools.
Because management tools typically have different outcomes (i.e., efficiency and effectiveness focus of organization-oriented, NPM-driven tools versus responsiveness focus of client-oriented, service-driven tools), we performed an exploratory factor analysis (EFA) on the list of eight management tools using principal component analysis (PCA) with an oblique rotation (see table 1). We use PCA because we aim to make statements based on our data and thus do not seek to extrapolate beyond our data, and we use oblique rotation because we expect our management tool factors to be correlated (Field 2013). The EFA resulted in two factors, which we have labeled "organization-oriented tools" and "client-oriented tools." Both factors had acceptable Cronbach's alphas (> .70).
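A minimal sketch of this measurement step, assuming hypothetical survey column names and using the factor_analyzer package for the principal-component extraction with an oblimin (oblique) rotation; Cronbach's alpha is computed directly from its definition:

```python
# Sketch: principal component extraction with an oblique (oblimin) rotation
# on the eight tool-use items, plus Cronbach's alpha for each resulting scale.
# Item/column names are illustrative assumptions, not the survey's actual ones.
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor-analyzer

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

def extract_tool_factors(items: pd.DataFrame, n_factors: int = 2) -> pd.DataFrame:
    fa = FactorAnalyzer(n_factors=n_factors, method="principal", rotation="oblimin")
    fa.fit(items.values)
    # rows: items; columns: factors -- assign each item to its largest loading
    return pd.DataFrame(fa.loadings_, index=items.columns)

# usage sketch:
# loadings = extract_tool_factors(survey[TOOL_ITEMS])
# alpha_org = cronbach_alpha(survey[ORG_ORIENTED_ITEMS])  # expect > .70
```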
Before proceeding with the analysis, we first present a number of descriptive findings at the country level to provide context to our two dependent variables. Figure 2 and Figure 3 provide histograms containing the country averages across our sample for both organization-oriented management tool use and client-oriented management tool use. As these figures show, organization-oriented management tool use seems to be-on average-highest in the United Kingdom and Sweden and lowest in Spain and Hungary. Interestingly enough, these findings do not hold for client-oriented management tool use, with Lithuania and the Netherlands scoring highest and Croatia and Germany scoring lowest. While these findings give some general insights into the role of different administrative systems in explaining management tool use, it is important to note that these figures are based on broad averages across countries, and in-depth comparative case studies should be conducted to better contextualize these findings. Explaining between-country differences is not the focus of this study. Hence, as will be seen later on, we use country fixed effects to account for potential variation in management tool use attributable to country-level variables. 1
Independent Variables
The two theoretical processes at work and the resulting set of hypotheses are operationalized as follows. For normative pressure, we measured private sector working experience as follows: "How many years of work experience outside the public sector do you have? In the private sector?" The categories included none, less than 1 year, 1-5 years, 5-10 years, 10-20 years, and more than 20 years. Having a management education was measured by asking, "What was the subject of your highest educational qualification?," with management/business/economics being one of the categories. To establish the respondent's hierarchical position, we asked, "What kind of position do you currently hold?," with three answer categories: the top hierarchical level in the organization, the second hierarchical level in the organization, and the third hierarchical level in the organization. Goal ambiguity/clarity is a scale variable consisting of four items based on the work of Jung (2011) measured on a seven-point strongly disagree/strongly agree scale: (1) "Our goals are clearly stated," (2) "Our goals are communicated to all staff," (3) "It is easy to observe and measure our activities," and (4) "We mainly measure inputs and processes" (Cronbach's alpha .724). A high score on this variable implies goal clarity, whereas a low score implies goal ambiguity.
Contingency variables included in this study are organizational size, measured as the total number of employees in the organization (<50, 50-99, 100-499, 500-999, 1,000-5,000, >5,000); executive status-whether or not the respondent works for an executive or subordinate government body rather than a ministry-type organization; and goal ambiguity/clarity (see earlier description).
Figure 2 Histogram of Country Averages of Organization-Oriented Management Tool Use
Finally, a number of sociodemographic control variables (gender, age, level of education) were added to control for differences in responses and related nonresponse patterns. Moreover, the policy sectors within which the respondent works were also added to control for management tool use variation related to policy-sectorspecific characteristics. Similarly, country dummies were included to control for country-specific characteristics.
To provide further insights into the types of organizations included in our sample, we added some descriptives concerning our organizational-level variables (namely goal clarity, organizational size, and executive status). First of all, the average score for goal clarity is 4.93 (min = 1, max = 7) with a standard deviation of 1.27. On average, the included organizations tend to have rather clear goals, but there is quite some variation around the mean. Second, most organizations have 100-499 employees (about 35 percent), whereas the other size categories are more evenly distributed (1,000-5,000 employees about 15 percent, 500-999 employees about 14 percent, above 5,000 employees about 13 percent, fewer than 50 employees about 13 percent, and 50-99 employees about 9 percent). Finally, most of the organizations are agencies (about 55 percent), although many ministries are also included in the sample (45 percent).
Common Source Bias
This study uses a single, self-reported survey to measure all variables. This implies that common source bias (CSB) could be an issue. We use the recent recommendations of George and Pandey (2017) to investigate and discuss CSB issues in our data. First, most of our independent variables are demographic characteristics or organizational characteristics that one can expect to be factual (i.e., private sector experience, management education, top hierarchical position, organizational size, executive status of organization). It is unlikely that these variables suffer from CSB. Second, one of our independent variables (i.e., goal clarity/ambiguity) is perceptual and might be influenced by CSB when correlated with perceptions of management tool use. However, the Harman's single factor test on the items underlying these variables does not support the assumption that the correlations are strongly inflated by CSB (i.e., 42 percent of variance explained by single factor). Third, the variables under investigation in our article are not part of the set of variables argued to suffer from CSB by previous public administration articles-we should not assume CSB to be a prima facie inflator of our correlations. Fourth, the survey used in our analysis is the first to measure our variables across European countries and with top public sector executives-there is simply no archival data available for us to use in substitute of the survey.
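The Harman's single-factor check mentioned above can be sketched as follows: extract the first principal component of the pooled survey items and inspect its share of explained variance (the item matrix is a hypothetical placeholder):

```python
# Sketch of Harman's single-factor test: if one component explains a dominant
# share of variance across all items (rule of thumb: > 50 percent), common
# source bias is a plausible concern; here the reported figure is ~42 percent.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def harman_single_factor(item_matrix: np.ndarray) -> float:
    z = StandardScaler().fit_transform(item_matrix)  # standardise items
    return PCA(n_components=1).fit(z).explained_variance_ratio_[0]
```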
Statistical Analysis
We use OLS regression analysis to test the hypotheses. In what follows, we present some of the essential information underlying our choice for this technique as recommended by Lee, Benoit-Bryan, and Johnson (2012). Although the items of our dependent variables were initially measured on an ordinal scale, the computed overall score averages across items and is no longer ordinal, so a linear model is preferred. Before conducting the OLS regression analysis, we need to ensure that our model adheres to the assumptions underlying OLS regression analysis.
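As a sketch, the fixed-effects specification described in this article could be written with statsmodels' formula API; all variable names below are illustrative assumptions about the data set's column names:

```python
# Sketch of the country and sector fixed-effects OLS specification.
# Predictor names are illustrative; C(...) adds dummy (fixed-effects) terms.
import statsmodels.formula.api as smf

formula = (
    "org_tool_use ~ private_sector_exp + mgmt_degree + top_level"
    " + goal_clarity + org_size + executive_status"
    " + gender + age_group + education + C(country) + C(sector)"
)
# model = smf.ols(formula, data=survey).fit()   # `survey` is the data frame
# print(model.summary())
```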
Figure 3 Histogram of Country Averages of Client-Oriented Management Tool Use
First, we assess potential issues of multicollinearity by investigating the variance inflation factors (VIFs) of the included predictors. None of the observed VIFs exceeds a value of 10, indicating that multicollinearity is not an issue. An explanation for this is related to the high variation in public organizations across the sample, with very large agencies but also very large ministries, and an allocation of certain delivery tasks (with high goal clarity) to agencies in some systems but to ministries in others. Second, we assess the normality assumption of the regression residuals by looking at the normal P-P plot of the residuals as well as the associated histogram. Both figures indicate normally distributed residuals. The normality assumption is thus satisfied as well.
Third, we address the assumption of individual independence. The public sector executives in our sample are nested in countries, sectors, and organizations-they are not independent. We include dummies for country and sector to account for clustering at these levels (i.e., a country and sector fixed-effects model). Because we do not have data on the organization to which executives belong, we cannot account for clustering at the organizational level. Hence, our hypotheses based on contingency theory could suffer from type I error because these organizational level variables are "stretched" to the individual level. To provide credence to our findings, we included a robustness check: we ran the model again but only included data from the highest hierarchical level of public sector executives (i.e., 995 respondents). There should be very few of these respondents who share the same organization, thus strongly minimizing type I error. If our robustness check does not differ from our original findings concerning contingency theory, we consider these findings robust.
Fourth, we test for the absence of heteroscedasticity-or "fanning out"-of our residuals by looking at the individual scatter plots and conducting the Breusch-Pagan test. No indication of heteroscedasticity is present. Finally, we test for influential observations by calculating the Cook's distance. No distance is above the cutoff point of 1. We can now move on to the actual results of our analyses.
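A self-contained sketch of these four diagnostics with statsmodels, using synthetic stand-in data so the snippet runs on its own:

```python
# Sketch of the four OLS diagnostics: VIF, residual normality (Q-Q/P-P plot),
# Breusch-Pagan, and Cook's distance, run on synthetic stand-in data.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(500, 3)))           # stand-in predictors
y = X @ np.array([1.0, 0.5, -0.3, 0.2]) + rng.normal(size=500)
res = sm.OLS(y, X).fit()

# 1. Multicollinearity: VIFs per predictor (flag values above 10).
vifs = [variance_inflation_factor(X, i) for i in range(1, X.shape[1])]

# 2. Normality of residuals: inspect the quantile plot visually.
# sm.qqplot(res.resid, line="45")

# 3. Heteroscedasticity: Breusch-Pagan (small p-value means "fanning out").
_, bp_pvalue, _, _ = het_breuschpagan(res.resid, X)

# 4. Influential observations: Cook's distance, cutoff point of 1.
cooks_d = res.get_influence().cooks_distance[0]
print(vifs, bp_pvalue, cooks_d.max())
```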
Results
Two country and sector fixed-effects OLS regression models are presented in table 2. The first model includes the use of organization-oriented management tools as the dependent variable, and the second model includes the use of client-oriented management tools as dependent variable. Both models are statistically significant. Model 1 explains almost half of the variation in the use of organization-oriented management tools, and model 2 explains about one-third of the variation in the use of clientoriented management tools.
Looking at the normative pressures, our analyses indicate that only one of our four hypotheses can be accepted. Private sector experience has little impact on management tool use. 2 Only when public sector executives have more than 20 years of private sector experience are they more likely to use client-oriented management tools. In all other cases, private sector experience has a limited part to play, resulting in the rejection of hypothesis 1. The impact of having a management degree is also not significant. 3 Public sector executives who studied business, management, or economics are not more likely to use management tools (i.e., rejection of hypothesis 2). However, as expected based on normative isomorphism, public sector executives at the highest hierarchical level of their organization are more likely to use both organization-oriented and client-oriented management tools compared with those at lower levels (i.e., acceptance of hypothesis 3).
Finally, goal ambiguity is not a significant positive predictor of management tool use-rather, the exact opposite is the case. When assessing the role of goal clarity/ambiguity in management tool use, the arguments of contingency theory are more applicable than those of normative isomorphism: goal clarity positively relates to management tool use as opposed to negatively (i.e., acceptance of hypothesis 4b and rejection of hypothesis 4a). Importantly, when looking at the standardized regression coefficients of all predictors, we find that goal clarity is by far the strongest predictor of both organization-oriented and client-oriented management tool use.
Apart from the role of goal clarity, the other two hypotheses based on contingency theory are also accepted. Public sector executives working in bigger organizations are more likely to use management tools (i.e., acceptance of hypothesis 5), and the same holds for those working in public organizations with executive status (i.e., acceptance of hypothesis 6). Looking at our controls, gender and education level have limited impact, but age does have a significant impact-with public sector executives aged 56-65 being most likely to use organization-oriented and client-oriented management tools.
To check for potential type I errors in our analyses of the contingency-theory-based variables (see "Statistical analysis"), we reran the foregoing models but this time only used data from the executives at the highest hierarchical level of their organization (i.e., 995 respondents). In both models, the exact same results were uncovered (i.e., goal clarity significant positive and strongest predictor, organizational size significant positive predictor, and executive status significant positive predictor). This robustness check confirms the validity of our initial findings and the limited impact of type I error.
Discussion
This article used a cross-country, large-N survey of European public sector executives to answer the following question: which public sector executives are particularly prone to the use of management tools? Hypotheses were defined based on normative isomorphism and contingency theory, and two country and sector fixed-effects OLS regression models were used to test these hypotheses. The models indicate that public sector executives who have the highest hierarchical positions in their organization and work in bigger organizations with perceived goal clarity and executive status are more prone to using management tools. This finding implies that, in the particular case of management tool use by public sector executives, contingency theory outperforms normative isomorphism. There are several implications of these findings for public management theory and practice.
Although normative isomorphism is argued to be a potent framework to predict management tool use in organizational theory (Powell and DiMaggio 1991), our findings do not support its importance. Management education and private sector experience did not seem to spark a need for more management tool use by the surveyed public sector executives-despite our argument that these normative pressures would generate a sense of belonging to the management profession and thus a norm to use management tools. Nevertheless, one normative pressure did have an important part to play: hierarchical level. In line with our argument, public sector executives at the highest hierarchical level of their organization can be argued to belong to a professional group of top-level public managers who are strongly scrutinized by the public and by politicians and thus more likely to use management tools in their search for legitimacy. This finding gives some credence to the applicability of normative isomorphism in explaining the use of public management practices (complementing the findings of, e.g., Decramer et al. 2012).
(Notes to table 2: Dummies for country and sector were included to control for the variation attributable to country- or sector-level variables; these are not presented in the table. + p < .10; * p < .05; ** p < .01; *** p < .001.)

We would like to emphasize that although our findings on normative isomorphism are not particularly potent, there are other isomorphic pressures that we did not investigate. Specifically,
we encourage future research to include the role of coercive isomorphism (i.e., formal rules and regulations) and mimetic isomorphism (i.e., copying successful organizations) in explaining management tool use within the public sector (Powell and DiMaggio 1991). Previous studies have indicated the importance of these pressures for public management practices (e.g., Ashworth, Boyne, and Delbridge 2009;George, Baekgaard et al. 2018).
We join the findings of Laegreid, Roness, and Rubecksen (2007) by uncovering the importance of contingency theory over normative isomorphism. Indeed, all three hypotheses based on contingency theory were accepted (Donaldson 2001). Public sector executives working in bigger organizations are more likely to use management tools because of the capacity underlying such organizations. Size has been argued to be a proxy for professionalization-that is, bigger public organizations are more professionalized-and our findings give further credence to this assumption (e.g., Andrews and Boyne 2010;Jung 2013). Moreover, because of the standardization of practices and direct contact with clients, public sector executives working in public organizations with executive status are more likely to indicate management tool use. Future research could assess why. In particular, it would be interesting to uncover the conditions under which public sector executives working in ministries are still likely to use management tools, as well as those working in smaller public organizations.
Throughout our models, one contingency variable emerged as the strongest predictor of management tool use: perceived goal clarity. Those public sector executives who perceived their organization to have clear goals were also more likely to use management tools. The importance of this finding should not be underestimated. Goal clarity is not always a typical characteristic of public organizations-quite the opposite has been argued (e.g., Chun and Rainey 2005; Jung 2011). Nonetheless, evidence has emerged arguing that goal clarity is an antecedent of public service performance and clearly matters (e.g., Jung 2014). Our study indicates that goal clarity also has a beneficial impact on management tool use, and this finding could indicate that management tool use is a mediating variable in the goal clarity/public service performance relation. Indeed, it could be that goal clarity, in part, contributes to public service performance because it enhances management tool use in public organizations. This observation is, at the moment, speculative, and we encourage future research to look into management tool use as a potential mediator in the goal clarity/public service performance relation.
An alternative view on the relation between goal clarity and management tool use, but one that cannot be tested in the framework of the current article, theory, and data, is that using management tools reduces ambiguity in the organization by forcing its operations within a common mold. In other words, there could be some reversed causality such that using management tools might help create goal clarity-thus allowing managers to be more reflexive and able to manage ambiguity. At the moment, however, this is speculative, and we encourage future work to explore this theoretical assumption.
Our findings have clear implications for policy makers and public managers. In the past couple of decades, public management reforms have become widespread in public organizations at all levels of government (Diefenbach 2009). Our results suggest that one standardized approach to implementing and assessing the progress of these reforms is unrealistic. Different organizations require different support and guidelines. It might be easier for bigger organizations with clear goals, standardized activities, and contact with clients to use management tools, whereas other organizations might struggle to meet the requirements and require more training as well as resources. Similarly, public managers should take into account the context in which they work and understand that this context will influence their intent to implement new managerial practices. We encourage a contingency approach to management tools in public organizations, in which policy makers and public managers investigate the context in which they work and adapt their reform initiatives and practices accordingly (see, e.g., Bryson, Berry, and Yang 2010; Poister, Pitts, and Edwards 2010; Woods 2009).
Limitations
Although our article is one of the first to investigate the determinants of management tool use by public sector executives across 18 European countries, some limitations need to be acknowledged. First, although we argued that CSB is not much of an issue in our analyses, using a cross-sectional survey does imply issues of endogeneity. Our findings are limited to associations, and we cannot make statements on causality (George and Pandey 2017). We suggest that future research address this issue by using, for instance, research designs based on difference-in-differences or longitudinal analyses.
Second, we chose to use cross-country data as a means to generalize our findings beyond specific countries. We did not aim to explain between-country variance, and we encourage other authors to investigate that variance by looking at country-level variables. The external contingencies of public organizations are different depending on the country. Such contingencies include the political environment (e.g., more conservative versus more liberal political leadership), administrative system and culture (e.g., western versus southern European traditions), or external pressures to implement savings (e.g., austerity regimes). Also, the ways in which managers are recruited may differ substantially (e.g., having a central independent body responsible for recruitment versus political appointments), which means that the group of top public managers in one country may display more of the characteristics of being a profession than that in other countries. It is therefore important for future studies to look in more detail at country-level determinants (see also Jeannot, Van de Walle, and Hammerschmid 2018).
Third, because of anonymity issues, we cannot assign respondents to specific organizations. This implies that the organizational-level variables in our model (i.e., organizational size and organizational type) might suffer from type I error (false positives) because these data are "stretched" to the individual level (Hox 2010)-although our robustness check did not identify substantial type I error.
In addition, such anonymity made it difficult to collect data on professional associations and networks to allow for a more finegrained operationalization of normative isomorphic pressures and also inhibited further analysis of the exact tasks of the organization as well as the type of agency involved. We encourage future research to offer a more detailed operationalization of normative isomorphism and contingency theory-including, for instance, the profession the manager comes from and their exposure to OECD or other European networks as well as different agency types (e.g., delivery versus advisory function).
Finally, we focused on formal education as a measurement of normative pressures. However, many training initiatives concerning management tools exist in government-although not necessarily resulting in formal degrees. This might imply that, while an executive does not have a formal education in management, business, or economics, he or she may still be very knowledgeable about management tools because of a series of trainings and seminars. Future research can address this limitation by not only focusing on formal degrees but also assessing the impact of trainings, seminars, and other lifelong learning initiatives.
Conclusion
Management tool use is often argued to be the result of normative pressures experienced by managers as well as the contingencies of the organizations in which they work. In our study on public sector executives from 18 European countries, we find that organizational contingencies matter more than normative pressures in explaining management tool use. This suggests a better applicability of contingency theory over neo-institutional theories when predicting management tool use in a public sector setting. Future research can build on this finding and juxtapose other institutional pressures and contingencies than those included in our survey-including coercive and mimetic pressures or contingencies related to the environment of the organization. For practice, these findings suggest that public management reforms cannot neglect context: smaller public organizations with goal ambiguity and nonexecutive status are less inclined to adopt these tools and might require more training and support.
Funding
The research leading to these results received funding from the European Union's Seventh Framework Program under grant agreement No. 266887 (Project COCOPS), Socio-economic Sciences and Humanities.
Exact four-point function and OPE for an interacting quantum field theory with space/time anisotropic scale invariance
We identify a nontrivial yet tractable quantum field theory model with space/time anisotropic scale invariance, for which one can exactly compute certain four-point correlation functions and their decompositions via the operator-product expansion(OPE). The model is the Calogero model, non-relativistic particles interacting with a pair potential $\frac{g}{|x-y|^2}$ in one dimension, considered as a quantum field theory in one space and one time dimension via the second quantisation. This model has the anisotropic scale symmetry with the anisotropy exponent $z=2$. The symmetry is also enhanced to the Schr\"odinger symmetry. The model has one coupling constant $g$ and thus provides an example of a fixed line in the renormalisation group flow of anisotropic theories. We exactly compute a nontrivial four-point function of the fundamental fields of the theory. We decompose the four-point function via OPE in two different ways, thereby explicitly verifying the associativity of OPE for the first time for an interacting quantum field theory with anisotropic scale invariance. From the decompositions, one can read off the OPE coefficients and the scaling dimensions of the operators appearing in the intermediate channels. One of the decompositions is given by a convergent series, and only one primary operator and its descendants appear in the OPE. The scaling dimension of the primary operator we computed depends on the coupling constant. The dimension correctly reproduces the value expected from the well-known spectrum of the Calogero model combined with the so-called state-operator map which is valid for theories with the Schr\"odinger symmetry. The other decomposition is given by an asymptotic series. The asymptotic series comes with exponentially small correction terms, which also have a natural interpretation in terms of OPE.
Introduction
The concept of the renormalisation group underlies the universality in various critical phenomena [1]. A quantum field theory with (isotropic) scale invariance is a fixed point of the renormalisation group flow in the space of quantum field theories and represents a universality class.
Quantum field theories invariant under the space/time anisotropic scale transformation,

$x \to \alpha x, \qquad t \to \alpha^z t, \qquad (1.1)$

are also of interest. The exponent $z \neq 1$ characterises the degree of anisotropy of the system. These theories are also fixed points of the renormalisation group flow in the generalised theory space of anisotropic quantum field theories. 1 Because of this, these theories are also quite universal. There are many applications of quantum field theory models with anisotropic scale invariance. To illustrate the richness of the applications, let us list a few examples: dynamical critical phenomena in which time-dependent fluctuations around a critical point are considered [2,3], quantum critical phenomena [4], more general non-equilibrium critical phenomena such as the directed percolation universality class [5][6][7] relevant for the onset of turbulence [8,9], and the KPZ universality class in the surface growth phenomena [10]. Lucid introductions to these topics can be found in [11,12]. Another active area of research, with z = 2, is the BEC/BCS crossover (also called the fermions at unitarity), systems of non-relativistic spin-1/2 fermions with a fine-tuned contact interaction, which can be experimentally realised in cold atom systems [13][14][15][16][17].
The operator-product expansion (OPE) [18,19],

$O_i(x)\, O_j(0) = \sum_k C_{ij}{}^{k}(x)\, O_k(0), \qquad (1.2)$

where $O_i$ are local operators, summarises the short-distance physics of a quantum field theory, and is both useful and conceptually important. Consistency of successive OPEs imposes constraints on the theory, called the OPE associativity or the crossing symmetry. For the isotropic case, in particular, when the scale symmetry is enhanced into the conformal symmetry [20], the constraints are often so powerful that consideration of them alone almost fixes the theory itself. This approach, originally conceived by Polyakov [21], is called the conformal bootstrap program. It had remarkable success for quantum field theories in two spacetime dimensions as pioneered by the fundamental work by Belavin, Polyakov and Zamolodchikov [22]. In recent years, starting with [23], it has become clear that the program can be successful also in higher spacetime dimensions. See e.g. [24,25] for recent reviews. It is natural to ask whether a similar bootstrap approach can be successful for anisotropic theories. Theories with z = 2 would be the first target since in this case the scale symmetry can be extended to a larger symmetry, called the Schrödinger symmetry [26,27]. (The basic properties of the Schrödinger symmetry are briefly summarised in appendix A.) This enhancement is analogous to the enhancement of the scale invariance to the conformal symmetry which occurs for many interesting isotropic theories. 2 In particular, if the Schrödinger symmetry is present, one can classify the local operators into primary operators and their descendants (those operators obtained by acting with spacetime derivatives on the primary operators), where the primary operators are defined by requiring that they commute with certain generators of the Schrödinger symmetry. 3 For the isotropic case, the representation theory of the conformal symmetry, including the classification of operators into primary operators and their descendants, is a key tool in the conformal bootstrap program. The analogous representation theory of the Schrödinger symmetry relevant for the classification of operators can be found in [30,31] and references therein. Constraints on the correlation functions imposed by the Schrödinger symmetry, analogous but less restrictive compared to the isotropic case, are derived by Henkel [32,33].
Somewhat surprisingly, the study of OPE for theories with anisotropic scale invariance started only relatively recently [34,35]. We expect the OPE for anisotropic theories to present new features since the short-distance behaviours of isotropic and anisotropic theories are markedly different. For example, the behaviour of the two-point function of scalar primary operators in z = 1 conformal field theory (CFT) is

$\langle O(x)\, \bar O(0) \rangle \propto \frac{1}{|x|^{2\Delta_O}}, \qquad (1.3)$

whereas in z = 2 Schrödinger invariant theory, it is

$\langle O(t,x)\, \bar O(0,0) \rangle \propto \frac{\theta(t)}{t^{\Delta_O}}\, \exp\!\Big(-N_{\bar O}\, \frac{x^2}{2t}\Big). \qquad (1.4)$

Here, $\bar O$ is the complex conjugate of the operator O, and $N_{\bar O} > 0$ is the U(1) charge, which is contained in the Schrödinger symmetry, 4 of the operator O. Thus, the behaviour in the limit t → 0, x → 0 in the anisotropic theory depends heavily on the precise manner of taking the limit and is more involved compared to the isotropic case.

2 See [28] for a review of the criteria for symmetry enhancement in the isotropic case. The general criteria for the enhancement of z = 2 scale invariance to the Schrödinger symmetry are not understood. Discussion of this issue for a class of models can be found in [28,29].

3 To be precise, a primary operator O(0, 0) can be characterised by the conditions [C, O(0, 0)] = 0 and [K_i, O(0, 0)] = 0 in the notation explained in appendix A. We note that, in principle, one can define the concept of primary operators indirectly even if both the conformal and Schrödinger symmetries are absent (thus even if z ≠ 1 and z ≠ 2) by the condition that a primary operator can never be obtained as a spacetime derivative of other fields.
Because of this difference, it is important to understand general questions regarding the OPE in the anisotropic theories such as "What are the convergence properties of the OPEs?" and "Does the operator associativity hold?".
For this purpose, it would be useful to have exactly solvable yet nontrivial examples of quantum field theory models. For the isotropic case, the two-dimensional Ising model and the massless Thirring model (which is equivalent to the compactified free-boson CFT via bosonisation) played an instrumental role when the ideas of OPE and the anomalous dimensions were established [18,19,36-38]. Exactly solvable models also gave substantial support to the development of two-dimensional conformal field theory [22]. We may hope that study of exactly solvable anisotropic models may play a similar role in the understanding of the z ≠ 1 fixed points of the renormalisation group.
In this paper, we identify an interacting yet highly tractable model with anisotropic z = 2 scale invariance and its extension to the Schrödinger symmetry. 5 The model is the well-known Calogero model [39][40][41][42][43] considered as a quantum field theory in one space and one time dimension via the second quantisation.
We exactly compute the nontrivial four-point function of the fundamental fields of the theory, $\langle \Psi(t_4, x_4)\, \Psi(t_3, x_3)\, \bar\Psi(t_2, x_2)\, \bar\Psi(t_1, x_1) \rangle$. The result takes a particularly simple form when $t_1 = t_2 = 0$ and $t_3 = t_4 = t$. It is expressed in terms of the modified Bessel function. We call these special four-point functions "pairwise equal-time". For the generic case, we give an expression of the four-point function in terms of a double convolution integral involving the pairwise equal-time four-point function and the propagator of non-relativistic free particles. The double convolution integral can also be evaluated using a generalised hypergeometric function.
We decompose the pairwise equal-time four-point function in two different ways via OPE, thereby explicitly verifying the associativity of the OPE for the first time for an interacting quantum field theory with anisotropic scale invariance. From the decomposition, one can read off the OPE coefficients and the scaling dimensions of the operators appearing in the intermediate channel.
One of the decompositions is obtained by expanding the pairwise equal-time four-point function in the parameter $x^2/t$, where $x$ refers collectively to the spatial coordinates $x_1, \dots, x_4$. This decomposition arises from the OPE of $\bar\Psi(0, x_2)\bar\Psi(0, x_1)$ and of $\Psi(t, x_4)\Psi(t, x_3)$. The decomposition can be schematically represented as

$\Psi_4 \Psi_3\, \bar\Psi_2 \bar\Psi_1 \;\sim\; \sum_{\mathcal O} \big[\Psi_4 \Psi_3 \to \bar{\mathcal O}\big]\, \big[\bar\Psi_2 \bar\Psi_1 \to \mathcal O\big], \qquad (1.5)$

where the subscripts of $\Psi$ and $\bar\Psi$ are the labels of the spacetime points. We will call this expansion the "s-channel" decomposition of the four-point function. The expansion is convergent. Only one primary operator (together with its descendants) appears in the intermediate channel. Thus the four-point function is the analogue of the conformal block which plays an important role in the conformal bootstrap program. The primary operator has U(1) charge 2. The scaling dimension of the primary operator depends on the coupling constant of the theory. The result is consistent with the well-known energy spectrum of the Calogero model, combined with the so-called state-operator map [44,45], a relation between the scaling dimensions of the operators of a system with the Schrödinger symmetry and the energy spectrum of the theory put in an external harmonic oscillator potential.
The other decomposition is the expansion of the four-point function in $t/x^2$. This decomposition corresponds to the OPE of $\Psi(t, x_3)\bar\Psi(0, x_1)$ and of $\Psi(t, x_4)\bar\Psi(0, x_2)$ (here $t_1 = t_2 = 0$ and $t_3 = t_4 = t$ are assumed), and we call it the "t-channel" decomposition,

$\Psi_4 \Psi_3\, \bar\Psi_2 \bar\Psi_1 \;\sim\; \sum_{\mathcal O,\, \mathcal O'} \big[\Psi_3 \bar\Psi_1 \to \mathcal O\big]\, \big[\Psi_4 \bar\Psi_2 \to \mathcal O'\big]. \qquad (1.6)$

We found that this decomposition is an asymptotic expansion. The asymptotic nature may be understood intuitively as follows. As can be seen, for example, in (1.4), correlation functions in a Schrödinger invariant theory generically involve exponential factors of the form $e^{-a\frac{x^2}{t}}$, where a is a numerical constant. These exponential factors play the role of the "instanton effect" if we think about $t/x^2$ as the "coupling constant". As is well-known, the asymptotic nature of a perturbation series is inherently related to the existence of the non-perturbative "instanton effect". 6 (See, for example, [51].) Thus, one could have anticipated the asymptotic nature of the expansion in $t/x^2$ from the presence of the factors $e^{-a\frac{x^2}{t}}$ in Schrödinger invariant theories.

6 For the isotropic case, scale-invariant theories have convergent OPEs [46][47][48] whereas for general quantum field theories without scale invariance OPEs are asymptotic [49]. One explanation of this is as follows. (See the discussion below (2.11) of [50].) If the OPE (which is an expansion in terms of x) is asymptotic, it would imply the existence of the non-perturbative "instanton" effect of the form $e^{-l^2/x^2}$, where l is a length scale. This is impossible for scale-invariant theories, hence OPEs cannot be asymptotic for these theories, whereas theories without the scale invariance have asymptotic OPEs. Our intuitive understanding of the asymptotic nature of the "t-channel" decomposition is reminiscent of this explanation.

The operators appearing in the intermediate channel of the "t-channel" decomposition have vanishing U(1) charges. The charge-zero operators are important in particular because they include currents associated with any internal symmetry (including the U(1) symmetry in the Schrödinger symmetry) and the energy-momentum tensor. But they are elusive since the technique of the state-operator map is not applicable for them. The charge-zero sector is studied from the perspective of Schrödinger symmetry (and its infinite extension for specific models, the fermion at unitarity) in [52] and [53]. We study these charge-zero operators directly via the decomposition of the four-point function. For example, we will show that some charge-zero operators have non-vanishing two-point functions only if they are put on the same time slice.
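The mechanism can be made concrete in a toy numerical experiment. The modified Bessel function $I_\nu(z)$ (in terms of which, as mentioned above, the four-point function is expressed, with large z corresponding to small $t/x^2$) has a textbook large-z asymptotic series whose optimally truncated error plateaus at a level of order $e^{-2z}$, i.e. an exponentially small correction of exactly the kind discussed here. The following sketch (a numerical illustration only, not the computation of this paper) compares truncations of this series with the exact value:

```python
# Sketch: truncations of the large-z asymptotic series of I_nu(z),
#   I_nu(z) ~ e^z / sqrt(2 pi z) * sum_k (-1)^k a_k(nu) / z^k,
#   a_k(nu) = prod_{j=1..k} (4 nu^2 - (2 j - 1)^2) / (8 j),
# versus the exact value: the error stops shrinking at an optimal order and
# plateaus at a level of order e^{-2z} -- the exponentially small correction.
import numpy as np
from scipy.special import iv

def iv_asymptotic(nu: float, z: float, kmax: int) -> float:
    term, total = 1.0, 1.0
    for j in range(1, kmax + 1):
        term *= -(4 * nu**2 - (2 * j - 1) ** 2) / (8 * j * z)
        total += term
    return np.exp(z) / np.sqrt(2 * np.pi * z) * total

nu, z = 0.8, 8.0
exact = iv(nu, z)
rel_errors = [abs(iv_asymptotic(nu, z, k) - exact) / exact for k in range(25)]
print(int(np.argmin(rel_errors)), min(rel_errors), np.exp(-2 * z))
```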
The asymptotic expansion comes with exponentially small correction terms, which also can be interpreted naturally in terms of OPE: we found that the exponentially small terms are inherently related to the "u-channel" contributions arising from OPEs of $\Psi(t, x_3)\bar\Psi(0, x_2)$ and of $\Psi(t, x_4)\bar\Psi(0, x_1)$. These terms can be schematically represented as

$\Psi_4 \Psi_3\, \bar\Psi_2 \bar\Psi_1 \;\sim\; \sum_{\mathcal O,\, \mathcal O'} \big[\Psi_3 \bar\Psi_2 \to \mathcal O\big]\, \big[\Psi_4 \bar\Psi_1 \to \mathcal O'\big]. \qquad (1.7)$

Some general properties of the OPE in the z = 2 Schrödinger invariant theory have been uncovered in recent years [45,52,54]. Golkar and Son pointed out in [52], among other important results, that the restrictions imposed by the symmetry on the correlation functions become much stronger if one of the operators saturates the unitarity bound. Goldberger, Khandker and Prabhu proved the convergence of the OPE for the case when the operators in the intermediate channel have nonzero U(1) charges [45]. Pal studied Schrödinger invariant field theories focusing on the SL(2, R) subgroup of the Schrödinger symmetry and uncovered properties of correlation functions of operators which are aligned on a timelike line [54]. In particular, it was shown that the OPE relevant for these correlation functions converges even when the OPE involves charge-zero operators.
The results in this paper obtained for a particular solvable model confirm and supplement these general results. We compute the explicit OPE coefficients and show that the OPE converges for the "s-channel" OPE decomposition associated with charge-two operators in the intermediate channel. This is consistent with the results in [45]. The spacetime dependence of the three-point function we compute by pinching two insertions in the four-point function agrees with the result of [52] based on the Schrödinger symmetry.
On the other hand, we found novel features which presumably are shared by general Schrödinger invariant theories. The OPE decomposition associated with the "t-channel" OPE (involving charge-0 operators) is asymptotic, rather than convergent. This does not contradict the results of Pal [54]. We are studying different correlation functions: we consider the case where the operators are spatially separated, whereas in [54] the operators are separated only in the timelike direction.
The organisation of this paper is as follows. In section 2, we discuss the model and establish the notation. In section 3, we describe the computation of the four-point functions of fundamental fields in the model. Section 4 is devoted to what can be read off from the four-point function. We will decompose the four-point function via OPE in two ways (the "s-channel" and "t-channel" decompositions). We examine the detailed properties of these decompositions, including the identification of the unique primary operator (whose scaling dimension depends on the coupling constant) and the computation of the OPE coefficients in the "s-channel" decomposition. We discuss the asymptotic nature of the "t-channel" decomposition and the exponentially small corrections for the asymptotic series, which can be interpreted as the "u-channel" contributions. We also compute a three-point function by starting from the four-point function using OPE. Section 5 contains final comments. Several appendices give auxiliary results.
The model
The Hamiltonian of the Calogero model (or the Calogero-Marchioro model) [39][40][41][42][43] in the first quantised formulation is 7

$H = -\frac{1}{2}\sum_{i=1}^{n} \frac{\partial^2}{\partial x_i^2} + \sum_{i<j} \frac{g}{(x_i - x_j)^2}. \qquad (2.1)$

We work in the convention where the mass of the particle is set to unity. For $x_j - x_i \to 0$, the solution to the Schrödinger equation behaves as $\Psi \sim |x_j - x_i|^{\lambda}$, where $g = \lambda(\lambda - 1)$. The coupling constant g should satisfy $g \ge -\frac{1}{4}$ in order that the energy spectrum be bounded below [59, section 35]. Solutions with $0 \le \lambda$ are considered as acceptable. In the regime $-\frac{1}{4} < g < 0$, there are two solutions satisfying $\lambda > 0$ for given g. Corresponding to these two possible boundary conditions, we have two different theories. 8 Thus $\lambda \ge 0$ provides a good parametrisation of the interacting theory. For $\lambda = 0$ and $\lambda = 1$, the pair potential vanishes. These points are equivalent to the free bosons and free fermions (or equivalently, bosons interacting with the infinitely large repulsive δ-function potential), respectively. It is also convenient (to conform with the convention used for the Bessel functions) to use another parameter ν defined by

$\nu = \lambda - \frac{1}{2}. \qquad (2.2)$

In the second quantised formulation, the action is

$S = \int dt\, dx \Big( \bar\Psi\, \partial_t \Psi + \frac{1}{2}\, \partial_x \bar\Psi\, \partial_x \Psi \Big) + \frac{g}{2} \int dt\, dx\, dy\, \frac{\bar\Psi(t,x)\bar\Psi(t,y)\Psi(t,y)\Psi(t,x)}{(x-y)^2}. \qquad (2.4)$

7 The term "Calogero model" often refers to particles interacting via a pairwise potential of the form $V(r) = g/r^2 + ar^2$, or equivalently, particles interacting via a pairwise potential $V(r) = g/r^2$ put in an external harmonic oscillator potential. The model we consider can also be considered as the infinite volume limit (with the total number of particles, not the density, fixed) of the Sutherland model [55][56][57][58], particles on a circle (with radius R) interacting with a pairwise potential of the form $V(r) = g/(R^2 \sin^2 \frac{r}{R})$.

8 The possibility of considering the branch with the smaller value of λ was discussed already in [55]. For a review of the Calogero and related models containing an explanation of this point, see [60].
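For illustration, the two boundary-condition branches discussed above can be tabulated with a few lines of code (a simple numerical sketch of the formulas just quoted):

```python
# Sketch: the two boundary-condition branches lambda_± solving
# g = lambda (lambda - 1), and the Bessel parameter nu = lambda - 1/2.
# Both branches have lambda > 0 only in the window -1/4 < g < 0.
import numpy as np

def lambda_branches(g: float) -> tuple[float, float]:
    assert g >= -0.25, "energy spectrum unbounded below for g < -1/4"
    root = np.sqrt(1 + 4 * g)
    return (1 + root) / 2, (1 - root) / 2

for g in (-3 / 16, 0.0, 2.0):
    lam_plus, lam_minus = lambda_branches(g)
    print(g, lam_plus, lam_minus, "nu:", lam_plus - 0.5, lam_minus - 0.5)
# g = -3/16: lambda = 3/4 or 1/4 (two theories); g = 0: lambda = 1 or 0 (free)
```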
We consider the Euclidean statistical field theory in this paper. The canonical (anti-)commutation relations are

$[\Psi(x), \bar\Psi(y)]_{\mp} = \delta(x - y), \qquad [\Psi(x), \Psi(y)]_{\mp} = [\bar\Psi(x), \bar\Psi(y)]_{\mp} = 0,$

where the signs are chosen according to whether we consider the bosonic or the fermionic model. We wish to note however that, as is well known, in the Calogero model, the difference between the bosonic and the fermionic theory is not important in the following sense. 9 One can solve the Schrödinger equation of the model in the n-particle sector with the restriction $x_1 < x_2 < \cdots < x_n$, imposing the correct boundary condition $\Psi \sim (x_{i+1} - x_i)^{\lambda}$ when $x_{i+1} - x_i \to +0$. This is sufficient for the understanding of the properties of the Calogero model. Note that the boundary condition on Ψ implies that there is no tunnelling amplitude of a particle (for λ > 0), say particle 1, from the region $x_1 < x_2$ to the region $x_1 > x_2$; the wave function vanishes at $x_1 = x_2$. One can define the wave function for the regions where the condition $x_1 < x_2 < \cdots < x_n$ is not satisfied, by complete symmetrisation or anti-symmetrisation for bosons or fermions, respectively. Whether one is dealing with bosons or fermions does not affect physical observables such as the energy levels (when an external harmonic oscillator potential is present) of the system. In our analysis, we also found that, for example, the four-point functions are the same for fermions and bosons, provided that the ordering of the particles is properly specified. We will work both with the bosonic and the fermionic models throughout this paper, except when otherwise explicitly stated.
The Calogero model possesses the Schrödinger symmetry, as first shown in [62]. Thus the model constitutes a fixed line of the renormalisation group parametrised by ν ≥ -1/2. The special significance of the 1/r² potential energy with regard to scale invariance was noted also in [63, 64]. The U(1) charge in the Schrödinger symmetry is given by

N = \int \bar\Psi \Psi \, dx,   (2.8)

and coincides with the particle number. Although the Lagrangian (2.4) is non-local, we will show that this theory has local OPEs. This is not too surprising; there are examples of quantum field theories with non-local interactions which nonetheless exhibit critical properties described by a fixed point of the renormalisation group and can be studied by the OPE and the conformal bootstrap, such as systems with a non-local dipole-dipole interaction [65, 66] and Ising models with a non-local interaction term [67-70].
We will study the correlation functions of the model around the true vacuum, i.e. the state in which no particles are present. The simplest of such correlation functions is the two-point function of the fundamental fields,

\langle \Psi(t,x)\, \bar\Psi(0,0) \rangle = \theta(t)\, \frac{1}{\sqrt{2\pi t}}\, e^{-x^2/(2t)}.   (2.9)

The U(1) charges of the fundamental fields are N_Ψ = -1, N_{Ψ̄} = +1. The two-point function (2.9) is not renormalised, i.e. it agrees with the free-theory result.
In particular, the fields Ψ, Ψ̄ have scaling dimension 1/2. See (1.4). This non-renormalisation is a consequence of the fact that the two-point functions are associated only with one-particle states, and one-particle states by construction are not affected by the interaction term. (There are no amplitudes to create virtual particles starting from the one-particle states in the model. Also, there are no vacuum polarisation effects.) General correlation functions around the true vacuum are, of course, nontrivial and contain dynamical information about the model, as we will see in later sections of this paper.
Correlation functions around the true vacuum are different from the correlation functions around the "finite-density vacuum" (the ground state with a constant finite density of particles) of the Calogero model, which have been extensively studied; see, for example, [71] and references therein. The reason we study the correlation functions around the true vacuum in this paper is that we are interested in the z = 2 scale-invariant correlation functions; the presence of the nonzero density breaks the z = 2 scale invariance spontaneously.
3 Four-point function
In this section, we will compute the nontrivial four-point function of the fundamental fields of the model described in the previous section.
3.1 Pairwise equal-time four-point function
The four-point function can be easily computed in the special, pairwise equal-time case, i.e. when t_1 = t_2 = 0 and t_3 = t_4 = t,

\langle \Psi(t,x_4)\Psi(t,x_3)\bar\Psi(0,x_2)\bar\Psi(0,x_1) \rangle.

If t < 0 the four-point function trivially vanishes, since the operator Ψ(t,x) annihilates the vacuum. The key observation is that the pairwise equal-time correlation function is equivalent to the two-particle Feynman propagator K^{(2)}(x_3,x_4;x_1,x_2;t) in the first quantised formulation, i.e. the transition amplitude for two particles starting at x_1, x_2 to arrive at x_3, x_4 after time t passes,

\langle \Psi(t,x_4)\Psi(t,x_3)\bar\Psi(0,x_2)\bar\Psi(0,x_1) \rangle = K^{(2)}(x_3,x_4;x_1,x_2;t).   (3.4)

The propagator K^{(2)} is a solution of the two-body Schrödinger equation, with the initial condition given by the (anti-)symmetrised product of delta functions at t = 0, where we assume for simplicity x_1 < x_2 and x_3 < x_4. The relation (3.4) follows from the basic feature of the second quantisation. (See, for example, sections 64 and 65 of [59].) Let us recall that the state \bar\Psi(x_2)\bar\Psi(x_1)|0⟩ in the second quantised formulation, where we use Ψ̄(x) to denote the creation operator in the x-representation, is a two-particle state, specified in the first quantised formulation by the wave function proportional to δ(y_1-x_1)δ(y_2-x_2) ± δ(y_1-x_2)δ(y_2-x_1). (The sign ± above refers to the bosonic and the fermionic model, respectively.) The equivalence (3.4) of the pairwise equal-time four-point function and the propagator immediately follows. By separating out the centre of mass motion, the computation of the two-particle propagator reduces to that of the propagator of a particle in an external potential of the form 1/r². Defining the relative position r = x_2 - x_1 ≡ x_{21}, the relevant Hamiltonian (with reduced mass 1/2) is

H_{\rm rel} = -\frac{\partial^2}{\partial r^2} + \frac{g}{r^2}.

We can focus on the region r > 0. The propagator for this potential was first computed by Peak and Inomata [72],

K_{\rm rel}(r_b, r_a; t) = \frac{\sqrt{r_a r_b}}{2t}\, e^{-(r_a^2+r_b^2)/(4t)}\, I_\nu\!\left(\frac{r_a r_b}{2t}\right),

where ν = λ - 1/2. The boundary condition is such that the wave function behaves as r^λ at r → 0. For completeness, we will present a derivation of this result in appendix B.
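As a sanity check of the kernel quoted above (we reconstructed its normalisation from the surrounding formulas, so treat the precise prefactors as our assumption), the following Python sketch verifies numerically that at ν = -1/2 (λ = 0) it reduces to the symmetrised free kernel of reduced mass 1/2, and at ν = +1/2 (λ = 1) to the antisymmetrised one, as expected for free bosons and free fermions:

```python
import numpy as np
from scipy.special import iv  # modified Bessel function I_nu

def k_rel(rb, ra, t, nu):
    """Relative-motion kernel (reduced mass 1/2) for the g/r^2 potential,
    in the Peak-Inomata form quoted in the text."""
    return (np.sqrt(ra * rb) / (2 * t)
            * np.exp(-(ra**2 + rb**2) / (4 * t))
            * iv(nu, ra * rb / (2 * t)))

def k_free(rb, ra, t, sign):
    """Free relative kernel, (anti)symmetrised under r -> -r."""
    return (np.exp(-(rb - ra)**2 / (4 * t))
            + sign * np.exp(-(rb + ra)**2 / (4 * t))) / np.sqrt(4 * np.pi * t)

ra, rb, t = 0.7, 1.3, 0.4
print(k_rel(rb, ra, t, -0.5), k_free(rb, ra, t, +1))  # nu = -1/2: free bosons
print(k_rel(rb, ra, t, +0.5), k_free(rb, ra, t, -1))  # nu = +1/2: free fermions
```

The agreement follows from the closed forms I_{-1/2}(z) = √(2/(πz)) cosh z and I_{+1/2}(z) = √(2/(πz)) sinh z.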
The centre of mass contribution to the four-point function is the free propagator of a particle of mass 2,

\frac{1}{\sqrt{\pi t}}\, e^{-X^2/t},

where X is the change of the centre of mass from the initial to the final state,

X = \frac{x_3+x_4}{2} - \frac{x_1+x_2}{2}.

Hence the full four-point function is

\langle \Psi(t,x_4)\Psi(t,x_3)\bar\Psi(0,x_2)\bar\Psi(0,x_1) \rangle = \frac{1}{\sqrt{\pi t}}\, e^{-X^2/t}\, \frac{\sqrt{x_{21}x_{43}}}{2t}\, e^{-(x_{21}^2+x_{43}^2)/(4t)}\, I_\nu\!\left(\frac{x_{21}x_{43}}{2t}\right).   (3.13)

Here t > 0 is assumed; if t < 0 the correlation function trivially vanishes. Also, the conditions

x_{21} > 0,   (3.14)
x_{43} > 0,   (3.15)

are assumed, which come from the assumption that the relative position r is positive. The expression (3.13) is valid for both the bosonic and fermionic cases under these conditions. It is easy to obtain the four-point function for the generic case. The results for the bosonic and the fermionic theory differ by a sign factor. For the bosonic theory, we have (3.13) with x_{21}x_{43} replaced by |x_{21}x_{43}|,   (3.16)

and, for the fermionic theory, we have (3.16) multiplied by sgn(x_{21}) sgn(x_{43}),   (3.17)

where sgn(x) = x/|x|.
3.2 Double integral formula for the general four-point function
The four-point function in a generic position can be computed by a convolution integral of the free particle propagator and the pairwise equal-time correlation function computed in the previous subsection. We consider the four-point function

\langle \Psi(t_4,x_4)\Psi(t_3,x_3)\bar\Psi(t_2,x_2)\bar\Psi(t_1,x_1) \rangle,

assuming t_1 < t_2 < t_3 < t_4 without loss of generality. If t_1 < t_3 < t_2 < t_4, for example, then the four-point function trivially factorises into a product of two-point functions.
The double integral formula is

\langle \Psi(t_4,x_4)\Psi(t_3,x_3)\bar\Psi(t_2,x_2)\bar\Psi(t_1,x_1) \rangle = \int dy\, dy'\, K(x_4;y';t_4-t_3)\, K^{(2)}(x_3,y';x_2,y;t_3-t_2)\, K(y;x_1;t_2-t_1),   (3.19)

where K(x;y;t) is the free one-particle propagator,

K(x;y;t) = \frac{1}{\sqrt{2\pi t}}\, e^{-(x-y)^2/(2t)},   (3.20)

and the pairwise four-point function, or equivalently the two-particle propagator, is given by (3.16) or (3.17) according to whether the theory is bosonic or fermionic. The formula (3.19) holds because in the intervals t_1 < t < t_2 and t_3 < t < t_4 there is only a single particle, as shown in Fig. 1.
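The sewing in (3.19) relies on the semigroup (Chapman-Kolmogorov) property of the free propagator on the single-particle intervals. A minimal numerical sketch of this property (the specific points and times are arbitrary sample values of ours):

```python
import numpy as np
from scipy.integrate import quad

def K(x, y, t):
    """Free one-particle propagator (3.20), unit mass, Euclidean time."""
    return np.exp(-(x - y)**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

# Convolving K over an intermediate time slice reproduces K; this is what
# allows (3.19) to sew the one-particle intervals onto the two-particle one.
x1, x4 = 0.0, 1.1
t1, t2, t4 = 0.0, 0.6, 1.5
lhs = quad(lambda y: K(x4, y, t4 - t2) * K(y, x1, t2 - t1), -np.inf, np.inf)[0]
print(lhs, K(x4, x1, t4 - t1))  # the two numbers should agree
```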
There is, of course, also a simpler integral formula,

\langle \Psi(t_4,x_4)\Psi(t_3,x_3)\bar\Psi(0,x_2)\bar\Psi(0,x_1) \rangle = \int dy\, K(x_4;y;t_4-t_3)\, K^{(2)}(x_3,y;x_1,x_2;t_3),   (3.22)

to compute a four-point function where the two Ψ̄'s are inserted on the same time slice, as shown in Fig. 2; here 0 = t_1 = t_2 < t_3 < t_4. This formula will be used later in section 4.3.2 when we compute a three-point function.
3.3 Four-point function in general position via generalised hypergeometric function
The double integral (3.19) representing the four-point function can be carried out explicitly; the details of the computation and the consequences of the resulting formula will be discussed in a separate publication. In this paper, we give the expression and discuss a few of its basic properties. We focus on the bosonic theory. We consider the nontrivial case in which the time-ordered product of the operators takes the form ΨΨΨ̄Ψ̄. The result is given in (3.23), where the function F is a generalised hypergeometric function of three variables defined by the triple series expansion (3.24), with the coefficients Λ^{(j)}_{pmn} (j = 0, 1) given in (3.25). The quantities τ and v are Schrödinger invariant quantities; in particular,

τ = \frac{t_{21} t_{43}}{t_{31} t_{42}},   (3.26)

while the v's are defined in (3.27). They may be considered as the analogue of the cross-ratios, the quantities invariant under the conformal symmetry, in usual CFT. We note that an ansatz for Schrödinger invariant four-point functions is given in [73], which contains an arbitrary function of four Schrödinger invariant "cross-ratios" (τ and three v's). For one space and one time dimension, the number of independent Schrödinger invariant cross-ratios is decreased by one, so that there are three independent cross-ratios (τ and the two v's appearing in (3.23)).
The function (3.24) is symmetric in v_{123} and v_{234}. The invariant quantity v, say v_{123}, is proportional to the squared "area" of the triangle spanned by the space-time points 1, 2, 3. Thus v may be considered as measuring the degree of "non-collinearity" of the three space-time points. For instance, if the points 1-2-3 are collinear, one has v_{123} = 0. In this case, (3.24) reduces to a double hypergeometric function, given in (3.28).
4 OPE decomposition of the four-point function
By taking various limits of the four-point function computed in the previous section, one can extract information about the OPE coefficients and a three-point function.
In section 4.1 we consider two different decompositions of the pairwise equal-time four-point function (3.13) using the OPE. One of the decompositions, the "s-channel" decomposition, arises from the ΨΨ and Ψ̄Ψ̄ OPEs and is represented by a convergent series. The operators appearing in the intermediate channel have anomalous dimensions, i.e. their scaling dimensions depend on the coupling constant. The other decomposition, the "t-channel" decomposition, arises from two ΨΨ̄ OPEs and is represented by an asymptotic series. By representing the same four-point function in two ways by the OPE, we prove the operator associativity of the model for this particular four-point function. The asymptotic series of the "t-channel" decomposition also has exponentially small correction terms, which likewise have an interpretation via the OPE.
In section 4.2 we study in detail the "s-channel" decomposition and show that only one primary operator Φ appears in the Ψ̄Ψ̄ OPE. We fix the forms of the descendant operators of Φ appearing in the OPE and compute all the relevant OPE coefficients. In section 4.3 we compute the three-point function ⟨ΨΨΦ⟩ from the four-point function. In section 4.4 we discuss a peculiar property of the two-point functions between operators arising in the ΨΨ̄ OPE.
4.1 Decomposition of pairwise equal-time four-point function

4.1.1 "s-channel" decomposition
We consider the expansion of the pairwise equal-time four-point function (3.13) in the parameter x²/t, where x refers to both x_{21} and x_{43}. As it turns out, this expansion has an infinite radius of convergence, so that it is valid for an arbitrarily large value of x²/t. The expansion becomes more useful when x²/t ≪ 1, since the first few terms will then dominate the series. The smaller the value of x²/t, the closer are the operators Ψ(t,x_4), Ψ(t,x_3) and Ψ̄(0,x_2), Ψ̄(0,x_1), respectively. Therefore, considering this expansion should amount to considering the OPE between Ψ(t,x_4) and Ψ(t,x_3), and between Ψ̄(0,x_2) and Ψ̄(0,x_1). We use the series expansion of the modified Bessel function,

I_\nu(z) = \sum_{k=0}^{\infty} \frac{1}{k!\,\Gamma(\nu+k+1)} \left(\frac{z}{2}\right)^{\nu+2k}.   (4.1)

This series is convergent for any value of the argument z. Substituting (4.1) into (3.13), we obtain the expansion (4.2). This decomposition of the four-point function emerges from the equal-time OPEs

\Psi(t,x_4)\Psi(t,x_3) = \sum_k C_k\, x_{43}^{\Delta_k-1}\, \bar\Phi_k\!\left(t, \tfrac{x_3+x_4}{2}\right),   (4.3)

\bar\Psi(0,x_2)\bar\Psi(0,x_1) = \sum_k C_k\, x_{21}^{\Delta_k-1}\, \Phi_k\!\left(0, \tfrac{x_1+x_2}{2}\right).   (4.4)

Here the C_k's are the OPE coefficients (we take the C_k to be real by choosing the phases of the Φ_k's appropriately), and the ∆_k's are the dimensions of the operators Φ_k (and Φ̄_k). The operators Φ_k are charge-2 operators, N_{Φ_k} = 2. The powers of x_{21} and x_{43} in (4.3) and (4.4) are fixed by scale invariance. For simplicity, we will set (x_1+x_2)/2 = 0 and (x_3+x_4)/2 = X, using translational invariance. Equation (4.2) is valid provided x_{21} > 0, x_{43} > 0, for both the bosonic and fermionic models. We assume in this subsubsection, without loss of generality, that these conditions are met. In (4.3) and (4.4), we are not distinguishing primary and descendant operators. We will see that Φ_0 ≡ Φ is the only primary operator appearing in the OPE and that all other operators Φ_k (k = 1, 2, ···) are its descendants in section 4.2, where we also compute all the OPE coefficients C_k.
It is easy to read off the dimensions ∆_k by comparing the powers of x_{21} and x_{43} in the formulae (4.3) and (4.4) with (4.2). We obtain

\Delta_k = \frac{3}{2} + \nu + 2k, \qquad k = 0, 1, 2, \cdots.   (4.5)

We see that the scaling dimensions depend on the coupling constant ν; the operators Φ_k have anomalous dimensions. Let us consider the leading order contribution from the lowest-dimension operator Φ, with dimension ∆_0 = 3/2 + ν. From (4.3) and (4.4) we obtain the expression (4.6) for this contribution.
Comparing this to the leading order term (both in the expansion by x_{43} and by x_{21}) of (4.2), we can read off the leading OPE coefficient to be

C_0^2 = \frac{1}{2^{2\nu+1}\sqrt{\pi}\,\Gamma(\nu+1)},   (4.8)

together with the two-point function

\langle \Phi(t,X)\,\bar\Phi(0,0) \rangle = \theta(t)\, t^{-(3/2+\nu)}\, e^{-X^2/t}.   (4.9)

The spacetime dependence agrees with the general form of the two-point function of a primary operator (1.4), with N_Φ = 2 and ∆ = ∆_0 = 3/2 + ν. We fixed the normalisation of Φ by (4.9). We will see presently that Φ is indeed a primary operator. The differences between the scaling dimensions of the Φ_k (k > 0) and of Φ are integers. This suggests that the Φ_k (k > 0) are descendants of Φ. We will see later in section 4.2 that this is the case.
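The dimension ∆_0 and the coefficient C_0 can be checked numerically against (3.13). The Python sketch below uses the reconstructed closed forms (3.13), (4.8), and (4.9) as quoted above (so a mismatch would signal an error in our reconstruction rather than in the paper); the sample parameters are arbitrary.

```python
import numpy as np
from scipy.special import iv, gamma

def g4(x21, x43, X, t, nu):
    """Pairwise equal-time four-point function, in the form quoted in (3.13)."""
    com = np.exp(-X**2 / t) / np.sqrt(np.pi * t)
    rel = (np.sqrt(x21 * x43) / (2 * t)
           * np.exp(-(x21**2 + x43**2) / (4 * t))
           * iv(nu, x21 * x43 / (2 * t)))
    return com * rel

nu, X, t = 0.7, 0.2, 1.0
d0 = 1.5 + nu  # Delta_0 = 3/2 + nu

# (i) log-log slope in x43 at small separations -> Delta_0 - 1 = nu + 1/2
x21, x43, h = 1e-3, 1e-3, 1e-3
slope = (np.log(g4(x21, x43 * np.exp(h), X, t, nu)) - np.log(g4(x21, x43, X, t, nu))) / h
print("slope:", slope, " expected:", nu + 0.5)

# (ii) the prefactor of (x21*x43)^(nu+1/2) -> C0^2 * t^(-Delta_0) * exp(-X^2/t)
c0sq = 1.0 / (2.0**(2 * nu + 1) * np.sqrt(np.pi) * gamma(nu + 1))
print("prefactor:", g4(x21, x43, X, t, nu) / (x21 * x43)**(nu + 0.5),
      " expected:", c0sq * t**(-d0) * np.exp(-X**2 / t))
```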
In section 3.1, we computed the four-point function by identifying it with the two-particle propagator. In this identification, the expansion parameters x_{43} and x_{21} are the relative coordinates, and the X appearing above is the difference between the final and the initial centre of mass positions. Thus, the "s-channel" OPE decomposition described here may be interpreted as representing the separation of the centre of mass and relative motions.
The state-operator map

The spectrum of operators (4.5) is consistent with the state-operator map introduced by Nishida and Son for Schrödinger invariant theories [44]. (See also [45].) The state-operator map is a one-to-one correspondence between an operator Ō (with positive U(1) charge N_Ō > 0) of a Schrödinger invariant theory and a state |Ō⟩ of the model which is obtained by adding an external harmonic oscillator potential to the theory. For our model, the extra harmonic oscillator term can be represented as an additional contribution to the action (2.4). The state-operator map has the property that the scaling dimension of Ō equals the energy of the state |Ō⟩ in the deformed model. The map also preserves the U(1) charge: the state |Ō⟩ is an N_Ō-particle state in the deformed model. The energy spectrum of the deformed model can be exactly computed. This is the celebrated result by Calogero [42]. For an N-particle state, it is

E = \frac{N}{2} + \lambda\,\frac{N(N-1)}{2} + \sum_{i=1}^{N} n_i,   (4.11)

where the n_i (i = 1, ···, N) are integers satisfying 0 ≤ n_1 ≤ n_2 ≤ ··· ≤ n_N. The lowest dimension operator necessarily is a primary operator, which is nothing but the fundamental field Ψ̄. (Recall that the scaling dimension of the fundamental field is protected and equals 1/2. See the explanation below (2.9).) The operators with n > 0 are descendants of the fundamental field, ∂^n_x Ψ̄. It is not necessary to consider descendants produced by acting with ∂_t's on Ψ̄. This is because it is redundant to consider the null operator (∂_t - ½∂²_x)Ψ̄ in the OPE. See [30, 31, 52].
For the N = 2 case, which is relevant for the "s-channel" OPE we are considering, (4.11) becomes

\Delta = \frac{3}{2} + \nu + n_1 + n_2,   (4.14)

with 0 ≤ n_1 ≤ n_2 (see footnote 18). This includes the spectrum found from the OPE, (4.5), with the correct coupling constant dependence of the scaling dimensions. We now see that the operator Φ with dimension ∆_0 = 3/2 + ν has the lowest scaling dimension in the charge-2 sector, and must therefore be a primary operator. (In (4.5) the scaling dimensions are separated by even integers, whereas in (4.14) the separations are general integers. This difference arises because we are defining the OPE at the symmetric points (x_1+x_2)/2, (x_3+x_4)/2 in (4.3) and (4.4).)

Footnote 17: Of course, negatively charged operators are also classified, since they are the complex conjugates of positively charged operators. The state-operator map fails to capture, importantly, charge-zero operators (operators with N_O = 0).

Footnote 18: For the free-field theory case, ν = -1/2, one can understand this spectrum as that of the operators ∂^{n_1}_x Ψ̄ ∂^{n_2}_x Ψ̄.
One can also characterise primary operators using the state-operator map: they correspond to the states annihilated by the charges C, K. (For our notation for the Schrödinger algebra, see appendix A.) It should be possible to directly study this condition in the Calogero model (with the harmonic oscillator external potential). This will lead to a complete classification of the primary operators (with nonzero charge, since the zero-charge sector defies the use of the state-operator map). The operator technique developed in [77] seems to be well-suited for this purpose. This problem will be addressed in a separate publication.
"t-channel" decomposition
Next, we consider the pairwise equal-time four-point function in the opposite regime, x²/t ≫ 1, where x refers collectively to x_{21} > 0 and x_{43} > 0. In this regime, the spacetime points 2, 4 and 1, 3 can be made close to each other, respectively. Hence we expect that this regime should be understood from the OPEs Ψ(t,x_4)Ψ̄(0,x_2) and Ψ(t,x_3)Ψ̄(0,x_1). We use the asymptotic expansion of the modified Bessel function,

I_\nu(z) \sim \frac{e^z}{\sqrt{2\pi z}} \sum_{p=0}^{\infty} (-1)^p \frac{a_p(\nu)}{z^p},   (4.15)

where a_0(ν) = 1 and

a_p(\nu) = \frac{(4\nu^2-1^2)(4\nu^2-3^2)\cdots(4\nu^2-(2p-1)^2)}{8^p\, p!}.   (4.16)

The asymptotic expansion is valid in the limit z → ∞ for |ph z| ≤ π/2 - ε, which includes z > 0, relevant for us. (ph z is the argument of z, and ε is a positive infinitesimal quantity.) Substituting (4.15) into (3.13), we obtain the "t-channel" expansion. It is essential that the exponential factors in this formula combine to yield

e^{-X^2/t}\, e^{-(x_{21}^2+x_{43}^2)/(4t)}\, e^{x_{21}x_{43}/(2t)} = e^{-x_{31}^2/(2t)}\, e^{-x_{42}^2/(2t)}.   (4.19)

We obtain

\langle \Psi(t,x_4)\Psi(t,x_3)\bar\Psi(0,x_2)\bar\Psi(0,x_1) \rangle \sim \frac{1}{2\pi t}\, e^{-(x_{31}^2+x_{42}^2)/(2t)} \sum_{p=0}^{\infty} (-1)^p\, a_p(\nu) \left(\frac{2t}{x_{21}x_{43}}\right)^{p}.   (4.20)
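The asymptotic (non-convergent) nature of (4.15) is easy to exhibit numerically. The following Python sketch (our own check; the coefficient formula is the reconstructed (4.16)) compares partial sums of (4.15) against scipy's I_ν; the error first decreases and then grows with the truncation order, and for ν = 1/2, 3/2, ··· the coefficients a_p(ν) vanish beyond a finite order, so the series truncates, as discussed below.

```python
import math
import numpy as np
from scipy.special import iv

def a(p, nu):
    """Coefficients a_p(nu) of (4.16)."""
    num = 1.0
    for k in range(1, p + 1):
        num *= 4 * nu**2 - (2 * k - 1)**2
    return num / (math.factorial(p) * 8.0**p)

def i_asym(z, nu, pmax):
    """Partial sum of the asymptotic series (4.15), truncated at order pmax."""
    return np.exp(z) / np.sqrt(2 * np.pi * z) * sum(
        (-1)**p * a(p, nu) / z**p for p in range(pmax + 1))

z, nu = 6.0, 0.7
for pmax in (2, 5, 10, 15, 25):
    err = abs(i_asym(z, nu, pmax) / iv(nu, z) - 1)
    print(f"pmax = {pmax:2d}: relative error = {err:.2e}")

print([a(p, 1.5) for p in range(4)])  # truncation at nu = 3/2: a_p = 0 for p > 1
```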
To clarify the connection to the OPE, we define a suitable variable in (4.21). Then, rewriting (4.20) using (4.21), we obtain the decomposition (4.22). This "t-channel" decomposition of the four-point function is valid for x_{21} > 0, x_{43} > 0, for both the bosonic and fermionic models. Hereafter in this subsubsection, to be specific, we consider the bosonic model. The decomposition (4.22) indeed has the form which arises from the OPEs (4.23) of Ψ(t,x_4)Ψ̄(0,x_2) and of Ψ(t,x_3)Ψ̄(0,x_1), where the J_k (with scaling dimensions ∆̃_k) are the operators in the intermediate channel.
In the second line of (4.23), we have used the scale invariance to constrain the OPE coefficients C̃_k. The two-point functions of the J_k's are

\langle J_m(t,x)\, J_n(t,x') \rangle = \frac{D_{mn}}{(x-x')^{\tilde\Delta_m + \tilde\Delta_n}}.   (4.24)

Some of the coefficients D_{kk} can be absorbed into the normalisation of the operators J_k. Here, we are not specifying whether J_k is a primary or a descendant operator. (They can be linear combinations of primary and descendant operators in general.) The operators J_k have vanishing U(1) charges. Note that the primary and descendant operators of the charge-zero sector behave differently from those in other sectors [52]. This is because the generators K and P, which act as the "ladder operators" for the sectors with nonzero U(1) charge, commute with each other in the charge-zero sector. (The commutation relations of the Schrödinger algebra are given in appendix A.) From the OPE (4.23) and the two-point functions (4.24), we obtain the expansion (4.25). The powers of X′, x_{21}, and x_{43} are all integers in (4.20). Comparing (4.20) and (4.25), we see that this strongly suggests that the ∆̃_k's are also integers. There are ambiguities, however, which stem from the fact that one can insert the factor (4.26), where δ is an arbitrary number, into (4.25). This leads to a redefinition of the coefficients f_m and f_n appearing in (4.25) and a shift of the dimensions ∆̃_m and ∆̃_n by ±2δ. We will return to a possible resolution of this ambiguity towards the end of this subsubsection.
Although it seems unlikely that a set of consistent OPE coefficients exists with non-integer valued ∆̃_m > 0, we have not succeeded in ruling this possibility out. We hereafter assume that the scaling dimensions are integers and write

\tilde\Delta_k = k.   (4.27)

We can fix the first two OPE coefficients, f_0 and f_1, by comparing (4.20) and (4.25) under this assumption. Firstly, we see that the lowest dimension operator J_0 has scaling dimension 0, and hence should be identified with the identity operator, J_0 = 1. This implies D_{00} = 1 and D_{0n} = 0 (n > 0), since the one-point function of any operator with nonzero dimension vanishes because of scale invariance. The corresponding OPE coefficient is fixed in (4.28), and hence the leading order term in the ΨΨ̄ OPE is

\Psi(t,x)\,\bar\Psi(0,0) \sim \frac{1}{\sqrt{2\pi t}}\, e^{-x^2/(2t)}\, \mathbf{1} + \cdots.   (4.29)

This is an expected result in view of the two-point function (2.9). The subleading OPE coefficient can also be read off. We obtain, up to a positive kinematical factor,

f_1^2\, D_{11} \propto -a_1(\nu),   (4.30)

where a_1(ν) is given by (4.16). The operator J_1 has dimension 1, and it is natural to identify it with the density of the U(1) charge. We have not fixed the normalisation of J_1; this is the reason why both f_1 and D_{11} appear in the above formula. By redefining the operator J_1 appropriately, multiplying it by a phase factor, we can choose f_1 to be real and J_1 to be hermitian. Note that a_1(ν) flips its sign at ν = 1/2. The coefficient D_{11} of the two-point function ⟨J_1(0,x)J_1(0,0)⟩ is then positive for ν < 1/2 and negative for ν > 1/2. This does not contradict the unitarity of the theory. (The unitarity of the theory is guaranteed, as the Hamiltonian of the model is hermitian.) We recall that, for the isotropic case, the positivity of the two-point function in a unitary CFT is proven via the state-operator map. The analogous state-operator map in a z = 2 Schrödinger invariant theory is not applicable to the charge-zero sector. Furthermore, one cannot invoke the positivity of the norm of the Hilbert space via ⟨J_m(t,x)J_n(0,0)⟩ = ⟨0|J_m(t,x)J_n(0,0)|0⟩, since one can show that this equation holds for t > 0 but not for t = 0, which is the case of interest here. This point will be explained in section 4.4.
The special point ν = 1/2, where f_1² D_{11} = 0, corresponds to the point at which the Calogero model coincides with the system of free fermions; the asymptotic series (4.15) truncates at that point. Similar truncations of the asymptotic series occur also at ν = -1/2 and at ν = 3/2, 5/2, ···. The ΨΨ̄ OPE appears to be degenerate at these special points.
It is not possible to fix the higher OPE coefficients f_n (n = 2, 3, ···) unambiguously from the pairwise equal-time four-point function. This is because of the ambiguity associated with (4.26) (where δ is chosen to be an integer). Starting from the pairwise equal-time four-point function, we are forced to take the coincident limit of both pairs of spacetime points, (1, 3) and (2, 4). (Note that we have to take the limit t_{31} → 0, x_{31} → 0 with x_{31}²/t_{31} fixed, and similarly for t_{42}, x_{42}, in order to have well-defined ΨΨ̄ OPEs.) We should obtain more information on the OPE coefficients from the general four-point function discussed in sections 3.2 and 3.3, since then we can, say, pinch the spacetime points (1, 3) while keeping (2, 4) un-pinched. By studying this type of limit, we expect to obtain the complete OPE coefficients and an understanding of the primary/descendant structure of the operators J_k. This will be left as a future problem.
It is possible to read off some properties of the operators J_k without going into the details of the expression of the general four-point function. In particular, we find that the operators J_k (k ≥ 1) have a rather unusual property, namely, that their two-point functions vanish unless the two operators are inserted on the same time slice. This will be shown in section 4.4.
To summarise this subsubsection, we have shown that the pairwise equal-time four-point function (3.13) can be decomposed by using two ΨΨ̄ OPEs. The decomposition (4.22) is represented by an asymptotic series rather than a convergent series. The charge-zero operators J_m appearing in the intermediate channels appear to have integer-valued scaling dimensions. J_0 is the identity operator. We have computed the leading and the next-to-leading OPE coefficients, associated with J_0 and J_1.
Since we have now represented the same four-point function in two ways, as the "s-channel" decomposition (in section 4.1.1) and as the "t-channel" decomposition here, we have thereby shown the operator associativity of the model for this particular four-point function. Schematically, we have shown the relation (4.31). The second equality in (4.31) has to be understood as the representation of a function by an asymptotic series.
4.1.3 The exponentially small corrections and "u-channel" contributions
The asymptotic expansion of the modified Bessel function (4.15), and hence the "t-channel" decomposition of the four-point function, comes with exponentially small contributions. Here we will show that these correction terms can also be interpreted using the OPE. As we shall see below, it is necessary to analytically continue the time variable t. We write

t = e^{i\alpha}\, t', \qquad \alpha \in \mathbb{R}, \quad t' > 0.   (4.32)

We will only consider the regime 0 ≤ α ≤ π/2. We consider the analytically continued pairwise equal-time four-point function (4.33), where the operator in the Heisenberg picture, Ψ(t,x) = e^{Ht}Ψ(0,x)e^{-Ht}, is continued accordingly, and where we assume x_{21} > 0, x_{43} > 0. By gradually increasing α from 0 to π/2, we have t = it′, and we obtain the four-point function of the theory with the "Minkowski" time: the analytically continued four-point function, considered as a function of t′, coincides with the four-point function of the theory with the "Minkowski" time t′. The explicit four-point function for this case is written in (4.36). Here, the subscript M refers to the theory in the "Minkowski" signature, and the Heisenberg operator assumes the usual quantum mechanical form, Ψ_M(t′,x) = e^{iHt′}Ψ(0,x)e^{-iHt′}. We used I_ν(z) = e^{-iπν/2} J_ν(e^{iπ/2} z) [78, (10.27.6)]. The expression (4.36), of course, can also be obtained directly for the Minkowski theory, without relying on the analytic continuation.
In the regime 0 < α ≤ π/2, the modified Bessel function I_ν(z) is captured accurately by the asymptotic expansion accompanied by exponentially small correction terms [78, (10.40.5)],

I_\nu(z) \sim \frac{e^z}{\sqrt{2\pi z}} \sum_{p=0}^{\infty} (-1)^p \frac{a_p(\nu)}{z^p} + e^{\pm(\nu+\frac{1}{2})\pi i}\, \frac{e^{-z}}{\sqrt{2\pi z}} \sum_{p=0}^{\infty} \frac{a_p(\nu)}{z^p}.   (4.37)

The coefficients a_p(ν) are defined in (4.16). The first term in (4.37) coincides with the asymptotic expansion (4.15) used for the "t-channel" decomposition in section 4.1.2.
The expression (4.37) is not valid for the Euclidean theory (α = 0). The reason is that α = 0 corresponds to a Stokes line of the modified Bessel function I_ν(z), where the first term in (4.37) is maximally dominant over the second term. As is well known, across the Stokes line the coefficients of the smaller terms change almost discontinuously, albeit in a controlled manner [79]. One needs to use a specially tailored expansion formula to study the behaviour of a function exactly on the Stokes line. Such a formula for I_ν(z) was derived in [80]. We found a natural interpretation in terms of the OPE for the formula (4.37), rather than for the expansion valid exactly on the Stokes line given in [80]. This may suggest that it is useful to define the Euclidean theory not exactly at α = 0 but rather via the limit α → 0. Note that, for large |z|, one needs only a small α > 0 to make the expansion (4.37) accurate.
Substituting (4.37) into (3.13), we find that the exponentially small corrections to the four-point function take the form (4.38). This formula is valid for x_{21} > 0, x_{43} > 0, for both the bosonic and fermionic models. Hereafter in this subsubsection, we focus on the bosonic model. The similarity of (4.38) with the "t-channel" decomposition (4.20) is clear. In particular, the exponential factors in (4.37) and (3.13) combine in a manner similar to (4.19) and yield the factor

e^{-X^2/t}\, e^{-(x_{21}^2+x_{43}^2)/(4t)}\, e^{-x_{21}x_{43}/(2t)} = e^{-x_{41}^2/(2t)}\, e^{-x_{32}^2/(2t)}.

Comparing with the exponential factor e^{-(x_{31}^2+x_{42}^2)/(2t)} in (4.20), we find that the roles of the spacetime points 1 and 2 (or equivalently 3 and 4) are interchanged. This leads us to identify the exponentially small contributions (4.38) as arising from the OPEs Ψ(t,x_3)Ψ̄(0,x_2) and Ψ(t,x_4)Ψ̄(0,x_1). Schematically, these contributions can be represented as in (4.39). This interpretation can be made more precise. To clarify the connection to the OPE, we define the analogue of the variable (4.21) in (4.40). Then (4.38) becomes the expression (4.41). Note that an extra factor (-1)^p appears in the summand, compared to (4.38), due to the rewriting in terms of the new variable. Now we see that (4.41) has precisely the same form, except for the overall phase factor e^{-iπ(ν+1/2)}, as the "t-channel" decomposition (4.22). This is natural, since both terms originate from the ΨΨ̄ OPE.
The overall phase factor has a natural interpretation within the framework of the generalised statistics [60, 61] for the Calogero model. The generalised statistics is an interesting way of understanding various properties of the Calogero model as a consequence of the phase factor e^{-iπ(ν+1/2)} associated with each exchange of two particles. We indeed see that the "u-channel" terms, which are obtained by the exchange of, say, the two particles at the spacetime points 1 and 2, acquire precisely that phase factor relative to the "t-channel" terms.
The successful interpretation of the exponentially small terms as the "u-channel" contributions relies on the fact that the coefficients of the first and the second terms of (4.37) are closely related. (Both are given in terms of a_p(ν), defined by (4.16).) This connection is an example of the so-called resurgence phenomenon. (See, for example, [81].) Thus the resurgence property of the modified Bessel function expresses the fact that both the "t-channel" and "u-channel" contributions arise from the ΨΨ̄ OPE.
There is another way of understanding the necessity of the resurgence property and the role of the "u-channel" terms from the point of view of the OPE. When ν is a half-odd integer (i.e. ν = -1/2, 1/2, 3/2, 5/2, ···), the asymptotic series (4.37) truncates and becomes exact. (The Stokes phenomenon does not occur for these values of ν.) Writing ν = n + 1/2 with n = 0, 1, ···, the four-point function (in Euclidean time) becomes the expression (4.42), where the function i^{(1)}_n is given by the two finite sums in (4.43). The first and the second finite sums in (4.43) correspond to the exponentially large and small contributions in (4.37), respectively. Since these formulae are valid for all z = x_{43}x_{21}/(2t), one can in particular consider the limit z → 0. This limit corresponds to the limit in which the spacetime points (1, 2) or (3, 4) become coincident (related to the "s-channel" decomposition studied in section 4.1.1). Although each term in the first and second sums in (4.43) diverges, there are cancellations between these terms such that i^{(1)}_n(z) ~ z^n for z → 0. This must be the case. Consider, say, the limit x_{21} → 0, in which z also goes to zero, z ~ x_{21}. In this limit, the four-point function is controlled by the OPE (4.4), Ψ̄(0,x)Ψ̄(0,0) ~ x^{n+1}Φ. (Note that the scaling dimensions of the operators Ψ and Φ are 1/2 and n + 2, respectively.) Hence the four-point function behaves as x_{21}^{n+1}. This agrees with (4.42) and (4.43), together with i^{(1)}_n(z) ~ z^n. The consistency of the four-point function with the OPE relies on the cancellations, which in turn occur because of the resurgence relations, i.e. the relations between the coefficients of the exponentially small and large terms of (4.37).
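For half-odd-integer ν the two-exponential structure can be verified directly. The Python sketch below is our own check: the phase (-1)^{n+1} implements e^{-iπ(ν+1/2)} at ν = n + 1/2, and the small-z rows exhibit the cancellation between the two individually divergent channels.

```python
import math
import numpy as np
from scipy.special import iv

def a(p, nu):
    """Coefficients a_p(nu) of (4.16); they vanish for p > n when nu = n + 1/2."""
    num = 1.0
    for k in range(1, p + 1):
        num *= 4 * nu**2 - (2 * k - 1)**2
    return num / (math.factorial(p) * 8.0**p)

def i_two_channel(z, n):
    """Truncated 't-channel' + 'u-channel' form of I_{n+1/2}(z); exact here."""
    nu = n + 0.5
    grow = sum((-1)**p * a(p, nu) / z**p for p in range(n + 1))   # e^{+z} series
    decay = sum(a(p, nu) / z**p for p in range(n + 1))            # e^{-z} series
    phase = (-1)**(n + 1)   # exp(-i pi (nu + 1/2)) at nu = n + 1/2
    return (np.exp(z) * grow + phase * np.exp(-z) * decay) / np.sqrt(2 * np.pi * z)

for n in (0, 1, 2):
    for z in (0.05, 0.5, 5.0):  # small z probes the cancellation between channels
        print(n, z, i_two_channel(z, n), iv(n + 0.5, z))
```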
It is intriguing that the interpretation of the exponentially small terms as the "u-channel" contributions means that the operator associativity relation (4.31) can be made more accurate by including the "u-channel" contributions; schematically, we have the refined relation (4.45).
4.2 Detailed analysis of "s-channel" decomposition
In this subsection, we take a closer look at the "s-channel" OPE decomposition of the pairwise equal-time four-point function (3.13), which arises when we consider the OPE of Ψ̄(0,x_2)Ψ̄(0,x_1) and of Ψ(t,x_4)Ψ(t,x_3). In section 4.1.1, we saw that the operators Φ_k appearing in the OPE (4.4) have the dimensions (4.5), and that the lowest dimension operator Φ = Φ_0 is a primary operator. We also obtained the leading OPE coefficient (4.8) involving Φ.
We will now study the subleading operators Φ_k (k = 1, 2, ···) in the OPE and show that they coincide with special descendants Φ^{(k)} of the primary operator Φ, defined in (4.47); the corresponding special descendants of Φ̄ are denoted Φ̄^{(k)} and defined in (4.48). Thus the Ψ̄Ψ̄ OPE involves only one primary operator Φ. This will be shown in the following steps. Firstly, in section 4.2.1, we fix the form of the special descendants Φ^{(k)} appearing in the OPE by studying a part of the decomposition (4.2) of the four-point function. Next, we compute the coefficients of the OPE involving the Φ^{(k)}'s. Finally, we show that there are no subleading operators other than the Φ^{(k)} appearing in the OPE. (For example, a primary operator Φ′ with dimension 3/2 + ν + 2n, where n is a positive integer, could appear on the RHS of (4.4). We have to exclude this type of possibility.) This is done in section 4.2.2 by completely reproducing the full pairwise equal-time four-point function (3.13) just by summing up the contributions from the primary operator Φ together with the Φ^{(k)}. This shows in particular that the Ψ̄Ψ̄ OPE is exhausted by the primary operator Φ and its special descendants Φ^{(k)}. (In other words, one can put Φ_k = Φ^{(k)} in (4.4).) Throughout section 4.2 we will assume x_{21} > 0, x_{43} > 0, without loss of generality. Under this assumption, all formulae are valid for both the bosonic and fermionic theories.
4.2.1 The contribution from the descendants of Φ, Φ̄
We will fix the descendants of Φ, Φ̄ appearing in the OPEs (4.3) and (4.4). We will see that the following observation is essential: each term in the "s-channel" decomposition (4.2) of the four-point function contains X only in the exponent, and not in the prefactor of the exponential factor e^{-X²/t}. We consider a part of the "s-channel" decomposition (4.2), namely the leading order terms in the expansion in terms of x_{21} (keeping all subleading terms in the expansion by x_{43}), given in (4.49).
These terms should arise from the lowest dimension operator Φ in the Ψ̄(0,x_2)Ψ̄(0,x_1) OPE. Each term in this series corresponds to an operator contained in the Ψ(t,x_4)Ψ(t,x_3) OPE. Now, in a theory with z = 2 Schrödinger symmetry, primary operators with different scaling dimensions have vanishing two-point functions [32]. This means that the k ≥ 1 terms in (4.49) must all come from descendants appearing in the Ψ(t,x_4)Ψ(t,x_3) OPE.
In order to obtain the expressions for these descendant operators, we need to know the two-point functions between a primary operator and its descendants. We will set (x_1+x_2)/2 = 0 and (x_3+x_4)/2 = X, for simplicity. Let us first consider the k = 1 case. The relevant descendant operators should have dimension ∆_0 + 2; they are ∂²_x Φ and ∂_t Φ. (We recall that ∆_0 = 3/2 + ν.) Taking spacetime derivatives of the two-point function (4.9), we obtain

\partial_X^2 \langle \Phi(t,X)\bar\Phi(0,0) \rangle = \left(\frac{4X^2}{t^2} - \frac{2}{t}\right) t^{-\Delta_0}\, e^{-X^2/t}, \qquad \partial_t \langle \Phi(t,X)\bar\Phi(0,0) \rangle = \left(\frac{X^2}{t^2} - \frac{\Delta_0}{t}\right) t^{-\Delta_0}\, e^{-X^2/t}.

Notice that each of these expressions contains X in the prefactor of e^{-X²/t}. However, we see that the k = 1 term (in fact, every term) in (4.49) does not contain X in the prefactor of e^{-X²/t}. Therefore, the special linear combination Φ^{(1)} of the descendant operators ∂²_x Φ and ∂_t Φ given in (4.52) must be responsible for the k = 1 term in (4.49). The linear combination Φ^{(1)} is constructed so that the two-point function does not contain X in the prefactor of e^{-X²/t}. Thus, the first subleading term in the OPE should contain descendants of Φ only in the form of Φ^{(1)}.
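The coefficient in the combination Φ^{(1)} can be found mechanically. Assuming the two-point function takes the form t^{-∆} e^{-X²/t} (the N_Φ = 2 case of (1.4), as in (4.9)), the following sympy sketch (ours, not from the paper) solves for the relative coefficient c in (∂²_x - c ∂_t)Φ that removes X from the prefactor; under these assumptions it returns c = 4.

```python
import sympy as sp

t, X, Delta, c = sp.symbols('t X Delta c', positive=True)
G = t**(-Delta) * sp.exp(-X**2 / t)  # <Phi(t, X) Phibar(0, 0)>, cf. (4.9)

# Two-point function of (d_x^2 - c d_t) Phi with Phibar, derivatives acting
# on the (t, X) insertion point:
combo = sp.diff(G, X, 2) - c * sp.diff(G, t)

ratio = sp.simplify(combo / G)        # prefactor multiplying t^(-Delta) e^(-X^2/t)
poly = sp.expand(ratio * t**2)        # = (4 - c) * X^2 + (c * Delta - 2) * t
print(sp.solve(poly.coeff(X, 2), c))  # -> [4]: no X survives in the prefactor
```

The same computation with ∆ shifted by integers yields the higher combinations; note that the sign of the ∂_t term in the conjugate operators is flipped in Euclidean time (see footnote 24 below).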
One can repeat this process of forming linear combinations of descendant operators further, to construct the special descendant operators Φ^{(k)}; we observe that the necessary computations are the same, except that ∆_0 should be replaced by ∆_0 + 1, then by ∆_0 + 2, and so forth (see footnote 23). We obtain the special descendant operators (4.53)-(4.54), with the two-point functions (4.55), which do not contain X in the prefactor of e^{-X²/t}. (We used ∆_0 = 3/2 + ν above.) The coefficients in front of the Φ^{(k)} in the Ψ̄Ψ̄ OPE can be read off from (4.49), using (4.55) and (4.8). We obtain the OPE (4.56).

Footnote 23: To construct the special descendant operators by linear combinations, the operators arising at each step by applying ∂²_x and ∂_t should be linearly independent. This is assured for ∆_0 > 1/2.
The term with k = 0 is, of course, the leading order term in the OPE that we have already seen in (4.3) and (4.8), Φ = Φ^{(0)}. We have shown that the descendants of Φ should appear in the Ψ̄Ψ̄ OPE in the way given in (4.56). However, there could be another primary operator, say Φ′, with dimension ∆_0 + 2n, where n is a non-negative integer, which enters the OPE together with its descendants. (In other words, Φ_k in (4.4) may be a linear combination of Φ^{(k)} and Φ′ itself or its descendants.) We will exclude this possibility in section 4.2.2. Once this is done, we can conclude that (4.56) is complete and coincides with (4.4), with Φ_k = Φ^{(k)} and with the coefficients (4.57). By repeating the same argument starting from the leading order terms in x_{43} of (4.2), we obtain similar results for the ΨΨ OPE. Thus, descendants of Φ̄ must enter the ΨΨ OPE in the special linear combinations Φ̄^{(k)} given in (4.58)-(4.59) (see footnote 24), which are constructed so that the two-point functions do not contain X in the prefactor of e^{-X²/t}. The ΨΨ OPE becomes (4.60). Again, we will see in section 4.2.2 that (4.60) is complete and coincides with (4.3), with Φ̄_k = Φ̄^{(k)} and the coefficients (4.57).
The important property of the special descendants Φ^{(m)}, Φ̄^{(n)} is that their mutual two-point functions do not contain X in the prefactor of e^{-X²/t}. This reflects the absence of X in the prefactor of e^{-X²/t} in all the terms contained in (4.2).

Footnote 24: We note that the sign flip in front of ∂_t in Φ̄^{(k)}, compared to Φ^{(k)}, is due to our use of Euclidean time.
4.2.2 Reproducing full four-point function from OPE
Here we shall prove that the OPEs (4.56) and (4.60) are complete by showing that they fully reproduce the pairwise equal-time four-point function (3.13). From the OPEs (4.56) and (4.60) and the two-point functions (4.61), we obtain the double sum (4.62). We wish to show that this formula agrees with the "s-channel" decomposition (4.2). Factoring out the common factor, the identity we have to show is (4.63), which is equivalent to

\frac{1}{m!\,n!}\, \frac{\Gamma(\nu+m+n+1)}{\Gamma(\nu+m+1)\,\Gamma(\nu+n+1)} = \sum_{k=0}^{\min(m,n)} (\cdots).   (4.64)

Rewriting the finite sum using the Pochhammer symbol, we arrive at (4.65), and the identity follows from a well-known evaluation of the hypergeometric function at unit argument (the Chu-Vandermonde summation),

{}_2F_1(-m,-n;\nu+1;1) = \frac{\Gamma(\nu+1)\,\Gamma(\nu+m+n+1)}{\Gamma(\nu+m+1)\,\Gamma(\nu+n+1)}.   (4.66)
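If (4.66) is indeed the Chu-Vandermonde summation (our reading of the "well-known identity"; the finite sum in (4.64) then equals a terminating ₂F₁ at unit argument), it is easy to confirm numerically:

```python
import math

def poch(a, k):
    """Pochhammer symbol (a)_k."""
    out = 1.0
    for j in range(k):
        out *= a + j
    return out

def f21_at_one(m, n, nu):
    """Terminating 2F1(-m, -n; nu + 1; 1), a finite sum over k <= min(m, n)."""
    return sum(poch(-m, k) * poch(-n, k) / (poch(nu + 1, k) * math.factorial(k))
               for k in range(min(m, n) + 1))

def gamma_ratio(m, n, nu):
    """Right-hand side of (4.66)."""
    return (math.gamma(nu + 1) * math.gamma(nu + m + n + 1)
            / (math.gamma(nu + m + 1) * math.gamma(nu + n + 1)))

nu = 0.7
for m, n in ((0, 3), (2, 2), (3, 5), (6, 4)):
    print(m, n, f21_at_one(m, n, nu), gamma_ratio(m, n, nu))
```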
4.3 Three-point function ⟨ΨΨΦ⟩
We have seen in section 4.2 that there is only one primary operator, Φ, involved in the OPE Ψ̄(x)Ψ̄(0). By pinching the two insertion points of the Ψ̄'s in the four-point functions obtained in section 3, we can compute the three-point function ⟨ΨΨΦ⟩.
In [52], Golkar and Son showed that the constraint from Schrödinger symmetry alone fixes the spacetime dependence of the three-point function (except, of course, for the overall coefficient, which contains the dynamical information of the theory considered) when one of the operators involved saturates the unitarity bound, which, in one space dimension, is ∆ ≥ 1/2. The field Ψ saturates the unitarity bound. The form of the three-point function we obtained is consistent with Golkar and Son's analysis. Since their analysis is done in Minkowski signature, and the continuation to Euclidean signature is not entirely trivial, we give the corresponding analysis for theories with Euclidean time in appendix D. The appendix also contains a discussion of the boundary conditions necessary to fix the spacetime dependence. We point out that the boundary conditions give different constraints in one space dimension compared to other cases.
4.3.1 Three-point functions with two operators at equal-time
First, we consider the case in which the two Ψ's are inserted at the same time,

\langle \Psi(t,x_3)\,\Psi(t,x_2)\,\Phi(0,x_1) \rangle.   (4.67)

We will consider the case t > 0; if t < 0 the three-point function vanishes trivially, since the operator Ψ(x) annihilates the vacuum. We keep the leading order term in the expansion in x_{21} = x_2 - x_1 of the pairwise equal-time four-point function (3.13), using (4.1). Putting (x_1+x_2)/2 = 0, we obtain the expression (4.68). This is valid when x_{43} > 0, x_{21} > 0, for both the bosonic and fermionic models. Comparing this with the leading term of the OPE (4.60), we obtain, after relabelling, the three-point function (4.70). This expression is valid when x_{32} > 0, for both the bosonic and fermionic models. The formula valid for x_{32} < 0 can be obtained easily, as was done for the equal-time four-point function, (3.16) and (3.17). The normalisation conditions for the operators Ψ and Φ are fixed by the two-point functions (2.9) and (4.9). The result (4.70) is, apart from an overall factor, the product of two free two-point functions dressed with a factor depending on x_{32} and t.
4.3.2 Three-point functions in general position
Integral representation

We consider the three-point function in general position,

\langle \Psi(t_3,x_3)\,\Psi(t_2,x_2)\,\Phi(t_1,x_1) \rangle.   (4.71)

Here we will only consider the bosonic model. We set x_1 = 0, t_1 = 0, using translational invariance. We assume t_3 > t_2 without loss of generality. We consider the case t_2 > t_1, since otherwise the three-point function vanishes trivially, the operator Ψ(t_2,x_2) annihilating the vacuum. We begin with the integral representation (3.22) for the four-point function. (The labels 3, 4 will be replaced by 2, 3, respectively, later.) We consider the limit x_2 → x_1 and keep the leading order term in the expansion in terms of x_{21}. Writing t_3 = t, t_{43} = t′ and setting (x_1+x_2)/2 = 0, we obtain, using the leading order term of (4.1) together with (3.16), the expression (4.72). Note that we used (3.16), which is valid for the bosonic model and applicable for both positive and negative values of the relative coordinate. Comparing (4.72) with the leading order term in the OPE (4.60), we obtain an integral representation (4.73) of the three-point function in general position.

Three-point function in general position

This integral can be worked out by separating the contributions from positive and negative values of the integration variable. The result can be expressed in terms of the parabolic cylinder functions or the confluent hypergeometric functions. The details of the computation, including the comparison to the generic form of the three-point function found by Henkel [32, 33], are given in appendix C. The final result, expressed via the confluent hypergeometric function M(a,b,z) (in the notation of [78]), is given in (4.75), where w is a quantity invariant under the Schrödinger symmetry, defined in (4.76). We have chosen t_3 > t_2 > t_1, and hence w ∈ R. The spacetime dependence is consistent with the analysis based on the Schrödinger symmetry by Golkar and Son [52]; see appendix D for a detailed comparison. By taking the limit t_{32} → 0 of (4.75), we recover the result of section 4.3.1; see appendix C.5. The special case ν = -1/2 agrees with the result for the free boson; see appendix E.2.
4.4 Two-point function of the charge-zero operators appearing in the "t-channel" decomposition
In this subsection, we deduce a peculiar property of the charge-zero operators J_m (m = 1, 2, ···) appearing in the ΨΨ̄ OPE. Namely, we will show that the two-point functions

\langle J_m(t_1,x_1)\, J_n(t_2,x_2) \rangle, \qquad (m,n) \neq (0,0),   (4.77)

vanish for t_1 ≠ t_2. Note that the two-point functions are, in general, non-vanishing and finite for t_1 = t_2, (4.24). Our argument is fairly general and is not restricted to the Calogero model. The assumptions are the existence of the OPE, the scale invariance, the U(1) symmetry, and the uniqueness of the vacuum. Hence the argument applies, in particular, to any theory with Schrödinger symmetry and a unique vacuum. The basis of the argument is the following property of the four-point function ⟨Ψ(t_4,x_4)Ψ(t_3,x_3)Ψ̄(t_2,x_2)Ψ̄(t_1,x_1)⟩. Depending on the time-order of the operators, the four-point function (i) has a nontrivial form, (ii) factorises into a product of two-point functions, or (iii) vanishes. The first possibility occurs when the time-ordered product of the operators has the form ΨΨΨ̄Ψ̄, i.e. when t_4 > t_1, t_4 > t_2, t_3 > t_2, and t_3 > t_1 hold. The pairwise equal-time four-point function derived in section 3.1 is a particular case of this possibility. The second possibility occurs when the time-ordered product has the form ΨΨ̄ΨΨ̄, i.e. when, say, t_4 > t_2 > t_3 > t_1 holds. The third possibility occurs when the operator with the smallest time is a Ψ, or the operator with the largest time is a Ψ̄ (because of Ψ|0⟩ = 0 and ⟨0|Ψ̄ = 0).
Let us consider the second possibility; to be specific, we focus on the case t_4 > t_2 > t_3 > t_1. Then the four-point function factorises, as written in (4.78). In the derivation, one inserts a complete set of eigenstates between Ψ̄(t_2,x_2) and Ψ(t_3,x_3), and then uses the fact that the state Ψ(t_3,x_3)Ψ̄(t_1,x_1)|0⟩ has vanishing U(1) charge and hence should coincide with the vacuum |0⟩ up to a constant factor.
Let us consider the limit in which both pairs of spacetime points, (4, 2) and (3, 1), become coincident. (More precisely, the limit x_{31} → 0, t_{31} → 0 with fixed x_{31}²/t_{31}, and the similar coincident limit for the points (4, 2), should be taken.) In this limit, we can use the OPEs (4.23) of Ψ(t_4,x_4)Ψ̄(t_2,x_2) and of Ψ(t_3,x_3)Ψ̄(t_1,x_1), which we studied in section 4.1.2. We recall that J_0 is the identity operator, entering the OPE with the coefficient f_0 x^{-1}. Because of the factorisation property (4.78), the four-point function depends on (t_{31}, x_{31}) and (t_{42}, x_{42}), but not on the relative position between (t_2,x_2) ≈ (t_4,x_4) and (t_1,x_1) ≈ (t_3,x_3). This implies the vanishing of the two-point functions of the J's for t_2 > t_1, except for the case when both operators are the identity operator (m = 0, n = 0). Indeed, the identity operator appearing in the ΨΨ̄ OPE completely reproduces the factorised four-point function.
The situation is quite different from that considered in section 4.1.2, where we need infinitely many nonzero equal-time two-point functions,

\langle J_m(t,x)\, J_n(t,x') \rangle = \frac{D_{mn}}{(x-x')^{m+n}},

in order to reproduce the pairwise equal-time four-point function (except for the special cases where ν is a half-odd integer).
We can make this argument more precise by considering the three-point functions ⟨ΨJ_kΨ̄⟩. We start from the non-factorised four-point function, with t_4 > t_3 > t_2 > t_1. We then take the coincident limit of the spacetime points (3, 2) and use the OPE of Ψ(t_3,x_3)Ψ̄(t_2,x_2). Focusing on each term in the OPE expansion, one obtains the three-point functions in which Ψ is inserted at the spacetime point 4, J_k at 3 ≈ 2, and Ψ̄ at 1. Now, if we start instead from a different time-ordering, say t_3 > t_2 > t_4 > t_1, the factorisation (4.78) implies that the three-point function vanishes, except in the special case where the operator appearing from the ΨΨ̄ OPE is the identity operator. Thus, the three-point function is non-vanishing only when t_3 > t_2 > t_1. We then take the coincident limit of the points 3 and 1. Unless we maintain the time-ordering t_3 > t_2 > t_1 during the coincident limit, the result vanishes (when J_m is not an identity operator). We again reach the conclusion that the two-point function can have a nonzero value only if t_1 = t_2.
The vanishing of the two-point functions can also be deduced from the following more formal argument. One may rewrite the two-point function as the vacuum expectation value of the time-ordered product, (4.84). Assume that t_1 > t_2. Then we have

\langle J_m(t_1,x_1)\, J_n(t_2,x_2) \rangle = \langle 0|\, J_m(t_1,x_1)\, J_n(t_2,x_2)\, |0\rangle.   (4.85)

We can insert a complete set of states between J_m(t_1,x_1) and J_n(t_2,x_2). Since J_n(t_2,x_2)|0⟩ has vanishing U(1) charge and the only state with zero charge is the vacuum, we obtain

\langle J_m(t_1,x_1)\, J_n(t_2,x_2) \rangle = \langle 0|\, J_m(t_1,x_1)\, |0\rangle \langle 0|\, J_n(t_2,x_2)\, |0\rangle.

By scale invariance, the one-point function of any operator vanishes, unless the operator is the identity operator. Therefore we see that the two-point function ⟨J_m(t_1,x_1)J_n(t_2,x_2)⟩ should vanish unless m = 0, n = 0. This argument illustrates the subtlety involved in the two-point function in the case t_1 = t_2. In order to have the finite equal-time two-point functions (4.24), which are required by the "t-channel" decomposition of the nontrivial four-point function and by the ΨΨ̄ OPE discussed in section 4.1.2, one must conclude that (4.85) does not hold when t_1 = t_2, as otherwise the equal-time two-point functions would also vanish by the same argument.
In this subsection, we deduced features of the charge-zero operators arising from the ΨΨ̄ OPE. It is clearly important to pursue this direction further. For example, by using the four-point function for general positions derived in section 3.3, one should be able to compute the three-point function ⟨ΨJ_mΨ̄⟩. This will in turn give us information about the J_mΨ OPE, which is important for understanding the nature of the operators J_m. We expect them to include the energy-momentum tensor and the symmetry currents. The J_mΨ OPE should tell us what kind of symmetries, if any, are associated with the operators J_m.
5 Conclusion and Discussion
In this paper, we have pointed out that the Calogero model, considered as a quantum field theory in one space and one time dimension via the second quantisation, is a tractable yet nontrivial example of a z = 2 anisotropic scale invariant theory. We obtained the expression for the four-point function of the elementary fields in the special pairwise equal-time case, (3.13). The general four-point function can also be expressed, either in terms of a double convolution integral (3.19) or in terms of a generalised hypergeometric function (3.23).
We have obtained new insights into z = 2 theories by exploiting the exact expression of the four-point function. We decomposed it in two different ways (the "s-channel" and "t-channel" decompositions studied in sections 4.1.1 and 4.1.2), corresponding to two different ways of applying the OPE. In this way, we have verified the OPE associativity of the model in the case of this particular four-point function. The "t-channel" decomposition is asymptotic rather than convergent. The exponentially small corrections to the asymptotic series can also be interpreted using the OPE (section 4.1.3). The asymptotic nature is inherently connected to the presence of terms behaving as e^{-ax²/t} in Schrödinger invariant theories. This makes us suspect that the asymptotic nature of the "t-channel" decomposition and the interpretation of the exponentially small correction terms by the OPE are universal features of Schrödinger invariant theories, rather than being specific to our model.
Our analysis suggests the importance of equal-time observables (e.g., the pairwise equal-time four-point function). They have particularly simple forms and yet contain interesting dynamical information about the model, such as the scaling dimensions and the OPE coefficients, which depend on the coupling constant.
The "s-channel" decomposition turns out to involve only one primary operator (section 4.2). Thus we have obtained an analogue of the conformal block in isotropic theories. We may call it the "Schrödinger block". We have obtained a special case of the Schrödinger block (in two spacetime dimensions). It is special in that it is restricted to the four-point functions of operators with ∆ = 1 2 . The scaling dimension of the operator running in the intermediate channel can be controlled by tuning the coupling constant ν. We hope this result may serve as a building block in the bootstrap program of z = 2 Schrödinger invariant theories.
By taking a certain limit of the four-point function, we have computed a three-point function (section 4.3) and have found peculiar properties of correlation functions involving certain charge-zero operators (section 4.4).
The reason we are able to uncover these new features is that our model allows us to explicitly compute the four-point function. Previously obtained exact results for genuine interacting Schrödinger invariant field theories are restricted, to our knowledge, to computations of three-point functions and the associated OPEs. These include the exact computation of a three-point function [82] and of OPEs [45] for the fermion at unitarity (and for a related bosonic theory) in general space dimensions. For the computation of the OPE in systems with contact interactions in one space and one time dimension (which are generally not scale invariant), see, for example, [83] and references therein. For a review of computations of observables related to three-point correlation functions with Schrödinger symmetry in statistical models, see [84].
Four-point function in general position

The exact expression of the four-point function in general position is worth further investigation. Firstly, by studying a certain limit of the expression, we should get a better understanding of the "t-channel" OPE, and hence of the important charge-zero operators.
Secondly, the generalised hypergeometric function should obey certain connection formulae, analogous to those satisfied by the ordinary Gaussian hypergeometric functions. The connection formulae relate different expansions of the function, valid in the different limits one can take of its arguments. These different limits should correspond to the various ways of decomposing the four-point function by the OPE. Hence, the connection formulae should be a rather direct manifestation of the OPE associativity. A good example showing the relevance of the hypergeometric functions and their connection formulae in the conformal bootstrap program is the Liouville CFT: a four-point function in the Liouville CFT is written directly in terms of the Gaussian hypergeometric function of the cross-ratio, and a connection formula between the hypergeometric functions indeed represents the OPE associativity [85].
Finally, we have seen that Schrödinger invariant theories have an intricate structure: looked at from one perspective, they are described by functions analogous to the confluent hypergeometric function (which can be represented by an asymptotic series when its argument goes to infinity), and from another perspective, they are described by functions analogous to the hypergeometric function (which can be expanded everywhere, even including the point at infinity, and represented as a convergent series). On the one hand, the pairwise equal-time four-point function is given in terms of a modified Bessel function, which is a special case of the confluent hypergeometric function. Also, the three-point function obtained as a limit of the four-point function is written in terms of a confluent hypergeometric function. On the other hand, considered as a function of one of the Schrödinger invariant "cross-ratios" (3.26), τ = t_{21}t_{43}/(t_{31}t_{42}), the four-point function should have features analogous to the hypergeometric function, consistent with the SL(2,R) subgroup of the Schrödinger symmetry discussed in [54]. The expression of the four-point function via a generalised hypergeometric function should embody this mixed feature. Expressed as a multiple series in a certain set of combinations of the variables, valid in certain limits, the series should be of hypergeometric type. When another set of combinations of its variables is used, the series should have degenerate parameters, with properties closer to the confluent hypergeometric functions than to the hypergeometric functions.
Analogy to 2D CFT and the sine-Gordon model

The model we have considered in this paper, the Calogero model in the second-quantised formulation, has features analogous to the compactified free-boson CFT in two spacetime dimensions. Both the Calogero model and the compactified free-boson CFT are theories parametrised by a single parameter (the coupling constant and the compactification radius R, respectively). The scaling dimensions of the charged operators depend on that single parameter, e.g. the operator Φ (arising from the Ψ̄Ψ̄ OPE) in the Calogero model and e^{iX/R} in the compactified free-boson CFT, where X is the fundamental scalar field. That the Ψ̄Ψ̄ OPE involves only one primary operator Φ is reminiscent of the fact that the OPE of e^{iX/R} with e^{iX/R} involves only one primary operator, e^{2iX/R}. This analogy may be more than superficial: both the Calogero model and the compactified free-boson CFT can be embedded into the sine-Gordon model. As is well known, the IR limit of the sine-Gordon model (for a range of the coupling constant) is described by the compactified free-boson CFT. (See, for example, [86] and references therein.) On the other hand, one can first take the non-relativistic limit of the sine-Gordon model [87, 88] to obtain a model of two kinds of interacting non-relativistic particles (the solitons and the anti-solitons of the original sine-Gordon model). The pair potential between solitons (or anti-solitons) in this limit has the form ∼ 1/sinh²(r/r_0), and that between a soliton and an anti-soliton has the form ∼ -1/cosh²(r/r_0). By taking a further limit in which the length scale of the non-relativistic model vanishes, one finds that the solitons and anti-solitons decouple from each other, and that the interactions among each of them are described by the Calogero model. Thus, the Calogero model and the compactified free-boson CFT can be realised as different limits of the sine-Gordon model.
As is well known, it is possible to compute correlation functions of minimal model CFTs by applying a certain projection to the compactified free-boson CFT. (See, for example, chapter 9 of [90].) In particular, correlation functions of the critical two-dimensional Ising model can be calculated by taking the "square root" of the compactified free-boson CFT with a special coupling [91-99]. It may be possible to obtain correlation functions of various z = 2 Schrödinger invariant theories starting from the correlation functions of the Calogero model. In particular, correlation functions of the Glauber model [2], a model describing the dynamical critical behaviour of the Ising model, in one space and one time dimension at criticality, may be computed starting from the Calogero model. The Glauber model in d = 1 + 1 at criticality exhibits the z = 2 scale invariant behaviour. (See, for example, section 10.2 of [11].) The model is exactly solvable in the sense that its partition function can be computed [100] via the mapping to free fermions. This is analogous to the mapping of the two-dimensional Ising model to Majorana fermions [101]. For the Ising model, one can calculate the correlation functions by further rewriting the Majorana fermions as the "square root" of massless Dirac fermions, which in turn are equivalent to the compactified free-boson CFT (with a specific coupling constant) via bosonisation. One may be able to compute general correlation functions of the Glauber model in a similar manner using the Calogero model. We note that the two-point function of the fundamental spin operator of the Glauber model in 1+1 dimensions has been computed [2] and verified to have the form dictated by the Schrödinger symmetry at criticality [32]. Some correlation functions related to the three-point functions were computed, and it was found that there exists an operator with dimension ∆ = 3 (in addition to the fundamental spin operator with dimension ∆ = 1/2) [84, 102]. It is tempting to conjecture that the Calogero model with ν = 3/2, in which case the dimension of Φ becomes ∆ = 3/2 + ν = 3, is relevant for the Glauber model, just as the compactified free-boson CFT with a specific compactification radius is relevant for the two-dimensional Ising model.
Note that ν = 3/2 is one of the special "degenerate" cases of the Calogero model (ν = -1/2, 1/2, 3/2, ...) in which the asymptotic series associated with the "t-channel" decomposition truncates. The relation of the Calogero model to the sine-Gordon model (and to the system of particles interacting with a 1/cosh^2 r pair potential) may shed light on these special points and on the spectrum of zero-charge operators. As is well known, in the sine-Gordon model a soliton and an anti-soliton can form a bound state. The number of bound states takes the value n = 0, 1, 2, ..., depending on the parameter of the sine-Gordon theory. At the special values of the parameter where the number of bound states changes discontinuously, the reflection coefficients between a soliton and an anti-soliton vanish. These special values of the parameter are reminiscent of the special cases, ν = -1/2, 1/2, 3/2, ..., of the Calogero model where the asymptotic series truncates. We speculate that at these special points the "multiplicity" of the zero-charge operators also changes discontinuously.
Generalisations We computed the four-point function by reducing it to the two-particle problem. The integrability of the Calogero model means that one has a certain analytic control over the three- (or more) particle sector. Exploiting the integrability, therefore, it should be possible to calculate six-point functions of the fundamental fields (more precisely, the correlation functions with three Ψ's and three Ψ̄'s), and to study the ΨΦ OPE.
One can introduce a three-body interaction to the Calogero model without destroying the integrability in the three-particle problem [103,104]. Studying such a deformation would be interesting. The deformation will not affect the physics of the two-particle sector, and hence the results of our paper. However, the six-point functions of the fundamental fields and hence the ΨΦ OPE will be deformed.
Another interesting variant of the Calogero model is the so-called B_N-type Calogero model. See, for reviews, [105,106]. The model can be considered as the Calogero model put on a semi-infinite line with an appropriate boundary condition, which preserves the integrability of the model. We expect that the B_N-type Calogero model (around the true vacuum) will exhibit a z = 2 anisotropic surface critical behaviour and provide a nontrivial yet tractable example of the z = 2 analogue of a CFT with a boundary.
The integrability of the Calogero model allows one to compute the correlation functions around the finite-density vacuum. (See, for example, [71] and references therein.) The finite-density vacuum breaks the z = 2 scale invariance spontaneously. It would be interesting to study the finite-density correlation functions from the point of view of the broken z = 2 scale invariance and Schrödinger invariance. (For a review of spontaneous breaking of the Schrödinger symmetry, see [107].) The IR limit of the Calogero model at finite-density is described by a c = 1 CFT [108][109][110][111][112][113]. Thus the finite-density correlation functions of the Calogero model should interpolate between the z = 2 scale invariant correlation functions studied in this paper in the UV limit and the z = 1, c = 1 CFT in the IR limit.
The Calogero model is inherently related to a system of anyons, which is a z = 2 Schrödinger invariant model in one time and two space dimensions. (See, for example, [44] and references therein). In particular, the Calogero model is equivalent to a system of anyons restricted to the lowest Landau levels [110,[114][115][116][117][118][119][120][121]. It would be interesting to study the implications of our exact four-point function for the system of anyons.
In this paper we focused on the case with one space dimension. However, the Schrödinger symmetry exists for any number of space dimensions when non-relativistic particles interact with a pair potential of the form 1/r^2 [62]. We do not expect these models in general to be integrable in the conventional sense. However, since our analysis of the four-point function is associated only with the two-particle sector of the model, the computation of the four-point function in higher space dimensions appears feasible. It would be interesting to consider the properties of the OPE, including the OPE associativity, for this higher dimensional system with the Schrödinger symmetry.
Finally, finding an anisotropic scale invariant quantum field theory model with z = 2 but with exactly computable OPEs is an interesting open problem.
We hope that our analysis provides a starting point for a better understanding of fixed points of the renormalisation group for anisotropic theories, and for uncovering the rich structure of solvable models with z = 2 scale invariance.
A Schrödinger symmetry
We list here the nonzero commutators of the algebra associated with the Schrödinger symmetry. Its members are the time translation, the space translations, the angular momenta M_ij, a U(1) charge, the dilatation, and the spacelike and timelike "special conformal" generators. The nonzero commutators involving M_ij express the transformation properties of the generators under spatial rotations; the remaining non-vanishing commutation relations involve the dilatation, the translations, the boosts, the special conformal generator and the central U(1) charge.

B Propagator in 1/r^2 potential

In this appendix, we compute the propagator for the Hamiltonian (2.1), corresponding to a particle in an external 1/r^2 potential with r > 0. The boundary condition for r → 0 is Ψ ~ r^λ. We solve the Schrödinger equation for an energy eigenstate Ψ(r) with energy E = k^2 (k > 0). For r → +∞, Ψ(r) asymptotes to a linear combination of e^{±ikr}. A simple redefinition turns the eigenvalue problem into Bessel's equation [78]. We use the bra-ket notation for the eigenstates |k⟩ with k > 0. The normalisation constant N_k is fixed by the requirement that the |k⟩'s give a complete orthonormal basis (with the correct boundary condition). Using an integral formula [78, (10.22.67)], we see that the overlap ⟨k|k'⟩ vanishes for k ≠ k' and is IR divergent for k = k'. A natural IR cut-off α can be introduced, with the limit α → 0 taken in the end; evaluating the regulated overlap using (B.9) and (4.15) fixes N_k. Finally, performing the spectral sum gives the propagator in the 1/r^2 potential, which reduces to the free-particle propagator with the reduced mass m = 1/2 when the potential is switched off, as it should.
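For reference, the closed form that this derivation produces can be sketched explicitly. The following is a reconstruction under stated assumptions, namely that the coupling is parametrised as ν^2 - 1/4 (so that λ = ν + 1/2) and that the spectral sum is evaluated with the standard Hankel-transform integral:

H = -\partial_r^2 + \frac{\nu^2 - 1/4}{r^2}, \qquad \Psi_k(r) = \sqrt{kr}\, J_\nu(kr),

G(r', r; t) = \int_0^\infty dk\, e^{-k^2 t}\, \Psi_k(r')\, \Psi_k(r) = \frac{\sqrt{r r'}}{2t}\, \exp\!\left(-\frac{r^2 + r'^2}{4t}\right) I_\nu\!\left(\frac{r r'}{2t}\right).

For ν = 1/2, using I_{1/2}(z) = \sqrt{2/(\pi z)}\,\sinh z, this reduces to (4\pi t)^{-1/2}\left(e^{-(r-r')^2/(4t)} - e^{-(r+r')^2/(4t)}\right), the propagator of a free particle of mass m = 1/2 with a Dirichlet condition at r = 0.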
C Details of the computation of the three-point function ΨΨΦ
In this appendix, we supply the details of the computation of the three-point function ΨΨΦ in general position. We also give the comparison to the generic form of the three-point function derived by Henkel [32,33] and elaborate on the properties of the Schrödinger invariant quantity which we denote w.
C.1 Evaluation of the integral representation
We begin with the integral representation (4.74) of the three-point function. We evaluate this integral as follows. First, we extract the dependence of the integrand on x = x_34. Then, separating the contributions from x > 0 and x < 0, we arrive at (C.2).
C.2 Relabelling and properties of w
We will write the integrals in the last line of (C.2) using the parabolic cylinder functions [78, section 12], which in turn can be expressed using the confluent hypergeometric functions. Before doing so, we will relabel the x, t coordinates and check whether the result obtained is consistent with the general form [32,33] of three-point functions dictated by the Schrödinger symmetry.
To do this, we slightly modify our notation to bring the three-point function to a standard form, relabelling the x, t coordinates accordingly; the resulting integral converges since ν ≥ -1/2, (2.2). We recall that we chose t_3 > t_2 > t_1 and hence w ∈ R. It is worthwhile to discuss some properties of the "cross-ratio" w, which is invariant under the Schrödinger symmetry. For t_3 > t_2 > t_1 we have w^2 ≥ 0, and w^2 = 0 holds if and only if the spacetime points 1, 2, 3 are aligned on a straight line. The identity (C.7) is easy to show by direct computation. It is amusing to note that the quantity in (C.7) is, up to sign, twice the "area" of the triangle spanned by the spacetime points 1, 2, 3; it is completely anti-symmetric in the labels 1, 2, 3. It follows that w^2 is again completely anti-symmetric in the labels 1, 2, 3. In section 3.3, we used the notation v = w^2/2.
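To make these properties concrete, one explicit form consistent with all of them (complete anti-symmetry of w^2 in the labels, positivity for t_3 > t_2 > t_1, vanishing exactly on collinear configurations, and the interpretation in terms of twice the area) is the following sketch; the overall normalisation is an assumption, since it is not fixed by these properties alone:

w^2 = \frac{\left( x_1 t_{23} + x_2 t_{31} + x_3 t_{12} \right)^2}{t_{21}\, t_{31}\, t_{32}}, \qquad t_{ij} \equiv t_i - t_j .

The numerator is the square of twice the signed area of the triangle spanned by the three spacetime points, hence symmetric under relabelling, while the denominator changes sign under any transposition of the labels; their ratio is therefore completely anti-symmetric, as stated in the text.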
C.3 Comparison to the general form of three-point functions dictated by Schrödinger symmetry
The standard form of the three-point function in a Schrödinger invariant theory involves an arbitrary scaling function F_123, which generically is not fixed by the Schrödinger symmetry alone. The quantum numbers N_i (i = 1, 2, 3) are the charges of the operators O_i associated with the U(1) symmetry present in a theory with Schrödinger symmetry. They satisfy N_1 + N_2 + N_3 = 0, with N_1 > 0, N_2 < 0, N_3 < 0, and take the corresponding values for the three-point function studied here. To compare with the standard form, it is convenient to rewrite (C.5) using (C.8) and the parabolic cylinder function U(a, z). We thus obtain an expression whose w-dependence is

e^{-w^2/4} (U(ν + 1, -w) + U(ν + 1, w)). (C.12)
We observe that the last line is even in w. We rewrite the above formula in terms of the parabolic cylinder function u_1, in the notation of [78], which is even in w. We will then rewrite the formula in terms of the confluent hypergeometric functions. This will be useful to check against the result by Golkar and Son [52], and also to study simplifying limits, namely the free boson limit (ν = -1/2, appendix E.2) and the limit t_32 → 0, which we already computed above.
D Golkar and Son's analysis in Euclidean signature
Golkar and Son showed [52] that the form of the scaling function appearing in the three-point function of a Schrödinger invariant theory is severely restricted when the scaling dimension of one of the operators equals the special value ∆ = d/2, where d is the number of spacelike dimensions. The scaling function then satisfies (up to a simple prefactor) the confluent hypergeometric equation. Their analysis was done in Minkowski signature. Since how the analysis carries over to Euclidean signature is not entirely trivial, in this appendix we give the Euclidean version of the analysis of Golkar and Son. In this appendix d is arbitrary and we write x = (t, x).
The solution to the differential equation contains two arbitrary parameters. In [52] it was advocated that one of the parameters vanishes due to the regularity conditions of the OPE, which act as the boundary conditions of the differential equation. We give below a careful discussion of the regularity conditions, in particular for the case d = 1. We will see that for that case the regularity conditions are weaker and do not imply the vanishing of the parameter. We note that the notation used in [52] is slightly unusual: what is usually called (up to constant multiplication) the confluent hypergeometric function M(a, b, x) (in the notation of [78]) is there called "a generalised Laguerre polynomial" L_n^α(x), with n = -a and b = α + 1; the relation between the two notations is straightforward to verify. The function is not a polynomial unless n = -a is a non-negative integer. As shown in [52], the parameter a is related to the scaling dimensions of the operators (see (D.22)) and is not an integer in general.
D.2 OPE coefficients
We consider general constraints on the OPE coefficients imposed by the Schrödinger symmetry. We consider the OPE O_2 O_1 and focus on the part proportional to O_3, where the O_i are scalar primary operators with nonzero U(1) charges. We consider the special case ∆_3 = d/2.
We write down explicitly the first few descendants of O_3 appearing in the OPE, (D.14), where t > 0 is assumed. By taking the commutators of K_i and C with the LHS and the RHS, we obtain relations (D.15)-(D.17), where we write N_i ≡ N_{O_i} (i = 1, 2, 3). Generically, these equations express differential operators acting on C_0 to give C_1, C_2, .... For the special case ∆_3 = d/2, (D.15)-(D.17) imply a differential equation (D.18) for the coefficient C_0. Scale and SO(d) invariance require C_0 to take a scaling form (D.19). Substituting this into (D.18) and using N_3 = N_2 + N_1, we obtain an ordinary differential equation in y = x^2/t, (D.20), whose leading terms read 4y d^2f/dy^2 + 2d df/dy + 2(N_1 - N_2) y df/dy + .... This differential equation becomes the confluent hypergeometric equation after a further change of variables. We assume, for simplicity, that a is not a negative integer. (This can always be arranged, for example, by replacing (O_1, O_2) with (Ō_2, Ō_1) so that ∆_1 > ∆_2.) The standard confluent hypergeometric functions M(a, b, z) and U(a, b, z) (in the notation of [78]) are then linearly independent. Hence any solution can be written as v = A M(a, b, z) + B U(a, b, z), (D.25), where A and B are constants.
In [52], it was advocated that appropriate regularity conditions on the OPE coefficient imply B = 0, (D.26); we will see, however, that for d = 1 the coefficients A and B are not constrained. (If we require further that the OPE be non-vanishing, then we get A = 0.) Let us next examine the behaviour at z → +∞, which corresponds to t → 0+ with fixed x, i.e. to the limit of the equal-time OPE. We shall see that the OPE coefficient C_0 in this limit either diverges or goes to zero. This is not surprising: consider, in the free-field theory, the part of the OPE Ψ(t, x)Ψ̄Ψ(0, 0) proportional to Ψ(0, 0). The OPE coefficient is essentially the two-point function and is singular in the limit t → 0+ with fixed x. The general argument goes as follows. In the limit z → ∞, we can use the standard asymptotics of M and U. Hence, if N_2 > 0, C_0 diverges, and if N_2 < 0, C_0 goes to zero. (We only write the leading exponential behaviour.) For the B-type solution, we have (D.34), using ∆_3 = d/2. Again, for N_1 ≠ 0, C_0 either goes to 0 or diverges. Hence, for operators with nonzero charges and ∆_3 = d/2, the equal-time OPE either diverges or vanishes.
To summarise this subsection: the limit x^2/t → +∞ (the equal-time OPE) is singular (the OPE coefficient either diverging or vanishing) and does not give constraints on the coefficients A, B. If we require regularity of the limit x^2/t → 0 (the equal-space OPE), we obtain B = 0 for d = 2, 3, ..., but no constraints for d = 1.
D.4 Three-point function
We consider the general form of the three-point function of primary operators [32], with t_3 > t_2 > t_1, where w is the Schrödinger invariant spacetime cross-ratio defined by (C.8). To be specific, we consider the case relevant to our computation. As a check, for the free-field theory the four-point function evaluates to

K(x_4, x_1; t) K(x_3, x_2; t) + K(x_4, x_2; t) K(x_3, x_1; t),

where K(x', x; t) is the free propagator (3.20). This is the expected result for free bosons. Here Φ = √π Ψ^2 for the free-field theory; the normalisation is fixed by the two-point function (4.9). This reproduces the free theory result.
Analogue and Mixed-Signal Production Test Speed-Up by Means of Fault List Compression
Accurate test effectiveness estimation for analogue and mixed-signal Systems on a Chip (SoCs) is currently prohibitive in the design environment. One of the factors that skyrockets fault simulation costs is the number of structural faults which need to be simulated at circuit level. The purpose of this paper is to propose a novel fault list compression technique by defining a stratified fault list, built from a set of "representative" faults, one per stratum. Criteria to partition the fault list into strata, and to identify representative faults, are presented and discussed. A fault representativeness metric is proposed, based on an error probability. The proposed methodology allows different tradeoffs between fault list compression and fault representation accuracy. These tradeoffs may be optimized for each test preparation phase. The fault representativeness vs. fault list compression tradeoff is evaluated with an industrial case study, a DC-DC switched buck converter. Although the methodology is presented in this paper using a very simple fault model, it may be easily extended to be used with more elaborate fault models. The proposed technique is a significant contribution to make mixed-signal fault simulation cost-effective as part of the production test preparation.
Introduction
Digital testing is a mature field, where many efficient methodologies and tools exist for production test preparation. However, in the analog and mixed-signal (AMS) testing domain, although reasonable approaches have been developed, they typically can only be applied to very simple case study circuits. Computational costs are still prohibitive for the real SoCs produced nowadays. Hence, there is a huge gap between what is published by research groups and what industry really needs in terms of AMS testing. The problem can be stated as follows: how can we demonstrate to a customer that the proposed test of his or her AMS IP core verifies it and ensures a zero defect level product? The trend of increasing complexity, performance and speed of AMS devices poses significant challenges for test from a yield and cost standpoint [1]. AMS circuits account for 70% of SoC test cost and 45% of test development time, even though AMS functionality makes up a small fraction of the chip complexity [2]. The reasons for this situation may be summarized as follows: 1) there are no practical (structurally oriented) fault models; consequently, 2) there are no automatic test pattern generators for AMS circuits; and 3) Design for Test (DFT) and Built-In Self-Test (BIST) solutions for AMS devices are still purely custom [2]. While trying to understand the root causes, we find that AMS fault simulation costs for structural faults are still very high, and a methodology for test stimuli generation (TPG) that uncovers structural faults is not yet defined. Thus, there is a need to lower AMS test costs. Moreover, an additional challenge exists: AMS test is mainly functional test, not structural test. Hence, it is very difficult to accurately prove to a customer the effectiveness of the production test, based on the functional test. Many approaches have been proposed to overcome these challenges. Supply current monitoring for a quiescent state is widely used for digital circuit testing (IDDQ testing) [3]. The obvious advantage of IDDQ testing is that it does not require access to inner nodes of the structure. However, analog circuit IDDQ testing is significantly more difficult [3]. Moreover, the diagnostic resolution of IDDQ testing is poor. IDDQ is found suitable only for catastrophic faults, as the power supply current may be distinguishable only when the fault causes a change of current larger than the expected parametric scattering [4].
As stated, AMS test is mainly functional test. Analog circuits have traditionally been tested for critical specifications like AC gain over a range of frequencies, common-mode rejection ratio or signal-to-noise ratio, due to the lack of simple structural fault models [5]. However, the costly (in time and resources) nature of AMS test preparation has motivated research into structural testing for analog circuits [6]. Nevertheless, no widely accepted fault models exist in the analog domain. Moreover, there is no automated way of minimizing the number of faults required to uncover likely physical defects in the structure of the entire AMS circuit [2]. Thus, no proven alternative to functional-oriented analog testing exists, and more research in this area is needed [7]. An efficient strategy to test complex circuits is needed, and the most promising approach is the Holy Grail of AMS: structural test [2]. Still, some difficulties arise: 1) the AMS cell definition, in a way similar to the logic gate definition in the digital test environment; and 2) a validation methodology to prove that a functional-oriented AMS test can also uncover structural faults. Therefore, there is a real need to develop a methodology for AMS SoC testing, providing a methodology and tools for test preparation in the design phase.
In [8] a computer-aided test platform was developed to evaluate test techniques for analogue and mixed-signal circuits. However, its operation is based on a statistical circuit performance analysis that accounts for process deviations. This is a good approach in the sense that it is useful to compare and improve different test techniques, by calculating analogue test metrics under process deviations; however, the lengthy Monte Carlo simulation process makes it less appealing. In [1] the authors developed a mixed-mode fault simulation approach for analog fault coverage analysis at transistor level, examining the faulty behavior in a structural way, along with the use of Hardware Description Language (HDL) models to perform the simulation in a manageable time. This approach is capable of uncovering structural faults, but the fact that it uses HDL models in some parts of the circuit reduces test accuracy, which may not be tolerable in applications such as automotive.
In [9] a statistical approach is proposed for analog circuits under process variations. A hierarchical variability analysis for analog testing is performed, where both parametric and catastrophic faults are studied. A detectability metric, based on the statistical distribution of each specification, is used to determine the best measurement sets that detect each fault. The circuit fault coverage (FC) is computed by summing all the detectabilities weighted by the occurrence probability of each fault. The test selection algorithm used results in appreciable reductions in test time. In [10] the authors present an adaptive test strategy that adjusts the test sequence in order to cope with the properties of each individual instance of a circuit. It uses an initial test sequence ordering in order to boost the performance of the adaptive test elimination method. Test dropping is obtained by adapting the likelihoods of the unmeasured specifications passing the test, using the ongoing measurement results and a correlation matrix. This adaptive technique uses fewer resources with well-performing circuits, by passing them quickly, whereas more test time is used to test marginal devices (which do not fall near the nominal of the distribution space). An improvement in test quality is obtained, for the same test time, with respect to other techniques.
In [11] the authors used a current measurement technique in order to improve the circuit fault coverage. This improvement is achieved by adding current measurements to the original specification-based test program already present in the Defect Oriented Testing (DOT). An adaptive test strategy for the same DOT technique is presented in [12]. The test is prepared using a fast analog simulation algorithm, Fault Sensitivity Analysis (FSA), that obtains the detection status of catastrophic faults in a manageable time. The proposed framework guides an optimized test selection, using a greedy algorithm that eliminates redundant functional tests. This technique showed a defect simulation speed-up between 100 and 500 times for a specific design, when compared with the estimated standard simulation time. Moreover, by using tighter specification limits, the defect coverage was improved. In [13] the authors explored the concept of fault equivalence in AMS circuits, substantially reducing the number of structural faults that needed to be simulated. However, fault "equivalence" can be defined only in the particular case when an accurate fault representation (one fault representing others) is possible; a more general representation concept can be defined that allows one to truly explore the tradeoff between accuracy of representation and fault list compression.
In this paper, a completely different methodology is proposed, as a first step to overcome the limitations pointed out above. We suggest a novel fault list compression technique for AMS cells, allowing cost-effective AMS low-level fault simulation. The proposed approach uses structural test along with stratified fault grouping, leading to the identification of a reduced set of "representative" faults. The estimated value for fault coverage may then be used to evaluate the test quality and to appropriately drive DFT efforts [14]. This technique was introduced in [15], but it was limited to single-output cells (voltage signals). We now extend its applicability to multiple-output cells (voltage and/or current signals). An improvement in the selection of the test stimuli is also proposed, raising the fault compression rate for the same accuracy. The compressed structural fault list will be used to ascertain whether (or not) the production test is able to detect the structural faults (described at transistor level); if not, the designer may complement the production test set.
The paper is organized as follows: in Section 2, the new fault list compression technique is presented; an industrial DC-DC converter is studied in Section 3, where the fault compression technique is applied to its constituent blocks; Section 4 shows a fault representativeness evaluation for all the blocks in the DC-DC converter that contain cells identified by our tool, as well as an improvement in the test quality of the DC-DC converter, which contains two instances of a differential pair cell (DIFFpair); in Section 5 some conclusions are drawn.
Fault List Compression Technique
For digital circuits, structural testing has provided cost-efficient solutions that lead to high defect coverage without functional test [8]. However, AMS testing significantly differs from digital testing. The major difference comes from the need to consider continuous signals and parametric deviations, in addition to just catastrophic faults (opens and shorts) [16]. In digital test, the impact of all internal physical defects at gate level is modeled by Boolean faults at input/output (I/O) nodes, e.g., using the single Line Stuck-At (LSA0, LSA1) fault model. Fault equivalence and collapsing are used to reduce the fault list. As only two voltage levels are relevant (V_DD, V_SS), gate-level fault simulation can be performed. When a test vector activates a fault, a complementary, erroneous Boolean value occurs at the cell's I/O.
In analog test, analog cells can likewise be identified, as they are extensively used (e.g., a differential pair, a common source amplifier, etc.). As a first step towards the search for representative faults, the analog cells, with basic functionality and appropriate granularity, must be identified.
It is a well-known fact that non-catastrophic faults can also impact the performance and reliability of digital circuits. As a consequence, the traditional line stuck-at fault model, used for the generation of a structural and static test, needs to be complemented with a performance evaluation test, which aims at the detection of dynamic faults. Accordingly, in AMS circuits, a structural test that aims at the detection of catastrophic/hard faults can be complemented by functional tests that evaluate the required performance. The proposed methodology aims at the detection of hard faults and comprises five main steps, which are explained below. Next, the methodology is applied to a DC-DC converter case study.
Analog Cell Definition
An initial question that arises when defining the analog cell boundaries refers to current mirror transistors. Shall these MOSFETs be included in a current mirror cell, or individually in each cell that uses a transistor as a current source? A first analysis of a two-stage comparator, with one current mirror transistor biasing the differential pair and another transistor of the same current mirror biasing the following common source stage, shows that a masking effect occurs for some faults that affect the current in both current mirror transistors. This masking effect has a parallel in the well-known difficulty found in the detection of faults in reconvergent fanout paths in digital circuits. Therefore, in the proposed technique, current mirror transistors are embedded in the analog cells they serve, and no current mirror is defined as a primitive cell, except if it has a unique output current (when no masking effect is possible).
In digital circuits, fault representativeness is obtained through fault equivalence and collapsing, analyzed at gate level. The granularity of a digital gate corresponds to the minimum functional block that can be analyzed at logic level with defined output values for all the possible input combinations. Faults are equivalent if, for all the possible input combinations, their impact on the output is the same.
Similarly, fault representativeness in analog circuits can be searched for in the minimum-size cells where functionality can be identified. Faults are grouped taking into account the similarity of their impact on the functionality when considering exhaustive input stimuli.
Structural Fault Model Selection
Faults should be defined in such a way that their detection allows the identification of permanent physical structure damage due to a manufacturing defect (e.g., an open via). The structural test process will thus check the physical integrity of the manufactured IP core. In digital test, the LSA fault model, used to ensure the controllability and observability of the whole structure, is based on a static analysis. Parametric faults, not being properly modeled by the LSA model, need alternative test methods (like I_DDQ or dynamic tests). Similarly, the proposed fault model aims at analyzing the structure integrity; therefore, it is also based on catastrophic/hard faults. As in the digital test domain, the detection of some parametric faults is only possible using complementary test methods (e.g., functional test). In CMOS technology, cell branches are mainly composed of MOSFETs. Hence, the selected structural fault model is transistor stuck-on (TSON), modeled by a zero parallel resistance, and transistor stuck-off (TSOFF), modeled by a 10^12 Ω series resistance. For an analogue cell with t MOSFETs, the fault list will contain f = 2t faults.
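As an illustration of this fault model, a minimal Python sketch of fault list generation is given below. The data structure and the instance names are hypothetical placeholders, not the conventions of the tool described later in this paper.

from dataclasses import dataclass

@dataclass(frozen=True)
class Fault:
    transistor: str  # MOSFET instance name (hypothetical naming)
    kind: str        # "TSON" (stuck-on) or "TSOFF" (stuck-off)

def build_fault_list(transistors):
    """Return the f = 2t structural faults of a cell with t MOSFETs."""
    faults = []
    for m in transistors:
        faults.append(Fault(m, "TSON"))   # modeled by a ~0 Ohm parallel resistance
        faults.append(Fault(m, "TSOFF"))  # modeled by a 1e12 Ohm series resistance
    return faults

# A differential pair cell with t = 7 transistors yields 14 faults.
assert len(build_fault_list([f"xt{i}" for i in range(1, 8)])) == 14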
Cell Fault Simulation (FS)
In order to evaluate the cell functionality in the presence of each fault, a set of stimuli capable of fully exercising the cell must be defined. Two approaches were evaluated for the stimulus definition:
• Operation-aware test: for the fault-free (FF) circuit and for all the faulty circuits (in each one, a single fault F_i is injected), all combinations of MOSFET regions of operation are identified (off, sub-threshold, triode, saturation). For each such combination, a test stimulus is generated. The test set will comprise s test stimuli. Some of the stimuli impose values that are unrealistic in normal cell operation. For example, the DIFFpair cell, with a differential input voltage range [−2, 2] V, requires s = 87 stimuli.
• Sweep test: equally spaced test vectors, in the input range interval used in the cell's normal functionality, are used. For the same DIFFpair cell, s = 33 stimuli are used for the input range [−340, 340] mV.
This second approach proved to lead to the identification of better representative faults, by better exercising the cell functionality, as will be shown in the next sections.
Once the test set for cell fault simulation is defined, the values of the output variable(s) under observation (e.g., v_O(t)) for each test stimulus and each simulation run (associated with the FF circuit and with each single fault injection) are stored. Those values are combined to obtain several points, p_i, in R^s, each point corresponding to one circuit response obtained during a simulation run; each response thus collects the v_Oi(t) values over the s stimuli.
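A short sketch of this bookkeeping step is shown below. The simulator call is a toy placeholder standing in for the circuit-level simulator, so that the sketch executes end to end, and the fault labels are illustrative.

import numpy as np

def run_simulation(fault, stimulus):
    # Toy placeholder for the circuit-level simulator call; fault=None
    # means the fault-free (FF) circuit.
    gain = 0.5 if fault is None else 0.5 + 0.01 * (hash(fault) % 10)
    return gain * stimulus

def collect_responses(faults, stimuli):
    """Build one point p_i in R^s per simulation run: the FF circuit
    first, then one run per injected fault."""
    runs = [None] + list(faults)
    return np.array([[run_simulation(f, v) for v in stimuli] for f in runs])

fault_labels = [f"xt{i}_{k}" for i in range(1, 8) for k in ("TSON", "TSOFF")]
responses = collect_responses(fault_labels, np.linspace(-0.34, 0.34, 33))
# responses.shape == (15, 33): the FF row plus one row per fault (Sweep test)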
Fault Stratification
With the data collected in the cell FS step, we need criteria to cluster the structural faults in strata, according to their resemblance in terms of similar responses. The proposed criterion is based on the computation of the distances between the faulty and FF circuit responses. An optimization algorithm is used to obtain the minimum of the sum of distances between every pair of representative and represented fault responses. These distances have been computed for different definitions (Euclidean distance, Manhattan distance, etc.). Attending to the definition of the Minkowski distance of order p, the Manhattan and the Euclidean distances correspond to the first and second orders, respectively. The Manhattan distance has been selected as the one leading to more accurate results. The number of strata, L, is used to define the trade-off between accuracy and FS effort. For L = 2t, each stratum contains only one fault, i.e., there will be no fault list compression. For L < 2t, one fault per stratum is selected to represent the other faults. Each stratum will thus contain one Representative Fault (RF) and a number of represented faults (which can even be zero). One of the strata will cluster faults whose responses are very close to the FF response; hence, there is no need to fault simulate this stratum. Therefore, the number of faults in the reduced fault set will be L − 1, and the compression rate (CR) is CR = 1 − (L − 1)/f. Ultimately, the number of fault strata, L, is user-defined, allowing one to choose the desired accuracy for each test preparation phase. Criteria for this choice are provided by the proposed methodology. Increasing L increases fault representativeness; however, it decreases the compression rate. Figure 1 shows a stratification example where the faults that originate responses p_5, p_6 and p_10 are the RFs; the other faults are the represented faults, this combination of fault results being the one that minimizes (for L = 3) the sum of the distances, d.
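The selection criterion can be sketched as follows: for a cell-sized fault list, the L − 1 representatives minimising the total Manhattan distance between represented and representative responses can be found by exhaustive search. This is an illustrative reconstruction of the criterion described above, not the actual optimization algorithm used by the tool.

import itertools
import numpy as np

def stratify(responses, L):
    """Pick L-1 representative faults minimising the summed Manhattan
    distance from every fault response to its nearest stratum centre.
    Row 0 of `responses` is the fault-free response; its stratum gathers
    the faults that need no fault simulation."""
    ff, faulty = responses[0], responses[1:]
    best_cost, best_reps = np.inf, None
    for reps in itertools.combinations(range(len(faulty)), L - 1):
        centres = np.vstack([ff, faulty[list(reps)]])
        # Manhattan distance of every fault to every candidate centre
        d = np.abs(faulty[:, None, :] - centres[None, :, :]).sum(axis=2)
        cost = d.min(axis=1).sum()
        if cost < best_cost:
            best_cost, best_reps = cost, reps
    return best_reps, best_cost

# For the 14-fault DIFFpair cell and L = 3, this searches C(14, 2) = 91 subsets.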
Error Evaluation
One degree of freedom is L, the number of strata that is chosen; it may vary between one and the total number of faults (L = 1, 2, ..., f). This section explains how the error probability achieved by using L strata is computed. By selecting L < 2t, fault representation will occur with an error, whose probability must be evaluated. When comparing the FS results, if the detection (or non-detection, ND) status of a represented fault coincides with that of its representative fault, no error occurs. However, if they differ (e.g., the RF is detected, but the represented one is ND), the representation process introduced an error, which is the price paid to constrain the FS cost. Since the use of RFs may introduce errors, more important than knowing the probability of one error is quantifying the probability of occurrence of n or more errors. In the proposed methodology, a metric for error evaluation is introduced, as follows. Definition: E_n is the probability of occurrence of n or more errors. E_n is computed as a function of L in order to quantify the error probability for different compression ratios.
To obtain the data to calculate the error probability, a set of test stimuli for error estimation must again be generated, and the corresponding simulations must be performed. The number of test stimuli is similar to s, the number of test stimuli used in the analog cell FS. To compute this error probability, we consider that the circuit connected at the output of the analog cell under analysis has an unknown threshold value that is used to decide whether each fault is detected or not. Therefore, for each applied stimulus, there is an error if the output in the presence of the representative fault is higher/lower than the threshold value whereas the output in the presence of the represented fault is lower/higher than the threshold value. To quantify the error probability obtained by representing the 2t faults by L − 1 RFs, the following method is proposed. First, the output value range is partitioned into k equal intervals that we refer to as bins; hence, a total of k + 1 possible output value levels are obtained (Figure 2). Then, for each stimulus, the corresponding output values for one pair of (representative and represented) faults are approximated to the nearest levels.
Finally, there is an error if the threshold value lies between those two levels. In this situation one fault is regarded as detected and the other one as not detected; thus, an error occurs. In the example shown in Figure 2, bins 2 and 3 contain one error each, since the RF output is approximated to level 3 and the represented fault output is approximated to level 1. It means that there may be an error if the unknown threshold value falls in bin 2 or 3; therefore, the number of errors is incremented on those bins. This evaluation is performed for every stimulus applied to the circuit in the presence of every pair of representative and corresponding represented faults, and the errors obtained for the corresponding bins are summed. That is, the number of times each stimulus causes an error on each bin is counted, and a table is created with this information. Table 1 is an example of such a table, obtained using 4 bins and 5 input stimuli.
A second table is built by finding the entries in the previous table where the number of errors is higher than or equal to n, for each stimulus and bin. Table 2 shows the entries where the number of errors is larger than or equal to n = 2, which are set to 1; the other entries are set to 0.
Taking the data of the second table, we sum the number of cases where the number of errors is higher than or equal to n. Dividing that value by the number of stimuli, s, multiplied by the number of bins, k, we get the error probability, E_n = (number of (stimulus, bin) cases with n or more errors)/(s · k), obtained for a specific number of strata, L. A value of L is user-defined, taking into account the fault list compression rate and the error probability, computed for each number of strata. In the analyzed case study, a value of k = 300 was used. This number of bins was found to be appropriate for the analysis, since smaller values lead to substantially different error probabilities and above this value the results do not change significantly. Performing this analysis for each integer value of L, the above-mentioned probability is computed. Results show that even with low L values it is possible to obtain low probability values of 2 or more error occurrences, which shows that a significant fault list compression is possible without a prohibitive sacrifice of FS accuracy.
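A sketch of this counting procedure follows; it assumes the per-stimulus outputs of each (representative, represented) pair have already been simulated, and the function and variable names are illustrative.

import numpy as np

def error_probability(pairs, out_min, out_max, k, n):
    """E_n: probability of n or more representation errors, for k bins.
    `pairs` is a list of (rep_out, repd_out) arrays of length s, one
    entry per (representative, represented) fault pair."""
    s = len(pairs[0][0])
    levels = np.linspace(out_min, out_max, k + 1)  # k bins -> k + 1 levels
    errors = np.zeros((s, k), dtype=int)           # error count per (stimulus, bin)
    for rep_out, repd_out in pairs:
        for j in range(s):
            lo, hi = sorted((np.argmin(np.abs(levels - rep_out[j])),
                             np.argmin(np.abs(levels - repd_out[j]))))
            errors[j, lo:hi] += 1  # a threshold in these bins separates the pair
    return (errors >= n).sum() / (s * k)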
DC-DC Converter Case Study
This research was driven by the need to reduce the fault simulation effort in a real DC-DC converter industrial design. Therefore, priority was given to the identification of the analog cells present in this case study. The DC-DC converter, implemented in a Chartered 65 nm CMOS technology, is composed of many blocks, which are studied in this section. It contains two comparators, each using one instance of a differential pair cell. The differential pair cell is one of the cells detailed in this section in order to illustrate the proposed methodology, while considering voltages as the input and output variables.
Another relevant block is a current generator that contains several cells. It is studied to show how to apply the methodology when using multiple outputs and currents as output variables. A table containing the statistics for the main blocks of the DC-DC converter is shown in the next section.
Differential Pair Cell Fault Stratification
The differential pair cell is composed of 7 transistors and is shown in Figure 3. For this cell, the exhaustive test used for stratified fault grouping can be obtained by exciting the inputs (V_IP, V_IN) with stimuli that drive the differential pair either into each possible combination of transistor states (Operation-aware test) or with equally spaced test vectors (Sweep test), as stated before. The testbench used for the search for potentially representative faults in the differential pair uses a common-mode voltage source (CM) of 1 V and two signal sources. The differential pair is supplied with V_DD = 2 V, and different stimuli (V_ID = V_IP − V_IN) are applied at the inputs. For the Operation-aware test, fault-free and faulty simulation results for the 14 faults (7 transistor stuck-on, 7 stuck-off) are shown in Table 3, where only 5 of the total stimuli are displayed.
Applying the method described in Subsection 2.4 to the differential pair cell and using the data partially presented in Table 3, distinct numbers of strata, L, were used. For every value of L in the set {1, ..., f}, the probability of occurrence of n or more errors was computed; it is presented in Figure 5, obtained for the set of 33 input stimuli that drive the MOSFET transistors to all possible operating zones (Operation-aware test), but taking the results given by stimuli in the range from −335 to 335 mV. The applied input voltage range was found appropriate to fully exercise the differential pair functionality, since at each limit the drained current shifts almost completely from one branch of the differential pair to the other. Each curve corresponds to one value of n. The curve that displays the highest probabilities corresponds to n = 1 (the probability of occurrence of one or more errors for each value of L). As expected, the error probability decreases when the number of strata and RFs increases, allowing the tradeoff between accuracy and FS effort to be exploited. For example, by using L = 3 (and simulating only L − 1 = 2 RFs) the probability of occurrence of more than n = 2 errors in this cell is reduced to almost zero. Figure 6 shows the error probability obtained using the Operation-aware test but for the full input range [−2, 2] V. The results show higher probabilities than the ones obtained with a smaller range. This may be explained by the fact that the strata obtained in this case are not optimized for the operating zone of the differential pair (normally small differential input voltages). In the DC-DC converter under study, the differential pairs are excited with differential input voltages not greater than 300 mV. So, for this cell it is appropriate to use a small range for the input voltage, in order to select fault representatives that minimize the error probability.
Figure 4. Stimulus and output voltage.
Temperature, power supply voltage and process variations were applied to the DIFFpair cell in the fault stratification and error evaluation steps (Sections 2.4 and 2.5). The clusters obtained were the same as the ones obtained without variations, and only small variations were observed in the error probabilities for this cell. A more elaborate study is needed in order to get an accurate measurement of these dependencies, which is not within the scope of this paper. Nevertheless, the results show a limited impact of temperature, power supply voltage and process variations on the error probability. A similar result was obtained using the same procedure, but now with the Sweep test, applying stimuli obtained by dividing the differential pair input voltage range in 100 equal steps and using only the stimuli from −340 to 340 mV; a total of 17 stimuli were used in this case. Table 4 shows the clusters of representative and represented faults for 3 strata (L = 3), where the RFs are bold typed. Fault xt30-TSOFF is one that drives the cell output node to a high-impedance state. Thus, this fault was removed from the fault representation analysis, since the resulting unpredictable behavior cannot be represented by any other fault's results; hence it is not present in the above-mentioned table.
Current Generator Cells Fault Stratification
The current generator is a block that contains one BiasLink cell, two CascodeBias cells and one IbiasSet cell, as shown in Figure 7. The analysis of the cells present in this block is done by applying variations to the nominal input current variable, I_BIAS, performing a Sweep test over a range of ±30% around the nominal value. The output variables are the currents at the output nodes (I_OUTiP and I_OUTiN), for a total of 19 outputs. In this case the stratification step was performed using the k-means algorithm, as the number of output variables is quite large. The error probability values computed for each output, obtained in the manner described in Subsection 2.5, are combined to obtain the final error probability as P = 1 − ∏_i (1 − P_i), where the P_i are the error probabilities obtained for the individual outputs.
As an illustration, Figure 8 shows the error probabilities obtained for the IbiasSet cell while considering only 5 output variables. The Sweep test was applied over a range of ±30% around the I_BIAS nominal value, for a total of 20 input stimuli (s = 20). The results obtained for the total number of outputs of this block of the DC-DC converter are presented in the following section, where it was possible to obtain an optimal number of strata L = 39, for a total of 76 faults and 19 outputs. The results obtained for the BiasLink cell and for the CascodeBias cells are also presented in the next section.
DC-DC Converter Fault Representativeness
The evaluation of the proposed fault representation technique is mandatory in order to ascertain whether the exercised functionality and the accepted output difference lead to clusters of faults that are not detected, or simultaneously detected, when real circuits are submitted to production test. The testability analysis of a low-complexity DC-DC converter industrial design, whose simplified block diagram is shown in Figure 9, requires circuit-level simulation of 1608 × 2 faults, taking 33 days! The proposed technique aims at reducing the computational effort with limited impact on the accuracy of the obtained testability measure. To ascertain this, the fault simulation dictionary of the complete fault list was used, followed by a fault representativeness evaluation scenario: for each individual fault in a stratum, the coincidence in the test strobe, that is, the coincidence in the observation instant, is evaluated. The analog simulator used at Silicongate is hspice. Hence, the hspice netlist was used for cell recognition and fault injection, adding resistors where faults need to be injected. Analog fault simulation is carried out also with the hspice simulator. Fault injection is done by assigning an appropriate value to each resistor using .alter sections. Measures at specified simulation instants are the test strobes that allow fault dropping. Additionally, a timeout for the simulation of each fault is also used in order to limit the computational effort. This fault simulation method was used to simulate all the faults in the DC-DC converter, which already included Test Point Insertion (TPI) as a design-for-testability solution. The test stimuli and measures used in the fault simulation process are the same as Silicongate provides in the product's Test Integration Guidelines as being appropriate for production test. Different test modes are used, changing the DC-DC normal control actions. Each test mode drives the DC-DC into a state that allows the evaluation of specific static and/or dynamic behavior of selected nodes, made observable through an embedded analog multiplexer used for the TPI. Figure 9 shows the output signal used for test purposes (anatestbus), provided by the Analog Multiplexer block.
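The fault-injection flow can be sketched with a short Python helper that emits one hspice .alter section per fault, changing the value of a pre-inserted resistor. hspice's .alter mechanism is real, but the resistor naming scheme, node names and values below are illustrative assumptions, not the exact netlist conventions used here.

def emit_alter_sections(faults):
    """Emit one .alter section per structural fault, assuming the netlist
    already contains, for every MOSFET <m>, two injection resistors
    (hypothetical naming, to be adapted to the real netlist):
      Rds_<m>  between drain and source, nominally 1e12 Ohm (inactive short)
      Rser_<m> in series with the drain, nominally 1m Ohm (inactive open)
    faults: iterable of (transistor_name, kind), kind in {"TSON", "TSOFF"}."""
    lines = []
    for name, kind in faults:
        lines.append(f".alter {name}_{kind}")
        if kind == "TSON":
            # activate the drain-source short: ~0 Ohm parallel resistance
            lines.append(f"Rds_{name} {name}_d {name}_s 1m")
        else:
            # activate the open: 1e12 Ohm series resistance
            lines.append(f"Rser_{name} {name}_d {name}_d_int 1e12")
    return "\n".join(lines)

print(emit_alter_sections([("xt1", "TSON"), ("xt1", "TSOFF")]))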
The testbench used in the design of all production test modes was used for the fault simulation of all the faults in the DC-DC. The fault simulation of 1608 × 2 faults during 33 days allowed the creation of a partial fault dictionary (only the first detection was registered). With the aim of being able to reduce the fault list size in the future, taking advantage of the proposed representative faults, a software tool was designed to recognize the basic cells. This cell recognition tool allowed the identification of the number of topologies present in the DC-DC, listed in Table 5.
It can be concluded that a significant number of transistors is identified, with around 1/8 remaining unidentified. The tool has a graphical interface that details the statistics presented in Table 5 for each module of the circuit hierarchy. This interface helps identify the modules where more unidentified transistors remain.
Differential Pair Fault Representativeness Evaluation
Fault simulation was carried out by injecting only TSON and TSOFF faults in the two differential pairs of the DC-DC converter, either by injecting all the 28 listed faults (full fault list simulation), or by injecting only the 4 representative faults. Table 6 contains the results of the analysis of the exact measure that detects each fault in each stratum in each differential pair (D.P. 1 and 2). The groups of representative and represented faults are the ones previously presented in Table 4, obtained for a number of representative faults given by L = 3. The group of faults present in Stratum 1 (undetectable faults) was not detected (ND) on either differential pair. In Stratum 2 every fault was detected by Measures 13 and 1 on differential pairs 1 and 2, respectively. Stratum 3 contains faults that were not detected on D.P. 1 and were all detected by Measure 5 on D.P. 2. This evaluation confirms that, for two instances of the differential pair cell, all the faults are well represented by their representatives. Choosing this specific number of strata (L = 3), faults in each group were detected by a unique measure or were not detected. The fault stratification procedure was carried out for every possible number of representatives (L) in the DIFFpair cell, and an evaluation was performed for both instances of this cell present in the DC-DC converter. Results are shown in Figure 10, where the used input stimuli correspond to the test associated with the referred "Measure". In this figure, R represents the ratio of well-represented faults in a stratum, that is, faults that lead to the same simulation result as the corresponding RF, i.e., when both the RF and the represented fault are simultaneously ND, or detected by the same Measure. The ratios of well-represented faults, as a function of the number of strata, L, are shown for D.P. 1 and 2. The data in Figure 10 result from similar stratification of the faults using the same procedure, but two different sets of input stimuli (Operation-aware test or Sweep test). For the same number of strata, L, the same partition of the total population occurs whether the 33 or the 17 input stimuli are used; only the representative faults differ for some values of L, not affecting significantly the error probability results. The input set corresponding to the Operation-aware test (33 stimuli) leads to the same fault representativeness as the Sweep set (17 stimuli): for L > 2, all faults are well represented on both differential pairs (Figure 10). The stimuli used to calculate the error probabilities were obtained in the same interval used for fault stratification, from −340 to 340 mV, shifted one unit in order not to coincide with the ones used for the Sweep test set, so as to avoid biasing the results.
DC-DC Converter Blocks Fault Representativeness
Using the software tool referred to above to identify all the seven cell types already processed by the methodology in the DC-DC converter, and using the complete fault dictionary, two tables were obtained. Table 7 presents the evaluation of the representativeness process. This is done for different levels of accuracy (L_i) in the stratification process of all the cells in the various blocks, by calculating the corresponding ratio of well-represented faults (R_i). The three levels of accuracy (L_1 = L_max, L_2 = L_med and L_3 = L_min) were analyzed taking into account the error probabilities obtained for each cell, which obey the conditions shown in (9). Each of these levels of accuracy can be useful in a different test preparation stage.
Table 5, presented before, contains the fault compression rate calculated for every cell already processed by the methodology, using the different levels of accuracy shown in (9). This illustrates the trade-off between simulation effort and compression rate, for every cell, that is exploited in the different phases of the test preparation.
Table 8 contains the number of identified cells, the number of faults per type of element, and the fault coverage (for the entire block and only for the identified cells in the block). It also contains the estimated fault coverage and the cells' fault representation level (ratio of well-represented faults in the cells) for three different levels of accuracy L_i. Part of the peak detector block schematic is presented in Appendix A, showing two identified cell instances.
Taking the data in Table 8, the ratio of well-represented faults in the cells already identified in the entire DC-DC converter was calculated for the 3 different levels of accuracy. A simulation effort reduction is obtained as a consequence of the fault compression. Figure 11 represents the ratio of well-represented faults as a function of the simulation effort, for the 3 different levels of accuracy considered. For the lowest level of accuracy, representing 86% of well-represented faults in the entire circuit, we have a considerable simulation effort reduction, that is, an effort lower than 40%. For 97% of well-represented faults we still have a quite low simulation effort, 56%.
Structural Fault Coverage Improvement
Observing Table 6, it is possible to conclude that the structural fault coverage obtained for differential pair 2 (D.P. 2) is 100%, since all the detectable faults (in Strata 2 and 3) are detected by the production test used. This instance of the differential pair cell is part of the VFB Comparator found in the DC-DC converter, partially represented in Figure 9. The other comparator (PG Comparator) contains the D.P. 1 instance, for which the fault coverage is less than 100%, as 6 of the faults are not detected. This result shows that the test quality of the test set used may be improved. To increase the fault coverage, a stimulus and an additional Measure were designed to detect the undetected set of faults on D.P. 1. Looking at Figure 9, we observe that the output signal of the PG Comparator is connected to the Digital Control block input. This block has two output signals (swp, swn) that are connected to the Level Converters block, whose outputs (swpvinz, swnvin) are in turn connected to the Power Devices and Drivers block. This last block delivers power to the load. The load voltage (vout = vfb) is sensed and is used to control the DC-DC operation, acting on both comparators in a feedback loop. Another signal present at the Level Converters block output is pg, which corresponds to the PG Comparator output signal (cmppg) after passing through the Digital Control block and the Level Converters block. This signal is made observable as a circuit output for system integration purposes, signaling whether the DC-DC is delivering the requested power or not, and may be used to test the differential pair (D.P. 1). This is possible because the PG Comparator output drives the pg signal to logic level "1" when one of the faults in the second stratum of the inner differential pair is activated. Since this state (pg = "1") corresponds to the transference of the requested power, to test D.P. 1 it is sufficient to force the control to drive the pg signal to logic level "0", using an extra load to sink more current, and observe whether this signal is kept permanently at logic level "1", using an extra Measure. This improvement in the test set was implemented, and it was possible to achieve 100% fault coverage for both differential pairs, simulating only 4 faults instead of 28.
Conclusion
In this paper, a method for reducing the fault simulation effort by simulating only representative faults (RFs) is proposed for AMS circuits. The fault list compression technique includes analog cell identification, criteria for fault clustering in strata according to a user-defined number of strata, L, and RF identification. A trade-off between fault simulation effort and accuracy can be defined for each test preparation phase: a fast, initial fault simulation can be performed with lower accuracy, just aiming to highlight testability problems; later on, higher accuracy fault simulation can be used for the validation of the final test sequence. In order to support the selection of the appropriate number of RFs for each test preparation phase, a method is proposed for the evaluation of the probability of occurrence of n or more errors due to the RF-only simulation and the assumption that the represented faults behave accordingly. The method was illustrated on a differential pair cell, showing that using L = 3 (and simulating only L − 1 = 2 RFs) was enough to reduce to almost zero the probability of occurrence of more than n = 2 errors in 14 faults. The fault compression technique was further used in a commercial DC-DC converter that has two instances of this differential pair. Fault simulation results show that RF-only simulation does not lead to any error if L > 2, when the RFs are selected using an exhaustive test, going through all the combinations of the MOS regions of operation in the fault-free and each faulty circuit, or by applying equally spaced input stimuli in a narrow interval around the origin. The technique was then used in the main blocks of the DC-DC converter in order to show the quality of the representativeness process when applied to the seven types of cells identified. The impact of different levels of accuracy on the estimated fault coverage and representation level of each block was shown. Finally, a plot showing the variation of the ratio of well-represented faults as a function of the simulation effort of the complete DC-DC converter was presented. The definition of the optimum stimuli for RF selection is now under research, and more analog cells are being defined. Moreover, a preliminary analysis shows a limited impact of temperature, power supply voltage and process variations on the error probability, which supports the reliability of the methodology. The study of these dependencies is being carried out and will be reported in the future.
Figure 1. Fault stratification example: three RFs and seven represented faults in a universe of ten faults. Each faulty response, p_i, contains two values (s = 2).
Figure 2. Example where the representative fault and the represented fault output values are quantized to the nearest levels (3 and 1, respectively).
Table 1. Number of errors obtained on each bin by the corresponding stimuli (Stm), when k = 4 and s = 5.
Table 2. Bins where the number of errors caused by a stimulus is higher than or equal to n = 2.
Figure 3 .
For this cell, the exhaustive test, used for stratified fault grouping, can be obtained by exciting the inputs (V IP , V IN ) with stimuli that drive the differential pair either into each possible combination of transistor states (Operation-aware test) or by applying equally spaced test vectors (Sweep test), as stated before.The testbench used for the research of potentially representative faults in the differential pair uses a common mode voltage source (CM) of 1 V and two signal sources
Figure 5 .
Figure 5. Diff.pair cell: Probability of error obtained with the exhaustive test set (33 input stimuli, driving the transistors to all possible state combinations for an input voltage in the interval [−335, 335] mV).The number of errors, n, varies from 1 to 8.
Table 4 .
Representative and Represented Faults in the Differential Pair Cell for L = 3.variables are the currents at the output nodes (I OUTiP and I OUTiN ), for a total of 19 outputs.In this case the stratification step was performed using the k-means algorithm, as the number of output variables was quite large.
Figure 6 .
Figure 6.Diff.pair cell: Probability of error obtained with the exhaustive test set (87 input stimuli, driving the transistors to all possible state combinations for an input voltage in the interval [−2, 2] V).The number of errors, n, varies from 1 to 8.
Figure 7 .
Figure 7. Current source generator simplified schematic where four cells are identified: One BiaLink Cell, two cascodeBias Cells and one IbiasSet Cell with multiple outputs.
Figure 8 .
Figure 8. IbiasSet Cell: Error probability obtained with the exhaustive test set (20 input stimuli for the Sweep test) in a range of ±30% around the IBIAS nominal value and for 5 output variables.
Figure 9 .
Figure 9. DC-DC converter block diagram in a production test environment.
Table 5 .
Mixed-Signal Cells and Faults Identified in the DC-DC Converter, number of representative faults and compression rate for 3 different levels of accuracy (L i ).
Figure 10 .
Figure 10.DC-DC Converter: Ratio between the faults that are well represented and all the faults in the differential pair instances, for RFs obtained with 32 stimuli that correspond to the transistor state combinations, in the input range [−340, 340] mV.
Figure 11. DC-DC converter: ratio between the faults that are well represented and all the faults in the instances of the cells identified in the DC-DC converter.
Table 8. DC-DC converter main blocks and corresponding statistics: number of identified cells, number of faults per type of element and fault coverage (for the entire block and only for the identified cells on the block), as well as estimated fault coverage and cells representation level for the accuracy levels L_1, L_2 and L_3.
Equivalence of Quantum Metrics with a common domain
We characterize Lipschitz morphisms between quantum compact metric spaces as those *-morphisms which preserve the domain of certain noncommutative analogues of Lipschitz seminorms, namely lower semi-continuous Lip-norms. As a corollary, lower semi-continuous Lip-norms with a shared domain are in fact equivalent. We then note that when a family of lower semi-continuous Lip-norms are uniformly equivalent, then they give rise to totally bounded classes of quantum compact metric spaces, and we apply this observation to several examples of perturbations of quantum metric spaces. We also construct the noncommutative generalization of the Lipschitz distance between quantum compact metric spaces.
INTRODUCTION
Quantum metrics on C*-algebras, formally provided by generalized Lipschitz seminorms called Lip-norms [23,24], are the seeds for a new analytic framework which brings techniques from metric geometry into C*-algebra theory and provides a new tool set for problems from mathematical physics, such as finite dimensional approximations of quantum space-time [28,25,11,14,26,27,21], or perturbations of quantum metrics [18,16]. Quantum compact metric spaces form a natural category [18], whose morphisms are Lipschitz in an appropriate sense. In this paper, we prove that any *-morphism between two quantum compact metric spaces is actually Lipschitz if and only if it is compatible with the domains of the Lip-norms. In particular, *-automorphisms which preserve the domain of a Lip-norm must be bi-Lipschitz, and thus all Lip-norms with a common domain are actually equivalent. We then explore three related problems: we show that the topology of pointwise convergence on the group of Lipschitz automorphisms of a quantum compact metric space may be metrized using Lip-norms, and that our previous work on quantum perturbations naturally provides new examples of compact classes of quantum compact metric spaces for the quantum propinquity. We also construct the noncommutative generalization of the Lipschitz distance.
A compact quantum metric space is a generalization of a Lipschitz algebra, inspired by the work of Connes [3,4] and formalized by Rieffel [23,24]. Notation 1.1. The space of self-adjoint elements in a C*-algebra A is denoted by sa(A), while the state space of A is denoted by S(A). The unit of a unital C*-algebra A is denoted by 1_A. Last, we denote the norm on a normed vector space E by ‖·‖_E by default.
Last, the diameter of a metric space (E, d) is denoted by diam (E, d).
Convention 1.2.
We adopt a convenient convention when working with seminorms throughout this paper, and we use this convention across our work: if L is a seminorm defined on a subspace dom(L) of a vector space E, then we set L(x) = ∞ for all x ∉ dom(L). Thus, with our convention, dom(L) = {x ∈ E : L(x) < ∞}. The classical picture behind Definition (1.3) is provided by a pair (C(X), Lip) of the C*-algebra of a compact metric space (X, d) and the Lipschitz seminorm Lip associated to the distance function d. A generalization of quantum compact metric spaces to the quantum locally compact setting was proposed in [12,13].
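For reference, the standard Rieffel-style formulation behind Definition (1.3) can be sketched as follows (a sketch of the usual definition, not a verbatim restatement):

% A quantum compact metric space is a pair (A, L) of a unital C*-algebra A
% and a Lip-norm L, i.e. a seminorm densely defined on sa(A) with
% {a : L(a) = 0} = R·1_A, such that the Monge-Kantorovich metric
\[
  \mathsf{mk}_L(\mu, \nu) \;=\; \sup\bigl\{\, \lvert \mu(a) - \nu(a) \rvert \;:\; a \in \mathrm{sa}(A),\ L(a) \leq 1 \,\bigr\}
\]
% metrizes the weak* topology on the state space S(A).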
We may define a category whose objects are quantum compact metric spaces, and whose morphisms are a special type of *-morphisms between the underlying C*-algebras. There are at least two natural ideas. We may require that a Lipschitz morphism be a *-morphism which is also continuous with respect to the Lip-norms. Formally, if (A, L_A) and (B, L_B) are two quantum compact metric spaces, and ϕ : A → B is a *-morphism, this first approach to Lipschitz morphisms consists in requiring that there exists C > 0 such that L_B ∘ ϕ ≤ C·L_A. This relation imposes that ϕ must be unital or null. Indeed, L_B ∘ ϕ(1_A) ≤ C·L_A(1_A) = 0, so ϕ(1_A) ∈ R1_B; since ϕ is a *-morphism, this leaves us with ϕ(1_A) ∈ {0, 1_B}. In this paper, we will work with unital *-morphisms.
Alternatively, we may require that the dual map associated to a unital *-morphism be a Lipschitz map between the state spaces equipped with their respective Monge-Kantorovich metrics. Continuing with our notations, we would thus ask that there exists C > 0 such that for all µ, ν ∈ S(B), we have mk_{L_A}(µ ∘ ϕ, ν ∘ ϕ) ≤ C·mk_{L_B}(µ, ν). In general, these two notions of a Lipschitz morphism are not equivalent, owing to the fact that the Monge-Kantorovich metric does not allow the recovery of the Lip-norm from which it was defined. After all, many Lip-norms may give the same Monge-Kantorovich metric.
However, among all Lip-norms which provide a given Monge-Kantorovich metric, there is a particular one: the largest among all of them, which is characterized as being lower semi-continuous with respect to the norm of the underlying C*-algebra. In [24], the study of this problem led to the notion of a closed Lip-norm, though the context there was more general (the underlying space was not a C*-algebra but a more general object called an order-unit space, which may not be complete, leading to some important subtleties).
For our purpose, it is thus natural to work with lower semi-continuous Lip-norms. In this context, our two notions of Lipschitz morphisms coincide. So we summarize our notion by the following definition ([18]). Let (A, L_A) and (B, L_B) be two quantum compact metric spaces, with L_A and L_B lower semi-continuous with respect to the norms of, respectively, A and B. A unital *-morphism ϕ : A → B is k-Lipschitz for some k ≥ 0 when L_B ∘ ϕ ≤ k·L_A, or equivalently, when mk_{L_A}(µ ∘ ϕ, ν ∘ ϕ) ≤ k·mk_{L_B}(µ, ν) for all µ, ν ∈ S(B). It is easy to check that, indeed, the composition of Lipschitz morphisms is again Lipschitz, and the identity morphism is 1-Lipschitz, so we have indeed defined a category. It is also easy to check that a k-Lipschitz morphism between two classical compact metric spaces is indeed of the form f ↦ f ∘ g for some k-Lipschitz map g between the underlying metric spaces. In this paper, we investigate a third approach to Lipschitz morphisms. If dom(L_A) and dom(L_B) are the domains of L_A and L_B, then a *-morphism may satisfy ϕ(dom(L_A)) ⊆ dom(L_B). This appears to be a weaker notion but, as we shall see, it is again equivalent to the notion of a Lipschitz morphism. This reinforces that our notion of a category of quantum compact metric spaces is indeed appropriate.
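The classical picture can be made explicit; the following is a standard sketch (not taken verbatim from this paper) of why k-Lipschitz morphisms between classical spaces correspond to k-Lipschitz maps:

% A unital *-morphism phi : C(X) -> C(Y) is of the form phi(f) = f∘g for a
% continuous g : Y -> X; its dual map sends the Dirac state delta_y to
% delta_{g(y)}, and the Monge-Kantorovich metric restricted to Dirac states
% recovers the underlying distance, so:
\[
  \mathsf{mk}_{\mathrm{Lip}_{d_X}}\bigl(\delta_y \circ \varphi,\ \delta_{y'} \circ \varphi\bigr)
  = d_X\bigl(g(y), g(y')\bigr)
  \leq k\, d_Y(y, y')
  = k\, \mathsf{mk}_{\mathrm{Lip}_{d_Y}}\bigl(\delta_y, \delta_{y'}\bigr),
\]
% i.e. phi is k-Lipschitz exactly when g is k-Lipschitz from (Y, d_Y) to (X, d_X).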
Our paper then continues with an observation regarding sets of uniformly equivalent Lip-norms. We note that such sets are naturally compact for the quantum propinquity. The quantum Gromov-Hausdorff propinquity [20] is a special member of the family of Gromov-Hausdorff propinquities [17,15], which are all noncommutative analogues of the Gromov-Hausdorff distance [9,8] extending the topology of the latter to quantum compact metric spaces. The Gromov-Hausdorff propinquity provides a framework for the geometric study of classes of quantum compact metric spaces. We have established, for instance, the continuity of various natural families of C*-algebras such as quantum tori [14] or certain AF-algebras [1]. We constructed finite dimensional approximations for these spaces as well, answering some informal statements in mathematical physics. Another recent advance was the generalization of Gromov's theorem to our propinquity [19], providing us with an insight into the topological properties of various sets of compact quantum metric spaces.
In general, proving that a class of quantum compact metric spaces is indeed totally bounded for the propinquity may be subtle. For instance, [19] relies on finite dimensional approximations, which are themselves a challenge. Nonetheless, as seen in [1, Theorem 6.3], our generalized Gromov's theorem can be put to use. Another approach taken in [1] was the construction of continuous maps from some compact spaces onto classes of quantum compact metric spaces; this method also applies to quantum tori and their finite dimensional approximations [11,14] and we shall see that it applies to conformal deformations as well in this paper. In this paper, we take yet another road to establish the compactness of some interesting classes of quantum compact metric spaces, obtained as perturbations of given quantum metrics.
Prior to the introduction of noncommutative Gromov-Hausdorff distances [28,10,20,17,15], the idea of perturbations of a quantum metric seemed to largely rely on the informal idea that certain algebraic expressions are qualitatively close to some original metric. We recently formalized this idea by actually establishing bounds on how far, in the sense of the propinquity, certain particular perturbations actually are. Examples of such perturbations include conformal deformations [22] of quantum metrics arising from certain spectral triples [18], leading to the twisted spectral triples introduced by Connes and Moscovici [5]. We also brought curved quantum tori of Sitarz and Dąbrowski [6,7] into our program [16]. We shall see that a core ingredient of the constructions of these perturbations provides a uniform equivalence between Lip-norms, which in turn gives a compactness result.
We note that, besides the Gromov-Hausdorff distance, a standard extended metric between compact metric spaces is the Lipschitz distance. We provide in this paper a construction for the noncommutative version of the Lipschitz distance, which fits very well with the picture of Lipschitz morphisms presented in this paper.
The last section of this paper presents a natural metric on Lipschitz morphisms, built from quantum metrics.
EQUIVALENCE OF LIP-NORMS AND LIPSCHITZ MORPHISMS
Quantum metrics defined on the same domain are, in fact, equivalent, under a natural technical condition which may always be assumed at no cost to the underlying metric structure. This observation is the subject of the following theorem, and constitutes our main result. Convention 2.1. Let L be a seminorm defined on a dense subspace dom(L) of a normed vector space E. By Convention (1.2), we regard L as a [0, ∞]-valued function over E. Now, we say that the seminorm L is lower semi-continuous on E when {x ∈ dom(L) : L(x) ≤ 1} is closed in E. Note that this is stronger than requiring L to be lower semi-continuous as a function on dom(L). Theorem 2.2. Let (A, L) be a quantum compact metric space such that L is lower semi-continuous on sa(A). Let S be a seminorm on dom(L) such that: (1) S is lower semi-continuous on sa(A) with respect to ‖·‖_A, i.e. {a ∈ dom(S) : S(a) ≤ 1} is a closed subset of (A, ‖·‖_A); (2) S(1_A) = 0. Then there exists C > 0 such that for all a ∈ dom(L):
S(a) ≤ C·L(a).
Proof. Our proof proceeds in three steps. First, we prove that the domain of a lower semi-continuous seminorm can be made naturally into a Banach space. Then, we use the open mapping theorem to show that different lower semi-continuous seminorms defined on the same domain give rise to equivalent Banach norms with the construction in our first step. Last, we conclude our theorem.
Step 1. Let S be a seminorm defined on some dense subspace dom(S) of sa(A), such that {a ∈ dom(S) : S(a) ≤ 1} is closed in sa(A). Let ‖·‖_S = ‖·‖_A + S. We first check that dom(S) is a Banach space for the norm ‖·‖_S.
It is straightforward that ‖·‖_S is a norm on dom(S). Let (a_n)_{n∈N} be a Cauchy sequence in dom(S) for ‖·‖_S. Thus (a_n)_{n∈N} is a Cauchy sequence for ‖·‖_A, which is complete, so (a_n)_{n∈N} converges to some a ∈ sa(A) for ‖·‖_A.
We also observe that (a_n)_{n∈N} is Cauchy, hence bounded, for S; thus there exists M > 0 such that S(a_n) ≤ M for all n ∈ N. As {a ∈ dom(S) : S(a) ≤ M} is closed in A, we conclude that a ∈ dom(S) and S(a) ≤ M (alternatively, with our convention that S takes values in [0, ∞], our assumption on the unit ball of S is equivalent to requiring S to be lower semi-continuous on A as a [0, ∞]-valued function, and thus we obtain that S(a) ≤ lim inf_{n→∞} S(a_n) ≤ M; thus a ∈ dom(S)).
Let ε > 0. Since (a_n)_{n∈N} is Cauchy for S, there exists N ∈ N such that for all p, q ≥ N we have S(a_p − a_q) ≤ ε. Since S is lower semi-continuous with respect to ‖·‖_A, we thus have, for all p ≥ N: S(a − a_p) ≤ lim inf_{q→∞} S(a_q − a_p) ≤ ε. Thus lim_{p→∞} S(a − a_p) = 0. This proves that lim_{n→∞} ‖a − a_n‖_S = 0, as desired.
Step 2. If L and S are two lower semi-continuous seminorms on some common dense subspace dom(L) of sa(A), then the norms ‖·‖_L = ‖·‖_A + L and ‖·‖_S = ‖·‖_A + S are equivalent. Indeed, the norms ‖·‖_L and ‖·‖_S both make dom(L) into a Banach space, by Step 1.
We begin with a simple observation. Let (x_n)_{n∈N} be a sequence in dom(L) which converges for both ‖·‖_L and ‖·‖_S. Let x ∈ dom(L) be the limit of (x_n)_{n∈N} for ‖·‖_L and y ∈ dom(L) be the limit of (x_n)_{n∈N} for ‖·‖_S. We note that, in particular, (x_n)_{n∈N} converges to both x and y for ‖·‖_A. Thus x = y.
Let now ‖·‖_* = ‖·‖_L + ‖·‖_S. If (x_n)_{n∈N} is a Cauchy sequence for ‖·‖_*, then the sequence (x_n)_{n∈N} is Cauchy for both ‖·‖_L and ‖·‖_S and thus converges for both these norms, since they are complete; by our previous observation, (x_n)_{n∈N} has the same limit x ∈ dom(L) for both these norms. Hence, (x_n)_{n∈N} converges to x for ‖·‖_*, i.e. (dom(L), ‖·‖_*) is a Banach space. Now, since ‖·‖_L ≤ ‖·‖_*, the open mapping theorem [2] implies that there exists k > 0 such that ‖·‖_* ≤ k‖·‖_L. We then conclude easily that ‖·‖_S ≤ k‖·‖_L. Similarly, for some k′ > 0, we have ‖·‖_L ≤ k′‖·‖_S.
Step 3. We now conclude our theorem.
We thus are given a lower semi-continuous Lip-norm L on A, and some seminorm S on the domain of L, with S(1_A) = 0 and S lower semi-continuous with respect to ‖·‖_A.
Using our previous step, there exists k > 0 such that for all a ∈ dom(L), we have: ‖a‖_A + S(a) ≤ k(‖a‖_A + L(a)), i.e. S(a) ≤ (k − 1)‖a‖_A + kL(a). As L is a Lip-norm, there exists t ∈ R such that: ‖a − t1_A‖_A ≤ D·L(a), where D is the diameter of (S(A), mk_L). Of course, L(a − t1_A) = L(a) and S(a − t1_A) = S(a), since S(1_A) = L(1_A) = 0. Thus: S(a) = S(a − t1_A) ≤ (k − 1)‖a − t1_A‖_A + kL(a − t1_A) ≤ ((k − 1)D + k)L(a). This concludes our proof, with C = ((k − 1)D + k).
Theorem (2.2) has the following consequences. Corollary 2.3. Let (A, L_A) and (B, L_B) be two compact quantum metric spaces whose Lip-norms are lower semi-continuous on, respectively, sa(A) and sa(B). Let ϕ : A → B be a unital *-morphism such that ϕ(dom(L_A)) ⊆ dom(L_B). Then ϕ is a Lipschitz morphism. Proof. Let S = L_B ∘ ϕ. We note that S is a lower semi-continuous seminorm which takes finite values on dom(L_A), by assumption. Moreover, S(1_A) = 0. Thus our corollary follows from Theorem (2.2). Corollary 2.4. Let (A, L_A) and (B, L_B) be two quantum compact metric spaces whose Lip-norms are lower semi-continuous, respectively, on sa(A) and sa(B). A *-isomorphism ϕ : A → B with ϕ(dom(L_A)) = dom(L_B) is bi-Lipschitz.
Corollary 2.5. Let (A, L) be a quantum compact metric space where L is lower semi-continuous over sa(A). Let α be a *-automorphism of A. The following two assertions are equivalent: (1) α(dom(L)) = dom(L); (2) α and α^{-1} are Lipschitz. Proof. Assume (2) first. Let a ∈ dom(L). Then L(α(a)) < ∞, so α(a) ∈ dom(L), and thus α(dom(L)) ⊆ dom(L). The converse inclusion is proven similarly, using α^{-1}. Assume (1). Then (2) follows from Corollary (2.4).
Corollary 2.6. Let A be a unital C*-algebra and L_1, L_2 be two lower semi-continuous Lip-norms on A. The following assertions are equivalent: (1) L_1 and L_2 are equivalent; (2) dom(L_1) = dom(L_2). Proof. Apply Corollary (2.5) to the identity automorphism of A.
COMPACTNESS OF CLASSES OF PERTURBATIONS OF QUASI-LEIBNIZ QUANTUM COMPACT METRIC SPACES
To present the result in this section, we introduce a simple metric on Lip-norms over a fixed C*-algebra. We will adopt the following terminology, to keep our statements readable. Convention 3.1. If (A, L) is a quantum compact metric space, then we say that L is lower semi-continuous to mean that L is lower semi-continuous, as a [0, ∞]-valued function, over sa(A).
Definition 3.2. For any two lower semi-continuous Lip-norms L_1 and L_2 on a unital C*-algebra A, we let Haus°_A(L_1, L_2) be the Hausdorff distance, for ‖·‖_A, between the sets {a ∈ sa(A) : L_1(a) ≤ 1} and {a ∈ sa(A) : L_2(a) ≤ 1}.
We begin by observing that our distance Haus°_A is indeed finite. Let us start by recalling the following fundamental characterization of quantum compact metric spaces proved by Rieffel in [23], and akin to a noncommutative Arzelà-Ascoli theorem. Theorem 3.3 ([23,24]). Let A be a unital C*-algebra and L a seminorm defined on a dense subspace dom(L) of sa(A) and such that {a ∈ dom(L) : L(a) = 0} = R1_A. The following assertions are equivalent: (1) the Monge-Kantorovich metric mk_L metrizes the weak* topology on S(A); (2) for some state µ ∈ S(A), the set {a ∈ dom(L) : L(a) ≤ 1, µ(a) = 0} is totally bounded for ‖·‖_A; (3) for all states µ ∈ S(A), the set {a ∈ dom(L) : L(a) ≤ 1, µ(a) = 0} is totally bounded for ‖·‖_A. In particular, if L is lower semi-continuous on sa(A), then L is a Lip-norm if, and only if, any of the sets above are compact.
Lemma 3.4. Let A be a unital C*-algebra. For any two lower semi-continuous Lip-norms L_1 and L_2 on A, the distance Haus°_A(L_1, L_2) is finite. Proof. Fix a state ϕ ∈ S(A). By Theorem (3.3), the sets {a ∈ sa(A) : ϕ(a) = 0, L_j(a) ≤ 1} are compact for j = 1, 2. Thus the Hausdorff distance (for the norm of A) between these two sets is finite; let us denote it by d. Let a ∈ sa(A) with L_1(a) ≤ 1. Then a − ϕ(a)1_A lies in the first of these sets, so there exists b ∈ sa(A) with L_2(b) ≤ 1 and ϕ(b) = 0 such that ‖(a − ϕ(a)1_A) − b‖_A ≤ d. Since L_2(b + ϕ(a)1_A) = L_2(b) ≤ 1, the element b + ϕ(a)1_A lies in the unit ball of L_2, within distance d of a. As the argument is symmetric in L_1 and L_2, we have shown our lemma.
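Since Haus°_A is, at bottom, a Hausdorff distance between unit balls, a toy numeric computation can help build intuition. The sketch below (Python; everything in it is illustrative) computes the Hausdorff distance between two finite point sets standing in for samples of the unit balls of two equivalent seminorms on R²:

# Toy Hausdorff distance between finite point sets, as a stand-in for the
# distance between (samples of) two Lip-norm unit balls. Illustrative only.
import numpy as np

def hausdorff(X, Y):
    """Hausdorff distance between finite point sets X, Y (rows = points)."""
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# Samples of the "unit balls" of the seminorms L1(a) = |a| and L2(a) = 2|a|
# on R^2, whose balls are the disks of radius 1 and 1/2, respectively.
theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
print(hausdorff(circle, 0.5 * circle))  # ~0.5: the radii differ by 0.5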
Our goal is to establish a new sufficient condition for certain classes of quantum compact metric spaces to be totally bounded for the quantum propinquity. We refer to [20,17,15,19,18] for the definition of the quantum propinquity and some of its properties. We briefly recall from [20] the notion of a bridge, as it will be used in our next proof, and provide a characterization of the quantum propinquity.
Let A and B be two unital C*-algebras. A bridge (D, ω, π_A, π_B) from A to B is given by a unital C*-algebra D and two unital *-monomorphisms π_A : A → D and π_B : B → D, as well as an element ω ∈ D such that, for at least one state ϕ of D, we have ϕ(ωd) = ϕ(dω) = ϕ(d) for all d ∈ D. The set of all such states of D, denoted by S_1(D|ω), is the 1-level set of ω.
Now if (A, L_A) and (B, L_B) are two quantum compact metric spaces, then we can associate a number, called the length, to a bridge γ = (D, ω, π_A, π_B) from A to B. We first define the reach of γ as the Hausdorff distance, for ‖·‖_D, between {π_A(a)ω : a ∈ sa(A), L_A(a) ≤ 1} and {ωπ_B(b) : b ∈ sa(B), L_B(b) ≤ 1}. We then define the height of γ as the maximum of the Hausdorff distance, for mk_{L_A}, between S(A) and {ϕ ∘ π_A : ϕ ∈ S_1(D|ω)}, and the Hausdorff distance, for mk_{L_B}, between S(B) and {ϕ ∘ π_B : ϕ ∈ S_1(D|ω)}. The length λ(γ|L_A, L_B) of the bridge γ is the maximum of its reach and its height. The quantum propinquity is constructed from bridges, although it requires a few technical steps. In particular, the quantum propinquity is defined on classes of F-quasi-Leibniz quantum compact metric spaces for an admissible function F, i.e. a function F : [0, ∞)^4 → [0, ∞) which is increasing for the product order on [0, ∞)^4 and such that F(x, y, l_x, l_y) ≥ x·l_y + y·l_x for all x, y, l_x, l_y ≥ 0. Given such a function, an F-quasi-Leibniz quantum compact metric space (A, L) is a quantum compact metric space such that for all a, b ∈ sa(A) we have: max{ L((ab + ba)/2), L((ab − ba)/(2i)) } ≤ F(‖a‖_A, ‖b‖_A, L(a), L(b)).
The following result characterizes the quantum propinquity.
Theorem-Definition 3.5 ([20]). Let L be the class of all F-quasi-Leibniz quantum compact metric spaces for some admissible function F. There exists a class function Λ_F from L × L to [0, ∞) ⊆ R such that: for any (A, L_A), (B, L_B) ∈ L, we have Λ_F((A, L_A), (B, L_B)) = 0 if, and only if, (A, L_A) and (B, L_B) are fully quantum isometric, and for all (A, L_A), (B, L_B) ∈ L and for any bridge γ from A to B, we have: Λ_F((A, L_A), (B, L_B)) ≤ λ(γ|L_A, L_B). We connect our new distance between Lip-norms on a fixed C*-algebra and the propinquity easily. Proposition 3.6. Let A be a unital C*-algebra. If L_1 and L_2 are two F-quasi-Leibniz lower semi-continuous Lip-norms on A for some admissible function F, then: Λ_F((A, L_1), (A, L_2)) ≤ Haus°_A(L_1, L_2). Proof. We simply use the bridge (A, 1_A, id, id), where id is the identity on A.
The purpose of this section is to establish the fact that uniformly equivalent Lip-norms, as defined in the hypothesis of the next proposition, provide totally bounded classes of quantum compact metric spaces for the metric Haus°_A, and thus for the quantum propinquity, whenever applicable.
Proposition 3.7. Let (A, L) be a quantum compact metric space where L is lower semi-continuous. If Ξ is a set of lower semi-continuous Lip-norms on A for which there exists C > 0 such that, for all Lip ∈ Ξ, we have L ≤ C·Lip, then Ξ is totally bounded for Haus°_A (and therefore, when applicable, for the quantum propinquity as well).
Proof. We fix µ ∈ S(A). By assumption, for all Lip ∈ Ξ, we have: {a ∈ sa(A) : Lip(a) ≤ 1, µ(a) = 0} ⊆ {a ∈ sa(A) : L(a) ≤ C, µ(a) = 0}. Now, since L is a lower semi-continuous Lip-norm, the set K = {a ∈ sa(A) : L(a) ≤ C, µ(a) = 0} is compact by Theorem (3.3). Thus, by Blaschke's theorem, the hyperspace of the closed subsets of K is compact for the Hausdorff distance Haus_{‖·‖_A}. We conclude our proof using Lemma (3.4) (and Proposition (3.6) for the quantum propinquity conclusion).
The main application of Proposition (3.7) in this paper concerns certain perturbations we have established in [19,18]. These perturbations were constructed using [18, Lemma 3.79], which encapsulates a recurrent computation when estimating the quantum propinquity between two quasi-Leibniz quantum compact metric spaces. When using this lemma, one will typically obtain uniformly equivalent families of Lip-norms, thus the following examples will be typical.
Example ([18]). Consider a spectral triple on A whose Dirac operator D defines, via a ↦ ‖[D, π(a)]‖_{B(H)}, a lower semi-continuous Lip-norm L with the property that for each a ∈ dom(L) there exists t ∈ R with ‖a − t1_A‖_A ≤ r·L(a); for a bounded self-adjoint operator ω on H, let L_ω(a) = ‖[D + ω, π(a)]‖_{B(H)}. The pair (A, L_ω) is a Leibniz quantum compact metric space for all bounded self-adjoint ω on H such that ‖ω‖_{B(H)} < 1/(2r), and, moreover, the map ω ↦ (A, L_ω) is continuous for the quantum Gromov-Hausdorff propinquity Λ.
For all a ∈ dom(L) and for all t ∈ R, we have |L(a) − L_ω(a)| ≤ ‖[ω, π(a − t1_A)]‖_{B(H)} ≤ 2‖ω‖_{B(H)}‖a − t1_A‖_A, since L(a + t1_A) = L(a) and L_ω(a + t1_A) = L_ω(a). Since L is a Lip-norm, there exists t ∈ R such that ‖a − t1_A‖_A ≤ r·L(a). Thus we conclude: for all a ∈ dom(L), |L(a) − L_ω(a)| ≤ 2r‖ω‖_{B(H)}·L(a).
A deeper example of a nontrivial class of totally bounded quantum metrics obtained from perturbations and [19, Lemma 3.79] is given by curved quantum tori, where, once more, the space of parameters is not totally bounded. These examples come from mathematical physics and were treated in [16] from the metric perspective.
Example ([16]). Let A be a unital C*-algebra, and let α be a strongly continuous ergodic action of a compact Lie group G on A. Let n be the dimension of G.
We endow the dual g′ of the Lie algebra g of G with an inner product ⟨·, ·⟩, and we denote by C the Clifford algebra of (g′, ⟨·, ·⟩). Let c be a faithful nondegenerate representation of C on some Hilbert space H_C.
We fix some orthonormal basis {e_1, . . . , e_n} of g′, and we let X_1, . . . , X_n ∈ g be the dual basis. For each j ∈ {1, . . . , n}, we define the derivation ∂_j of A via α and X_j, by ∂_j(a) = lim_{t→0} (α_{exp(tX_j)}(a) − a)/t, wherever defined. Let A_1 be the common domain of ∂_1, . . . , ∂_n, which is a dense *-subalgebra of A.
Let τ be the unique α-invariant tracial state of A. Let ρ be the representation of A obtained from the Gel'fand-Naimark-Segal construction applied to τ and let L 2 (A, τ) be the corresponding Hilbert space. As A 1 is dense in L 2 (A, τ), the operator ∂ j defines an unbounded densely defined operator on L 2 (A, τ) for all j ∈ {1, . . . , n}.
Let H = L²(A, τ) ⊗ H_C, where ⊗ is the standard tensor product of Hilbert spaces. A representation of A on H is defined from ρ, and a Dirac-type operator is built from the derivations ∂_1, . . . , ∂_n, where for all j, k ∈ {1, . . . , n} the coefficients h_jk are elements in the commutant of A in L²(A, τ), and where the operator H = [h_jk] is invertible as an operator on H′ = ⊕_{j=1}^n L²(A, τ). We denote the identity over H′ by 1_{H′}.
We define the Lip-norm L_H from these data, so that for all a ∈ A_1 the quantity L_H(a) is computed from the derivations ∂_1(a), . . . , ∂_n(a) weighted by the coefficients h_jk. If h′_jk lies in the commutant of ρ(A) for all j, k ∈ {1, . . . , n}, and H′ = [h′_jk] is invertible as an operator on ⊕_{j=1}^n L²(A, τ), then, defining L_{H′} similarly to L_H, we conclude that L_H and L_{H′} are equivalent, with constants controlled by the norms of these operators and of their inverses. We thus may apply Proposition (3.7) when we restrict the parameter space to a set H_C of such coefficient matrices H with ‖H‖ and ‖H^{-1}‖ bounded by C, for any C > 0. Of course this set itself is not totally bounded, yet the space {L_H : H ∈ H_C} is totally bounded for both Haus°_A and Λ. We conclude this section with another example of a compact class of quantum compact metric spaces obtained from perturbations. We include this example as another compact class of quantum metric spaces, derived from [19, Lemma 3.79], though in this case it does not require Proposition (3.7). Example 3.12 (Conformal Perturbations). We proved in [18] that small conformal perturbations of quantum metrics are indeed close for the quantum propinquity. We recall the result here to fix our notations.
Theorem (3.13) asserts that, for h in the class GLip(A) of appropriate invertible Lipschitz elements, the conformally perturbed pair is a quasi-Leibniz quantum compact metric space and, moreover, if (h_n)_{n∈N} is a sequence in GLip(A) which converges to h ∈ GLip(A) in the appropriate sense, then the perturbed spaces converge for the quantum propinquity. Let K_1, K_2, K_3 > 0 and define Ω_{K_1,K_2,K_3} = {h ∈ GLip(A) : ‖h‖_A ≤ K_1, ‖h^{-1}‖_A ≤ K_2, L(h) ≤ K_3}. Then Ω_{K_1,K_2,K_3} is compact for ‖·‖_A, since L is a lower semi-continuous Lip-norm. Theorem (3.13) shows that conformal perturbations are continuous for the quantum propinquity and thus, using our notations, {L_ω : ω ∈ Ω_{K_1,K_2,K_3}} is compact.
LIPSCHITZ DISTANCE BETWEEN COMPACT QUANTUM METRIC SPACES
The Lipschitz distance between compact metric spaces [9] provides a distance between homeomorphic compact metric spaces based upon bi-Lipschitz isomorphisms, and thus it is natural to define it in this paper in light of our study of Lipschitz morphisms.
This section provides the noncommutative generalization of the Lipschitz metric, which in essence is a metric on Lip-norms with common domains. The quantum Lipschitz distance is complete and dominates the quantum propinquity when working on appropriate classes of quasi-Leibniz quantum compact metric spaces. The Lipschitz distance also provides natural examples of totally bounded classes for the quantum propinquity, and thus compact classes for the dual propinquity [17].
For a Lipschitz morphism ϕ from (A, L_A) to (B, L_B), we set dil(ϕ) = sup{ L_B ∘ ϕ(a) : a ∈ sa(A), L_A(a) ≤ 1 }, with the understanding that this quantity may be infinite. We refer to this quantity as the dilation factor, or just dilation, of the given Lipschitz morphism.
The Lipschitz distance between (A, L_A) and (B, L_B) is then LipD((A, L_A), (B, L_B)) = inf{ ln(max{dil(ϕ), dil(ϕ^{-1})}) : ϕ : A → B a *-isomorphism with ϕ(dom(L_A)) = dom(L_B) }, with the convention that inf ∅ = ∞.
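For comparison, one common formulation of the classical Lipschitz distance of Gromov, which the definition above generalizes, has the same logarithmic shape (recalled here as background, not as a statement from this paper):

% Classical Lipschitz distance between compact metric spaces (X, d_X), (Y, d_Y);
% here dil(f) = sup_{x != x'} d_Y(f(x), f(x')) / d_X(x, x'), the infimum runs
% over bi-Lipschitz homeomorphisms f : X -> Y, and inf of the empty set is infinity.
\[
  d_{\mathrm{Lip}}(X, Y) \;=\; \inf_{f : X \to Y} \, \ln\Bigl( \max\bigl\{ \mathrm{dil}(f),\ \mathrm{dil}(f^{-1}) \bigr\} \Bigr).
\]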
A natural class for the study of the Lipschitz distance is given by the following requirements, which include all quasi-Leibniz quantum compact metric spaces: we simply require lower semi-continuity of the quantum metrics, as it fits the general framework of this paper, and we require that the domain of the Lip-norm be a Jordan-Lie algebra, to retain a minimum amount of information on the multiplicative structure of the C*-algebra; we call such spaces Jordan-Lie quantum compact metric spaces. The Lipschitz distance between Jordan-Lie quantum compact metric spaces is actually achieved, as established in the following lemma. This observation will prove useful in establishing that the Lipschitz distance is indeed a distance up to quantum isometry.
Proof. Let C ≥ 0 denote the infimum of max{dil(ϕ), dil(ϕ^{-1})} over all *-isomorphisms ϕ : A → B mapping dom(L_A) onto dom(L_B), and suppose this infimum is finite, so that LipD((A, L_A), (B, L_B)) = ln C. There exists a sequence of such *-isomorphisms (ϕ_n)_{n∈N} with L_B ∘ ϕ_n ≤ 2C·L_A and L_A ∘ ϕ_n^{-1} ≤ 2C·L_B for all n ∈ N, and with max{dil(ϕ_n), dil(ϕ_n^{-1})} converging to C. Let a ∈ sa(A) with L_A(a) < ∞. Since ‖ϕ_n(a)‖_B = ‖a‖_A and L_B ∘ ϕ_n(a) ≤ 2C·L_A(a) for all n ∈ N, we conclude that (ϕ_n(a))_{n∈N} admits a convergent subsequence, since L_B is a Lip-norm. Let ϕ_∞(a) be its limit; as L_B is lower semi-continuous, we conclude that L_B(ϕ_∞(a)) ≤ 2C·L_A(a).
Since {a ∈ sa(A) : L_A(a) ≤ n, ‖a‖_A ≤ n} is compact for the norm ‖·‖_A, hence separable, for all n ∈ N, so is the union {a ∈ sa(A) : L_A(a) < ∞}. Let F be a countable dense subset of {a ∈ sa(A) : L_A(a) < ∞}. A diagonal argument proves that there exists a subsequence (ϕ_{f(n)})_{n∈N} such that for all a ∈ F, the sequence (ϕ_{f(n)}(a))_{n∈N} converges to ϕ_∞(a) (see [20, Theorem 5.13]).
Moreover, if a ∈ sa(A) with L_A(a) < ∞, then for all ε > 0 there exists a_ε ∈ F with ‖a − a_ε‖_A < ε/3. Let N ∈ N be such that for all p, q ≥ N, we have ‖ϕ_{f(p)}(a_ε) − ϕ_{f(q)}(a_ε)‖_B ≤ ε/3. Thus for all p, q ≥ N, we have: ‖ϕ_{f(p)}(a) − ϕ_{f(q)}(a)‖_B ≤ ‖ϕ_{f(p)}(a − a_ε)‖_B + ‖ϕ_{f(p)}(a_ε) − ϕ_{f(q)}(a_ε)‖_B + ‖ϕ_{f(q)}(a_ε − a)‖_B ≤ ε. Thus (ϕ_{f(n)}(a))_{n∈N} converges as well, since it is a Cauchy sequence in B, which is complete. Its limit is denoted once more by ϕ_∞(a). Note that since for all n ∈ N and for all a ∈ dom(L_A) we have ‖ϕ_n(a)‖_B = ‖a‖_A, we also have ‖ϕ_∞(a)‖_B = ‖a‖_A. We thus have defined an isometric map ϕ_∞ : dom(L_A) → sa(B). Moreover, as a pointwise limit of Jordan-Lie morphisms, ϕ_∞ is also a Jordan-Lie morphism on dom(L_A). Now L_B is lower semi-continuous and, for all a ∈ dom(L_A), L_B(ϕ_∞(a)) ≤ lim inf_{n→∞} L_B(ϕ_{f(n)}(a)) ≤ lim inf_{n→∞} dil(ϕ_{f(n)})·L_A(a) = C·L_A(a). Thus dil(ϕ_∞) ≤ C. Thus ϕ_∞ extends by continuity to a Jordan-Lie morphism from sa(A) to sa(B). Our argument is now concluded in the same manner as [20, Claim 5.18, Theorem 5.13], and proves that ϕ_∞ extends to a unital *-morphism from A to B with dil(ϕ_∞) ≤ C.
The same method may be applied to construct some subsequence of (ϕ_n^{-1})_{n∈N} converging pointwise on dom(L_B) to some *-morphism ψ_∞ on B with L_A ∘ ψ_∞ ≤ C·L_B. Up to extracting further subsequences, we shall henceforth assume that both (ϕ_{f(n)})_{n∈N} and (ϕ_{f(n)}^{-1})_{n∈N} converge pointwise to, respectively, ϕ_∞ on dom(L_A) and ψ_∞ on dom(L_B). It is then immediate to check that ϕ_∞ ∘ ψ_∞ is the identity on dom(L_B) and ψ_∞ ∘ ϕ_∞ is the identity on dom(L_A). Then, by construction, ψ_∞ ∘ ϕ_∞ is the identity on A and ϕ_∞ ∘ ψ_∞ is the identity on B. Thus ϕ_∞ is a *-isomorphism from A to B.
In particular, we also obtain that L_A ∘ ϕ_∞^{-1} ≤ C·L_B, and thus dil(ϕ_∞^{-1}) ≤ C. As we may not have both dil(ϕ_∞) < C and dil(ϕ_∞^{-1}) < C, since C is the infimum of the dilations of such *-isomorphisms, the lemma is proven.
We now establish that the Lipschitz distance is indeed a distance up to quantum isometry, and that it dominates the quantum propinquity. If there exists an isometric *-isomorphism between two compact quantum metric spaces, then their Lipschitz distance is null. Only the converse of this observation requires our assumption that the domains of Lip-norms be Jordan-Lie algebras. We simply apply Lemma (4.6).
We now prove that closed balls for the Lipschitz distance are compact classes for the dual propinquity. Theorem 4.8. Let F be an admissible function, let (A, L_A) be an F-quasi-Leibniz quantum compact metric space with L_A lower semi-continuous, let R > 0, and let B be the class of all F-quasi-Leibniz quantum compact metric spaces within Lipschitz distance at most R of (A, L_A). Then B is totally bounded for the quantum propinquity; therefore, the closure of B for the dual propinquity is compact.
Proof. For all (B, L) within Lipschitz distance R of (A, L_A), there exists by definition a *-isomorphism ϕ_B : A → B which maps the domain of L_A onto the domain of L. We set L′ = L ∘ ϕ_B and note that L′ is a lower semi-continuous Lip-norm with the same domain as L_A. Moreover, by definition of the Lipschitz distance, we have L_A ≤ exp(R)·L′. Last, (A, L′) and (B, L) are isometrically isomorphic; thus their propinquity is zero, and so is their Lipschitz distance.
Thus, by Proposition (3.7), the closed ball B of center (A, L_A) and radius R is totally bounded for Haus°_A. By Proposition (3.6), the subclass of F-quasi-Leibniz quantum compact metric spaces in the closed ball B is totally bounded for the quantum propinquity. The rest of the theorem follows from the completeness of the dual propinquity [17] and the dominance of the quantum propinquity over the dual propinquity.
We conclude this section by proving that the Lipschitz distance is indeed complete.
Theorem 4.9. The distance LipD is complete on the class of Jordan-Lie quantum compact metric spaces.
Proof. As in the proof of Theorem (4.8), we can assume that we are given a sequence (L_n)_{n∈N} of lower semi-continuous Lip-norms on some unital C*-algebra A, such that ((A, L_n))_{n∈N} is a Cauchy sequence for LipD.
By Theorem (4.7), the sequence of unit balls L_n = {a ∈ sa(A) : L_n(a) ≤ 1} is Cauchy for the Hausdorff distance Haus_{‖·‖_A}. As the latter metric is complete, (L_n)_{n∈N} converges to some closed set L_∞. It is easy to check that L_∞ is a convex closed subset of sa(A). Let L be its Minkowski gauge functional.
Let ε ∈ (0, 1). There exists N ∈ N such that for all p, q ≥ N we have LipD((A, L_p), (A, L_q)) ≤ ln(1 + ε); in other words, L_q ⊆ (1 + ε)L_p and L_p ⊆ (1/(1 − ε))L_q for all p, q ≥ N. Now (1 + ε)L_n is closed for all n ∈ N; thus the hyperspace of its closed subsets is complete, and thus closed, for the Hausdorff distance Haus_{‖·‖_A}. Consequently, L_∞ ⊆ (1 + ε)L_n and (1/(1 + ε))L_n ⊆ L_∞ for all n ≥ N. Thus dom(L) = dom(L_n) for n ≥ N, and thus dom(L) is a dense Jordan-Lie subalgebra of sa(A). Moreover, we conclude that L_n ≤ (1 + ε)L. In particular, by [23, Lemma 1.10], (A, L) is a quantum compact metric space.
We also conclude from these inclusions that LipD((A, L), (A, L_n)) ≤ ln(1 + ε) for n ≥ N. Our theorem is now proven.
A METRIC FOR POINTWISE CONVERGENCE ON THE AUTOMORPHISM GROUP OF A QUANTUM COMPACT METRIC SPACE
We now introduce a new metric on the automorphism group of a quantum compact metric space. Our motivation is given by Theorem (2.2), and in particular Corollary (2.5), as well as our new understanding of compactness for Lip-norms with a shared domain. For α ∈ Aut(A), we set mkℓ_L(α) = sup{ ‖a − α(a)‖_A : a ∈ dom(L), L(a) ≤ 1 }. If id_A is the identity of A, then mkℓ_L(id_A) = 0. Now let α ∈ Aut(A). If mkℓ_L(α) = 0, then ‖a − α(a)‖_A = 0 for all a ∈ dom(L) with L(a) ≤ 1. Thus ‖a − α(a)‖_A = 0 for all a ∈ dom(L), and by continuity of α and density of dom(L), we conclude that ‖a − α(a)‖_A = 0 for all a ∈ sa(A). Thus, by linearity, α(a) = a for all a ∈ A.
Moreover, since α is an isometry of (A, ‖·‖_A), we have, for all a ∈ A: ‖a − α^{-1}(a)‖_A = ‖α(a − α^{-1}(a))‖_A = ‖α(a) − a‖_A, from which it follows that mkℓ_L(α) = mkℓ_L(α^{-1}).
Let now β ∈ Aut(A). For all a ∈ A: ‖a − α ∘ β(a)‖_A ≤ ‖a − β(a)‖_A + ‖β(a) − α(β(a))‖_A, from which we conclude: mkℓ_L(α ∘ β) ≤ mkℓ_L(β) + mkℓ_L(α). Now let (α_j)_{j∈J} be a net in Aut(A) converging pointwise to α ∈ Aut(A), let ε > 0, and let F be a finite ε/3-dense subset of a totally bounded set B ⊆ {a ∈ dom(L) : L(a) ≤ 1}, as provided by Theorem (3.3). For each a ∈ F, there exists j_a ∈ J such that if j ≻ j_a, we have ‖α_j(a) − α(a)‖_A ≤ ε/3. As J is directed and F is finite, there exists j′ ∈ J with j′ ≻ j_a for all a ∈ F. Thus, if a ∈ B and j ≻ j′, then, choosing a′ ∈ F with ‖a − a′‖_A ≤ ε/3: ‖α_j(a) − α(a)‖_A ≤ ‖α_j(a) − α_j(a′)‖_A + ‖α_j(a′) − α(a′)‖_A + ‖α(a′) − α(a)‖_A ≤ ε, where we used that all automorphisms are isometries. Thus, for all j ≻ j′, we have mkℓ_L(α_j^{-1} ∘ α) ≤ ε. Conversely, let (α_j)_{j∈J} be a net in Aut(A) converging for mkℓ_L to α ∈ Aut(A). Let a ∈ sa(A) and ε > 0.
Aboriginal Community-Centered Injury Surveillance: A Community-Based Participatory Process Evaluation
While injuries are a leading health concern for Aboriginal populations, injury rates and types vary substantially across bands. The uniqueness of Aboriginal communities highlights the importance of collecting community-level injury surveillance data to assist with identifying local injury patterns, setting priorities for action and evaluating programs. Secwepemc First Nations communities in British Columbia, Canada, implemented the Injury Surveillance Project using the Aboriginal Community-Centered Injury Surveillance System. This paper presents findings from a community-based participatory process evaluation of the Injury Surveillance Project. Qualitative data collection methods were informed by OCAP (Ownership, Control, Access, and Possession) principles and included focus groups, interviews and document review. Results focused on lessons learned through the planning, implementation and management of the Injury Surveillance Project, identifying lessons related to project leadership and staff, training, project funding, initial project outcomes, and community readiness. Key findings included the central importance of a community-based and paced approach guided by OCAP principles, the key role of leadership and project champions, and the strongly collaborative relationships between the project communities. Findings may assist with successful implementation of community-based health surveillance in other settings and with other health issues, and illustrate another path to self-determination for Aboriginal communities. The evaluation methods represent an example of a collaborative community-driven approach guided by OCAP principles necessary for work with Aboriginal communities.
The death rate due to motor vehicle crashes was 1.9 per 10,000 standard population amongst Status Indians, compared to 0.7 for other residents. The suicide rate was 1.7 for Status Indians, compared to 0.7 for other residents. Potential mechanisms for the increased injury rates may include both disparities in accessibility to prevention mechanisms, programs and health care facilities, as well as incompatibility between the cultural characteristics and needs of the Aboriginal populations and the policies, programs and services that are provided (Beals et al. 2009; Hernandez et al. 2009).
Any health promotion activities within Aboriginal communities must respect the integrity of the community and traditional sources of knowledge (Cochran et al. 2008), as well as ensure that ownership of the work and information is preserved within the community. In Canada, this requires collaborative community-driven approaches underpinned by OCAP principles (First Nations Centre 2007). OCAP principles, which refer to Ownership, Control, Access, and Possession, were developed by First Nations and sanctioned by the First Nations Information Governance Committee (First Nations Centre 2007). They advocate for control by Aboriginal populations of data and health information concerning Aboriginal populations, and support self-determination of Aboriginal people in relation to research processes. The collaborative OCAP-based approach works to empower Aboriginal people by providing control and self-determination over the research process (First Nations Centre 2007).
There are a number of evaluation methods that are based on principles of collaboration, participation and empowerment of disempowered groups, and are well suited to OCAP principles and the needs and priorities identified by First Nations in Canada. In a seminal review of participatory evaluation techniques, Cousins and Whitmore (1998) describe three key elements of participatory evaluation, including control of the evaluation process, stakeholder selection for participation in the evaluation, and depth of participation of stakeholders. Each of these elements can be considered with respect to the level of control over decisions regarding the evaluation process and conduct that the different parties have. This can range from control resting entirely with the researcher, to resting entirely with the stakeholders. Participatory approaches seek to maximize control for stakeholders. Fetterman (1994, 2001) describes empowerment evaluation as designed to use the evaluation process and findings towards improvement and self-determination. Later writings outline ten principles for empowerment evaluation: 1) improvement of individuals, organizations and communities; 2) community ownership, including power over and responsibility for the evaluation; 3) inclusion of and collaboration with stakeholders; 4) democratic participation of stakeholders, ensuring the process is transparent and demystified; 5) social justice in use of the evaluation to facilitate attainment of resources and improve inequalities; 6) community knowledge is reflected in the evaluation tools; 7) scientific evidence is valued and appreciated; 8) capacity building occurs in organizations' abilities to use data and sustain evaluation efforts; 9) organizational learning and change results; and 10) accountability for the evaluation is shared by all involved with the evaluation (Wandersman et al. 2005). Adherence to these ten principles is sought to maximize empowerment and change.
The principles that these community-based participatory approaches espouse formed the basis of the evaluation described in this paper. We present a Canadian evaluative case study of a community-centered injury surveillance system in a First Nations community.
Importance of Community-Based Injury Surveillance
The disproportionate burden that injuries represent for Aboriginal populations has already been described above. However, available national and provincial statistics do little to illuminate the diversity in injury patterns or risk factors in regional settings (Bell et al. 2011). In BC, overall injury prevalence and rates of specific types of injuries within First Nations populations vary by geographic region. Even a crude parsing of the province into four geographic units reveals a great deal of variability, with rates varying from 11.9 to 22.1 deaths per 10,000 standard population (British Columbia Vital Statistics Agency 2000). Rates of injury due to specific causes, such as suicides, have also been found to vary widely across communities (Chandler and Lalonde 1998). A recent report indicated no youth suicides between 1992 and 2006 for many BC bands, with others reporting rates exceeding 700 per 100,000 population (British Columbia Provincial Health Officer 2009).
The vast variability in injury rates highlights the importance of obtaining community-level data to facilitate communities' understanding of their injury profile and hazards. Not only do First Nations bands vary substantially in the risk and protective factors present in their communities, but they also may reside in different geographical contexts, such as mountainous or aquatic terrains, that can impact the rates and types of injuries. As a result of these varying contexts, collecting and analyzing community-level data are key to understanding injury patterns and risk factors to assist in efforts to reduce injury rates (Bell et al. 2011;Mullany et al. 2009).
Few published studies describe injury data collection initiatives in Aboriginal communities internationally, and none to our knowledge in Canada. Two studies in the United States and Australia (Helitzer et al. 2009;Shannon et al. 2001) report on the design and evaluation of general injury prevention community development projects in Aboriginal communities. These projects collected baseline injury data in the community, and designed and evaluated intervention activities. The US study showed significant improvements in attitudes towards the targeted safety concerns, as well as improvements in knowledge, skills and confidence among stakeholders and increased community capacity to conduct and evaluate safety projects (Helitzer et al. 2009). The Australian study found a statistically significant drop in average monthly injuries at a community medical clinic from 96 to 65 injuries following the introduction of community-generated interventions (Shannon et al. 2001).
One study addressing the establishment of community-based injury surveillance systems in Aboriginal communities describes a surveillance system developed to gather data from both community and clinical settings on suicidal behavior among Apache youth in Arizona (Mullany et al. 2009). Surveillance activities assisted with the identification of unique risk factors and guided targeted programs, such as the adaptation of an evidence-based emergency department intervention for suicidal youth. Evidence from these international studies suggests the potential for positive results from community-based injury surveillance in Aboriginal communities.
A group of community health leaders (Health Directors) representing the Secwepemc Nation in BC recognized the burden that injuries represented for their communities and the absence of community-level data allowing for local injury prevention planning. They identified the need to establish a community-based injury surveillance system. The Secwepemc Nation includes 17 bands diverse in size and geography, with populations ranging from less than 100 residents to over 1000 residents. The communities also vary considerably with respect to distances to town centers and access to health service facilities. In 2005, several Secwepemc First Nations communities officially launched the Injury Surveillance Project (ISP) and initiated data collection using the Aboriginal Community-Centered Injury Surveillance System (ACCISS).
Aboriginal Community-Centered Injury Surveillance System (ACCISS)
ACCISS was designed to be implemented and owned by Aboriginal communities for the purposes of self-determining action on injury (Auer and Andersson 2001). Development of ACCISS was sponsored by the Canadian government, and focused on capacity building and facilitating community administration of the four key injury surveillance activities: collecting, analyzing, interpreting and using data. Basic tenets associated with the system are that it is community-based and community-paced, to ensure that injury surveillance activities are undertaken at a community-determined pace and based on individual community needs, readiness and capacity. ACCISS was developed to function in the diverse circumstances of Aboriginal communities, such that it can be implemented in communities with varied population sizes, geographic characteristics, resources, and health service delivery models. Prior to commencing data collection, each community undertakes an injury mapping process to identify how injury cases can be systematically identified, where key sources of injury data exist and methods by which injury data can best be collected, ensuring that data collection methods are tailored to reflect the unique structures and services of individual communities (Brussoni et al. 2009). For example, a remote community may consider collecting data in the local health center, school, child care facility and worksite, whereas a community located within or near an urban center may also consider partnering with local hospitals and tertiary care facilities to obtain data on community members.
Three tools support the injury data management component of ACCISS: an injury surveillance form, a data entry database, and a data analysis program. The electronic component uses Epi Info™ (Centers for Disease Control and Prevention 2008) as its software platform to drive data entry and analysis functions. The ACCISS 2.0 user manual (Health Canada & First Nations and Inuit Health Branch 2007) provides an additional resource.
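For readers unfamiliar with what such tooling involves, the sketch below mimics the generic collect-enter-summarize flow in Python; the record fields and categories are invented for illustration and do not reproduce the actual ACCISS form or the Epi Info schema.

# Illustrative sketch of a community injury surveillance record flow.
# Field names and categories are made up; the ACCISS form differs.
from collections import Counter
from dataclasses import dataclass

@dataclass
class InjuryRecord:
    date: str          # ISO date of the injury event
    setting: str       # e.g. "health center", "school", "worksite"
    cause: str         # e.g. "fall", "motor vehicle", "burn"
    severity: str      # e.g. "minor", "moderate", "severe"

def summarize(records: list[InjuryRecord]) -> dict[str, Counter]:
    """Produce the simple counts a community summary report might need."""
    return {
        "by_cause": Counter(r.cause for r in records),
        "by_setting": Counter(r.setting for r in records),
        "by_severity": Counter(r.severity for r in records),
    }

records = [
    InjuryRecord("2007-03-02", "health center", "fall", "minor"),
    InjuryRecord("2007-03-15", "school", "fall", "moderate"),
    InjuryRecord("2007-04-01", "worksite", "burn", "minor"),
]
for table, counts in summarize(records).items():
    print(table, dict(counts))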
Injury Surveillance Project (ISP)
Secwepemc Nation Health Directors sought to build capacity to manage heath data and to improve the health, safety and well-being of the people, thus providing impetus for undertaking ISP. The project was implemented in three phases: 1) pre-implementation activities setting the groundwork for undertaking surveillance; 2) implementation activities included data collection, analysis and reporting followed by using and sharing data for injury prevention activities; and 3) overall project maintenance and monitoring activities.
Health Directors assumed leadership and coordination of injury surveillance in their respective communities. Figure 1 illustrates the organizational diagram of personnel and their roles in the ISP and the evaluation described herein. At the project level, the Health Director's Project Team was formed to aid with project coordination and administration, and consisted of a subset of Health Directors from participating communities. Communities also had access to a Project Support Team that provided technical assistance and training.
ISP personnel within each community included data collectors, and data entry and report generators. Data collectors completed the injury surveillance form and included people already responsible for identifying or treating injured community members, as well as a wide range of staff and supervisors. Data entry and report generators coordinated and supported the data collection network within their community; entered, cleaned, and analyzed data; and produced summary reports.
In 2007, the Health Directors Project Team initiated an evaluation of the implementation of ISP focusing on the pre-implementation and implementation phases, which included the establishment of data collection, analysis and reporting for the first 3 years of the project. They established a collaboration with the evaluation team, which consisted of external evaluators (authors), and internal evaluators, the latter having been part of the Project Support Team that worked with the project communities in establishing and maintaining ACCISS. The evaluation participants were the Health Directors, and community staff involved in implementation and ongoing management of ACCISS, as well as key stakeholders identified by the Health Directors Project Team.
This paper presents the methods and findings from the process evaluation of ISP. The evaluation was community driven, guided by the Health Directors and underpinned by OCAP principles. Our aim in this paper is to identify lessons learned regarding implementation of an injury surveillance system that may benefit other communities considering implementation of health surveillance. Figure 2 illustrates the collaborative evaluation process undertaken for this evaluation. Initial steps involved collaborative planning by the Health Directors Project Team and the evaluation team, through which evaluation objectives were developed that guided data collection and analysis strategies. The objectives focused on in this paper include those addressing: identification and description of project implementation facilitating factors and challenges; project learnings and promising practices; and project outcomes achieved to date. At the time of the evaluation, ten project communities were actively participating in injury surveillance, had collected a minimum of 22 months of injury data and were producing community-specific injury reports. The evaluation was sanctioned as a project activity by Band Council Resolutions and community-based protocols. Ethics approval was obtained from the university research ethics board.
Fig. 1 Organizational diagram of ISP personnel and their roles: Health Directors Project Team (coordinate and administer ISP); Project Support Team (provide technical assistance and training for establishment of ACCISS); data collectors (collect surveillance data); data entry and report generators (input and manage surveillance data, produce summary reports); internal evaluators (assist with ISP evaluation).
Consistent with Cousins and Whitmore's (1998) conceptualization, a collaborative participatory community-based approach was used to develop and carry out evaluation activities. This approach used evaluation questions driven by the community, methods and measures that were community-sensitive and reporting that was community-focused (Mathison 2005). This approach was critical for ensuring that analyses and findings were culturally sensitive and accurate (Davis and Reid 1999; Mullany et al. 2009). Also guiding the evaluation process were OCAP principles (First Nations Centre 2007), which were relevant for ensuring appropriate and sensitive evaluation methodology, and ownership of the evaluation process, protocols and products resting with the Secwepemc Nation. The Health Directors Project Team decided that a qualitative approach for data collection and analysis was the most culturally aligned methodology for maximizing cultural sensitivity of data collection strategies, for understanding these issues from the perspective of community members and other stakeholders, and for developing an in-depth understanding of the contextual issues impacting implementation of the surveillance system. Primary responsibility for data collection was assumed by the external evaluators, while data interpretation and reporting of findings was undertaken by all team members.

Fig. 2 Collaborative evaluation process (October 2007–November 2008), including tasks, data collection, analysis procedure and timelines. Planning of the evaluation took place in October 2007; analyzing and synthesizing evaluation data took place April–June 2008, in four steps. Step 1: open coding (77 codes), consensus categorization (12 categories), development of preliminary themes. Step 2: review of initial interpretations with internal evaluators, revision of themes and sub-themes, development of a conceptual framework. Step 3: presentation of findings to the Health Directors Project Team, refinement of findings and interpretations, development of a draft report for review. Step 4: June–November 2008.
Evaluation Process and Activities
As illustrated in Fig. 2, the evaluation planning process occurred through a series of face-to-face meetings between evaluators and the Health Directors Project Team. This planning process included defining the focus and scope of the evaluation, refining evaluation objectives, and developing evaluation protocols and data collection instruments. A logic model was developed to outline the overall project, delineate the focus of the evaluation and define evaluation objectives.
Data Collection
Data collection was undertaken in two locations within the Secwepemc Nation to facilitate the participation of all project communities and accommodate natural northern and southern geographical groupings. Data were collected via multiple sources including document review, focus group discussions, and individual interviews. The methods and focus of data collection activities are described in Table 1.
Document Review
The document review aided in identifying key ISP stakeholders; increasing understanding of operations and processes; gathering and synthesizing historical context; and verifying information about project timelines, activities and outcomes. Materials examined included annual reports, contracts between bands and government agencies, e-mail correspondence, ACCISS user manual, project presentation slides, meeting agendas, and other relevant documents.
Focus Groups Five focus groups were conducted: one with the Health Director's Project Team, and two each with data collectors and data entry/report generators. In situations where participants filled multiple roles (e.g. data collector and data enterer), they participated in more than one focus group. Each focus group also included three members of the evaluation team-a facilitator, a note taker and a flip chart recorder. All project communities were represented at the focus groups, which included a total of 32 participants. All focus groups covered topics of capacity building, training received, and lessons learned. In addition, the Health Directors Project Team Focus Group included discussion of: community readiness to take on ISP; issues related to the injury surveillance tools, activities and processes; sustainability; and the use of injury data. The Data Collectors Focus Groups also explored: participants' perspectives on the process of collecting injury data; surveillance tools; and data collection activities and processes. The Data Entry/Report Generators Focus Group discussed use of the injury surveillance tools and associated tasks; project readiness; and the use of injury data.
Individual Interviews Ten in-person and telephone interviews with key stakeholders were conducted to provide in-depth information from a range of perspectives. The selection of interview respondents was informed by discussion with the Health Directors Project Team, who identified individuals able to provide perspectives on topics relevant to the evaluation objectives. This included individuals within the communities, the Project Support Team, and regional and federal government representatives. The topics by respondent type were: community respondents — value and utility of the project and surveillance system; federal, provincial and regional government representatives (n=4) — interest of the organization in the surveillance system, working with Aboriginal communities, government-related challenges, influence on government priorities; non-participating community (n=1) — community factors affecting readiness; Project Support Team (n=2) — implementation logistics, shifting roles, value and utility (participants with overlapping roles attended all applicable focus groups). All interview respondents were asked core questions related to community readiness, project challenges and facilitators, project outcomes and sustainability. Additional interview questions were developed for the different project-related roles that interviewees held, to gain insight into their unique perspectives.
Consistent with ensuring that the methodology was culturally acceptable to the project communities, handwritten notes (rather than audio recordings) were taken at all interviews and focus groups. Each data collection event included an interviewer/facilitator and at least one other note taker. Notes were transcribed and then imported into NVivo 7™ software for analysis.

Data Analysis

Figure 2 shows the four steps involved in analysis of the evaluation data. Analysis activities were guided by methods outlined by Miles and Huberman (1994) and involved the external evaluators, internal evaluators, and Health Directors Project Team. The analysis process was structured to support consensual analysis, whereby evaluation team members and Health Directors shared in data analysis and interpretation (Hill et al. 2005). This served to minimize bias, improve validity and support OCAP principles.
Open coding resulted in the identification of 77 codes (e.g., developing leadership skills; increasing awareness of injury issues in the community). Through a consensus process, this list was condensed to 12 categories (e.g., capacity building outcomes, use of data for prevention activities). Data fractured during initial coding were reassembled via development of preliminary themes to provide coherence and illuminate relationships (e.g., leadership and champions, culture of prevention). Themes were refined initially via discussion with internal evaluators and subsequently via consultation with the Health Directors Project Team. This process resulted in four main areas of focus that were outlined in the final evaluation report, with up to six themes for each (total themes=18). This paper describes a selection of these findings with broader relevance to other communities and contexts.
Results
We identified lessons learned regarding planning, implementation, management and early outcomes of ISP across five main thematic areas: 1) Project leadership and staff; 2) training; 3) project funding; 4) initial project outcomes; and 5) community readiness for implementation.
Project Leadership and Staff Lessons Learned
Strong leadership for ISP and community leaders' support held central importance throughout the project: from deciding to undertake ISP; to identifying resources for implementation; through to implementing and administering project activities; and working to ensure sustainability.
Evaluation participants identified that project champions, who often were community Health Directors, were essential. Champions were seen as contributing through recognizing the need for local injury surveillance and advocating for ISP to government organizations, to staff working in the field and to general community members. Furthermore, they provided education to staff and potential data collectors about the purpose and importance of the project, particularly since staff were often concerned about how ISP might add to existing workloads. Continuity in Health Director leadership, particularly in early stages of planning and implementation, facilitated assuming the role of champion and overseeing project implementation activities.
Champions also played a key role in advocating for ISP among community leaders. Formal support of community leadership through Band Council Resolutions or Board Motions was critical to initiation of ISP. This support was obtained in project communities despite potential challenges in convincing leaders as to the purpose and benefit of collecting injury data that may not be available for use for a year or more.
Health Directors representing the project communities developed collaborative, solution-oriented approaches that contributed to their ability to successfully negotiate challenges that arose throughout the course of the project. Unexpectedly, working as a team created "positive peer pressure" to continue; maintained a mutual project path; and ensured a stable course of action in moving the project forward. As ISP progressed, a smaller group of Health Directors established the Project Management Team, which held primary responsibility for project administration. Several benefits seen to be associated with having this team included facilitating ongoing decision-making, fostering project stability and reducing workload for any one Health Director. Access to external expertise via the Project Support Team was also seen as a crucial component for project implementation through provision of ongoing methods expertise, training, mentoring and support.
Project staff attitudes played a role in project implementation, particularly since staff needed to manage surveillance tasks in addition to existing workloads. Staff expressed concerns around competing workload priorities and commented on how it was sometimes difficult to incorporate data collection as part of routine activity. Participants noted that ideally more than one staff member in each community should be trained in injury surveillance so as to minimize disruptions from staffing changes. Despite the challenges they identified, project staff expressed enthusiasm and recognized the value of ISP. Project staff members were interested in the capacity building associated with developing injury surveillance skill sets that also had the potential to transfer to other areas of work, and were prepared to undertake the extra work and activities involved.
The community-based and community-paced nature of ISP enabled individual communities to drive implementation of injury surveillance activities based on their community-specific needs, schedules and capacities, rather than conforming to external deadlines. Health Directors strongly emphasized this as an important factor influencing their decision to become involved, since it allowed them to consider their community's readiness to undertake surveillance.
Training Lessons Learned
Training for project staff was delivered using three main formats: 1) group training sessions focusing on injury surveillance and prevention theory, as well as hands-on surveillance skills training; 2) on-site training relating to community specific issues and individual training needs; and 3) ad hoc training of data collectors emphasising the correct completion of injury surveillance forms. The first two formats were provided by the Project Support Team and modelled on adult learning approaches to accommodate differing abilities, learning styles and life experiences. The third training format was provided on an as-needed basis by project staff to new data collectors.
Evaluation participants reported that the diverse educational backgrounds and skill levels of project staff posed the greatest challenge to formal group training sessions. However, the mixed skill level was also seen as an opportunity for more experienced staff to mentor those with less experience. The ad hoc training provided to data collectors by project staff used varied informal approaches (e.g., staff meetings). This training served to meet immediate needs and circumstances; however, some evaluation participants raised concerns regarding the less consistent delivery and lower levels of detail provided in the ad hoc training approach.
Project Funding Lessons Learned
Evaluation participants representing government organizations expressed their recognition of the substantial burden that injuries represent for Aboriginal populations, highlighting the lack of correspondence between official government priorities for funding and the main sources of impact on health in Aboriginal communities. Furthermore, the lack of government priority and resources for injury prevention meant that obtaining resources and funding for implementation of injury surveillance, and for action on injury priorities identified through data collection, represented an ongoing challenge. However, evaluation participants considered the Health Directors' determination to implement the project and to develop innovative solutions for identifying project funding and community resources to be strong influences on project success and sustainability.

Initial Project Outcomes Lessons Learned

Project staff reported extensive and often unexpected capacity building across a variety of areas including: 1) the ability to establish a surveillance system for other health issues; 2) expanded understanding of all health data; 3) development and enhancement of skills in leadership, project management, and communication; and 4) development of OCAP-based management policies that could be applied to the general management of community-based health data. They perceived that their skill development had a significant impact across their work, and resulted in ongoing commitment to and championing of ISP as a worthwhile activity to have undertaken and to continue involvement with into the future.
There were early indications that community-specific injury data were already leading to injury prevention efforts in several communities. For example, one community that identified falls occurring among Elders in home bathrooms as an issue implemented a bath mat distribution program. Another community established an ice shoe loaner program to address falls resulting from icy outdoor surfaces. Evaluation participants noted that some of their injury prevention efforts already appeared to be reducing injuries.
Community Readiness Lessons Learned
Based on the findings outlined in the sections above, the evaluation team worked with the participating Secwepemc First Nations communities' Health Directors to identify factors of community readiness for injury surveillance that were seen to be relevant for communities contemplating implementation of surveillance. The Health Directors highlighted three factors as particularly important for project implementation success and longer-term sustainability:
1. Awareness and knowledge that injuries are a problem, as a basic precursor to promoting the preventability of injuries and the central function that surveillance plays in injury prevention efforts.
2. A shared vision and values amongst the project team regarding project implementation, to keep the project focused and cohesive in its capacity building and problem solving approaches.
3. Leadership stability within the project team, to ensure continued championing of the project and a stable course for project activities.
Discussion
The disproportionate burden that injuries represent in Aboriginal populations makes injury prevention a priority. Addressing this health issue has numerous challenges, including the impact of colonialism that resulted in the loss of Aboriginal culture and tradition, the geographic location that may limit access to services, and the lack of local injury data to assist with injury prevention planning (Auer and Andersson 2001). While it is important to recognize the risk factors, negative conditions and challenges associated with health issues in Aboriginal populations, many Aboriginal communities have worked to overcome impacts associated with colonialism. Highlighting their successes is important for illustrating another path to self-determination, as a tool for knowledge transfer and as inspiration for other communities and health practitioners. The Secwepemc First Nations participating communities' ISP is one such example of an innovative collaborative initiative with the potential to provide new insights to other communities and practitioners working on injury issues and to policy makers with interest in this area. The lessons learned from this project can serve to assist with successful implementation of injury and other types of health surveillance in community settings. Below are highlighted methodological and community-related factors that emerged as key findings in the process evaluation of ISP that may be applicable to other settings and health issues.
Importance of Community-Based Participatory Approaches to Injury Surveillance in Aboriginal Communities
This evaluation clearly demonstrated how the implementation of injury surveillance in the Secwepemc Nation benefited from use of a community-based and community-paced approach sensitive to the needs and challenges specific to communities involved. A key aspect of this approach was the use of ACCISS, which was designed as a tool to be adapted to community circumstances and provide data owned and held at the community level.
The OCAP principles that guided surveillance data collection, analysis and management fostered a sense of community control over the information and helped mitigate issues of distrust. The Secwepemc Nation experience using OCAP principles as a central guiding framework helped to support the communities' self-determination and highlights their potential to assist with implementing injury prevention data collection efforts in a culturally relevant and sensitive way.
Project and Community-Related Factors Impacting Injury Surveillance
Through this process evaluation, it was found that project leadership and staff contributed in important ways to the ability of the ten Secwepemc communities to persevere in implementing injury surveillance despite a lack of dedicated funding. These lessons included the importance of champions at several project levels, stable leadership, collaboration within and between community structures, and training responsive to community needs.
While published research on health promotion or injury prevention in Aboriginal communities is limited, what is available in the literature supports the findings of this process evaluation. For example, Helitzer et al. (2009) noted the importance of perseverance in maintaining stakeholders' interest and enthusiasm for mobilizing community action in an agricultural safety program among members of the Navajo Nation. Broader research on health promotion in community settings also highlights the role of perseverance, in addition to the need to coordinate efforts in mobilizing community action (Butterfoss 2006). Finally, the importance of positive relationships among key stakeholders is seen as a central attribute of sustainable innovations (Johnson et al. 2004). Similar to the previous research, a significant component of our findings related to the collaborative relationship between Health Directors and project staff across communities and the role they played as ongoing champions for the project, persevering despite competing pressures.
The knowledge gained through this evaluation regarding community-related factors that emerged as important may assist Aboriginal and non-Aboriginal communities to assess their own readiness to undertake a similar initiative. Secwepemc Nation Health Directors recognized potentially detrimental effects to communities when efforts to implement injury surveillance did not succeed due to implementation challenges. These concerns are supported by the research literature that suggests that implementing a strategy in a community that is not ready for it can lead to negative results ranging from project delays to failures (Nilsen 2004). Negative impacts can be mitigated through the development of improved understanding of factors that signal a community's likelihood for success. It is our hope that the learnings from this project and the factors for community readiness identified above will assist communities with determining their own preparedness to establish injury surveillance. These factors may also be applicable to other health promotion issues, such as prevention of infectious diseases. Further research is needed on these and other factors of community readiness for health promotion activities to determine their value in predicting community success for project implementation.
Capacity Building and Injury Prevention Planning
This evaluation focussed on the process of planning and implementation. However, at the time of the evaluation several promising project outcomes were identified, such as capacity building in the community and initial efforts to use local data for planning prevention strategies. The wide-ranging capacity building that participants reported, enhancing their ability to undertake surveillance as well as a multitude of other skills, may provide additional incentive for communities considering undertaking similar projects. The close connection between surveillance activities and prevention planning efforts in each community is a major strength, particularly since the lack of such connection has received criticism in the literature: surveillance activities are often removed from the population of interest, with little linkage between the data and preventive activities (Auer and Andersson 2001; Johnston 2009; Pless 2008).
Evaluation Approach
A main strength of this evaluation lies in the incorporation of a community-based participatory approach reflecting OCAP principles. This collaborative approach for the design, data collection and interpretation of findings ensured input from communities at all stages of the process and added to the credibility, relevance and cultural sensitivity of the results for project communities. It is consistent with Cousins and Whitmore's (1998) definition of participatory evaluation in that primary control for decisions regarding the evaluation process, stakeholder involvement, and the depth of participation of stakeholders rested with the Health Directors Project Team. Likewise, the evaluation adhered to many of the principles of empowerment evaluation outlined by Wandersman et al. (2005), such as community ownership, inclusion, democratic participation, community knowledge, evidence-based strategies, and accountability. Other principles may emerge with additional time. For example, the results of the evaluation provide support for the value of ACCISS and justification for allocation of resources to community-based injury surveillance (social justice principle); however, it is too soon to tell whether it will influence key decision-makers to identify resources for this activity.
The rigor of the evaluation was enhanced through methods that included data and investigator triangulation. In this evaluation, we used multiple data sources including interview, focus group and document review data. Investigator triangulation was ensured through a five-member evaluation team supporting interdisciplinary, internal and external perspectives. In addition, Health Directors provided guidance regarding the cultural sensitivity and appropriateness of methods, as well as interpretation of findings. Ongoing consultation with project communities supported the consensual evaluation process.
Study Limitations
Focus groups conducted with the data entry/report generators were smaller than what is considered an optimal number (Krueger and Casey 2009); however, they were consistent with the community context within which this evaluation was conducted, which includes small community sizes with commensurate levels of staffing. All individuals involved in these roles participated in the focus groups, and the discussions were extensive. The Secwepemc Nation Health Director Project Team requested that the recording of evaluation data from focus groups and interviews rely on written notes and flip charts, rather than audio recording. While this method did not allow for transcribing of exact wording, accuracy was maximized by including at least two note-takers at each data collection session.
Conclusion
Injuries continue to pose a major burden to the health of Aboriginal people living in Canada. This process evaluation utilized a collaborative, participatory, community-based approach to better understand the challenges and facilitators associated with implementation of an injury surveillance system in a group of First Nations communities in Canada that are taking steps to address the injury burden among community members. This evaluation outlines factors that may inform communities considering similar activities in relation to readiness to undertake injury surveillance. It also supports the importance of ensuring that policy and programming efforts within Aboriginal communities are community-based and consistent with OCAP principles.
Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
|
2014-10-01T00:00:00.000Z
|
2011-12-03T00:00:00.000
|
{
"year": 2011,
"sha1": "577ccbb0e9a87042c7f5eb3c8ec582bf94272302",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11121-011-0258-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "577ccbb0e9a87042c7f5eb3c8ec582bf94272302",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
235214340
|
pes2o/s2orc
|
v3-fos-license
|
Isolation of Cell-Free miRNA from Biological Fluids: Influencing Factors and Methods
A vast wealth of recent research has attempted to use microRNA (miRNA) found in biological fluids in clinical research and medicine. One of the reasons behind this trend is the apparently high stability of cell-free miRNA, conferred by their small size and packaging in supramolecular complexes. However, researchers in both basic and clinical settings often face the problem of selecting adequate methods to extract miRNA preparations of appropriate quality for use in specific downstream analysis pipelines. This review outlines the variety of different methods of miRNA isolation from biofluids and examines the key determinants of their efficiency, including, but not limited to, the structural properties of miRNA and the factors defining their stability in the extracellular environment.
Introduction
The biology of miRNA and their transport from cells into the extracellular space, including biological fluids, have been actively studied over the last two decades. The high stability and potential clinical value of cell-free miRNA (cf-miRNA) suggest them as promising diagnostic and prognostic biomarkers for a plethora of diseases, including cancers of different origin [1][2][3].
Accurate evaluation of cf-miRNA profiles includes purification, quantification and intelligent data analysis (Figure 1).
The main features and efficacy of isolation techniques are largely determined by the biological properties of miRNAs, their interactions with other biomolecules and their packaging. In turn, the purity and quality of the retrieved cf-miRNA affect the accuracy, reproducibility and reliability of their quantification. miRNAs are small non-coding RNA molecules (19-22 nt) which can differ vastly in GC content and in base and backbone modifications (Table 1).
Small amounts of miRNA are detected in the extracellular space and in biological fluids (see Figure 1 in [4]). Cf-miRNA stability in blood and other biofluids can be attributed to the protection from RNases conferred by interaction with biomolecules, as well as packaging in membrane-coated or membrane-free particles (Table 2).
These complexes strongly affect isolation efficacy, requiring the liberation of miRNA from such structures to ensure effective cf-miRNA isolation and to prevent co-isolation of polymerase reaction inhibitors.
The key to the successful study of cf-miRNAs by high-throughput methods or precision techniques in research or clinical environments is the quantitative isolation of miRNAs from samples of biological fluids, independent of their primary structure, their modifications and the content of the biological fluids. Meanwhile, the methodological aspects defining the efficacy and reproducibility of cf-miRNA purification and sample preparation often attract little attention; yet in many real-life cases the diagnostically relevant margin of difference is rather narrow and, thus, accurate and reproducible pre-analytical approaches can significantly improve comparative analysis of miRNA expression. In this review, we provide an analysis of current methods of cf-miRNA purification from biological fluids and examine factors that affect their efficacy, including intrinsic features of miRNAs, packaging and common contaminants of miRNA preparations.
Overview of cf-miRNA Properties and Factors Influencing Their Extraction from Biofluids
Depending on the cell type, a single cell can contain up to 120,000 mature miRNA molecules in total [52]. This molecular population is heterogeneous, with each miRNA comprising (sometimes multiple) miRNA isoforms (isomiRs) that differ in the sequence of their 5′- or 3′-ends [53][54][55]. Additionally, immature miRNA species may be present in the sample and should be excluded from quantification or analysis. Like other RNA species, miRNAs can carry a repertoire of base and backbone modifications, including methylation, uridinylation, adenylation, adenosine-to-inosine editing by RNA-dependent adenosine deaminase (ADAR) or inclusion of pseudouridines (Table 1) [5][6][7][11][56][57][58][59][60]. These modifications can additionally affect the half-life of specific miRNAs in biofluids by giving them increased resistance to exonucleases and higher affinity to miRNA-binding biomolecules. For example, methylated miR-21-5p is more resistant to digestion by the 3′→5′ exoribonuclease polyribonucleotide nucleotidyltransferase 1 (PNPT1) and has higher affinity to Argonaute-2 (AGO2), which may contribute to its higher stability and stronger inhibition of programmed cell death protein 4 (PDCD4) translation, respectively [12]. Since cf-miRNAs originate from the cellular pool, one can suppose that they share the same structural features as cellular miRNA.
The lifetime of free RNA in biological fluids is very short (15 s in blood, according to [61]). This is due to the intrinsic lability of the RNA structure, especially in the presence of bivalent metal cations [62,63], and due to the high levels of ribonuclease and phosphodiesterase activity found in most biofluids [64][65][66][67][68][69]. Sequences with lower GC content and stable secondary duplex structures appear to be less stable and are therefore at risk of being lost during the extraction process [70]. Despite this, endogenous RNA was shown to survive in the bloodstream significantly longer, from several minutes to several hours or even days [71]. Cell-free miRNAs are relatively stable in blood, urine, cerebrospinal fluid, bronchoalveolar lavage, lymph, saliva, milk, tears and other biofluids, resisting degradation at room temperature for up to 4 days, and can withstand boiling, multiple freeze-thaw cycles, and high or low pH [72][73][74]. The observed stability is provided by complexing with proteins, lipoproteins and supramolecular complexes, and by packaging in membrane-coated vesicles and membrane-free particles (Table 2).
Particle-free miRNAs are frequently found in complexes with proteins: miRNAs such as miR-16 and miR-92a are mainly (up to 95%) co-precipitated with the protein fraction [44]. In the cell, the main partners of miRNA are the Argonaute proteins (AGO1-4 in humans). It is therefore not surprising that cf-miRNAs in supernatants of MCF-7 cells, blood [43], urine [75] and pericardial fluid [76] were found in strong complexes with AGO2 (Kd ≈ 20-80 nM) [77][78][79]. In qRT-PCR profiling of 375 miRNAs in size-exclusion chromatography fractions of human plasma, more than 67% of the assayed miRNAs were associated with fractions containing AGO2 protein, and a corresponding portion of plasma miRNAs could be recovered by AGO2 immunoprecipitation from plasma [47]. In addition to AGO2, miRNA in blood can be in a complex with AGO1 [45].
In blood, cf-miRNAs were also found in small (30-40 kDa) complexes containing nucleophosmin (NPM1) [46] that pass dialysis membranes, although this interaction, found in vitro, is yet to be confirmed in vivo [43,49]. In other biofluids, other interaction partners could be more prevalent; for example, in urine the Tamm-Horsfall protein (THP; uromodulin) was shown to possibly bind miRNAs [80]. Since the number of miRNA molecules in a cell dramatically exceeds the number of AGO proteins (by about 14 times), a significant part of mature miRNAs can be bound within the RISC [81][82][83] and by other proteins with classic RNA-binding motifs, like HuR, AUF1, etc. [79], involved in the transport and functioning of miRNA [84,85].
High-density and, to a lesser extent, low-density lipoproteins (HDL and LDL, respectively) are another type of confirmed miRNA transporter in blood, carrying distinct populations of miRNA [38][39][40]. HDL and LDL are 5 to 1000 nm supramolecular complexes (nano- or microparticles) composed of lipoproteins and an assorted collection of lipids [86,87]. Despite the high concentrations of HDL and LDL, it is estimated that they contain no more than 10% of the cf-miRNA detectable in blood plasma [39]. A substantial part of cf-miRNA in blood, urine and other biological fluids is packed in membrane-coated extracellular vesicles (EVs), which are secreted by normal and cancer cells [88][89][90][91][92]. EV is a collective term describing 30-1000 nm particles enclosed by a lipid bilayer membrane, including exosomes, microvesicles and apoptotic bodies, which differ in size, structure, surface markers and molecular composition, including distinct miRNA subsets (Table 2) [15,20,21,93]. On average, one milliliter of blood or urine contains 10^8-10^12 or 3-8 × 10^9 exosomes, respectively [94,95]. Some reports suggest that exosomes can be the main miRNA transporters in blood and saliva [31]. Exosomes derived from 4 mL of blood serum typically yield approximately 2-10 ng of RNA; exosomes derived from 10 mL of urine yield approximately 2-4 ng of RNA, while whole blood serum and urine contain about 10-fold more RNA [94]. Thus, a substantial fraction of cf-miRNA must be associated with proteins, complexes and vesicles other than exosomes. Stoichiometric analysis has shown that the miRNA content of exosomes is not as high as previously thought, with no more than one copy of any single miRNA per exosome on average [96]. It should be noted that most of the miR-16 and miR-223 packed in exosomes was found to be in complexes with AGO2, adding more fuel to the debate on the dominant form of miRNA circulation in blood [97].
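A back-of-envelope sketch of this stoichiometric argument, using the yields quoted above: the exosome concentration, the fraction of exosomal RNA that is miRNA and the average miRNA length are illustrative assumptions, not values taken from the cited studies.

```python
# Back-of-envelope estimate of miRNA copies per exosome, illustrating the
# stoichiometric argument of [96]. Inputs marked "assumed" are illustrative.

AVOGADRO = 6.022e23          # molecules per mole

serum_volume_ml = 4.0        # serum volume from which exosomes were derived
rna_yield_ng = 10.0          # upper end of the 2-10 ng exosomal RNA yield
exosomes_per_ml = 1e9        # assumed, within the reported 1e8-1e12 range
mirna_fraction = 0.1         # assumed fraction of exosomal RNA that is miRNA
mirna_length_nt = 22         # typical mature miRNA length
nt_molar_mass = 330.0        # g/mol per ribonucleotide (approximate)

# Mass of a single miRNA molecule in nanograms
mirna_mass_ng = mirna_length_nt * nt_molar_mass / AVOGADRO * 1e9

total_mirna_molecules = rna_yield_ng * mirna_fraction / mirna_mass_ng
total_exosomes = exosomes_per_ml * serum_volume_ml

copies_per_exosome = total_mirna_molecules / total_exosomes
print(f"Total miRNA copies per exosome: {copies_per_exosome:.1f}")
# With hundreds of distinct miRNA species sharing this budget, any single
# species averages well below one copy per exosome.
```

Even with generous assumptions, this simple budget yields only a few tens of miRNA molecules per exosome in total, consistent with the conclusion that any individual miRNA species is present at far below one copy per vesicle.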
The molecular pattern of apoptotic bodies is not as characteristic as that of exosomes [16] and can include a variety of proteins, fragments of genomic DNA, and most types of RNA, including mature and immature miRNA [15,98]. Thus, when analyzing miRNA isolated from apoptotic bodies, special care should be taken to manage the potential background. Frequently, EV isolation is the first step in cf-miRNA investigation studies. The methods used for EV enrichment are worth a separate discussion and have been well summarized in a number of reviews [99][100][101].
As mentioned before, EVs contain a membrane and thus a set of lipids, such as phosphatidylserine, cholesterol, ceramide and sphingolipids in exosomes [102,103], and lysophosphatidylcholines, sphingomyelin and acylcarnitines in microvesicles [93,104,105]. The presence of lipids in EVs makes their elimination during the miRNA isolation process necessary, together with the previously mentioned disintegration of miRNA complexes with biomolecules.
Additionally, recent data suggest that a subpopulation of cf-miRNA circulates bound to the surface of blood cells (Table 2). This miRNA fraction is promising as a source of diagnostic markers, because the levels of some cell-surface-bound miRNAs change with the development of cancer [27,32].
To date, there is no universally accepted hypothesis on the generation of the free cf-miRNA pool. Some types of miRNA-bearing complexes could be actively secreted (exosomes) [106], while for others passive leakage from damaged or dying cells is more feasible [107,108]. Another unknown is how the presumably different clearance rates of different complexes could impact the half-life of different populations of cf-miRNA.
Handling and Storage of Biological Fluids before miRNA Isolation
A key step in cf-miRNA isolation is the separation of the liquid portion of biofluids from cells and cell debris. This is necessary to prevent contamination of the sample by cellular miRNAs. More than half (almost 58%) of diagnostically relevant miRNAs are highly expressed in at least one type of blood cell [109]. Hemolysis results in a significant increase in the levels of many cf-miRNAs, most prominently the erythrocyte-specific miR-451a [109][110][111][112]. A significant number of epithelial cells is present in urine, saliva, cerebrospinal fluid and other biofluids [113][114][115] and should be removed before they have the chance to contribute to the cf-miRNA pool by active secretion or passive leakage. That is commonly achieved by sequential centrifugation at low and high speed [19,116,117], with subsequent separation of the cell- and debris-free supernatants.
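Since centrifugation steps are reported either in rpm or in units of g, the short sketch below converts between the two using the standard relation RCF = 1.118 × 10^-5 · r · rpm^2, with the rotor radius r in cm; the rotor radius and the example speeds are illustrative assumptions, not a recommended protocol.

```python
# Convert rotor speed (rpm) to relative centrifugal force (x g) using the
# standard relation RCF = 1.118e-5 * r * rpm^2, with rotor radius r in cm.
# The radius and speeds below are illustrative, not a validated protocol.

def rcf(rpm: float, radius_cm: float) -> float:
    """Relative centrifugal force (x g) for a given speed and rotor radius."""
    return 1.118e-5 * radius_cm * rpm ** 2

radius = 8.0  # cm, assumed rotor radius
for label, rpm in (("low-speed spin (pellet cells)", 2000),
                   ("high-speed spin (pellet debris)", 12000)):
    print(f"{label}: {rpm} rpm -> {rcf(rpm, radius):,.0f} x g")
```

The conversion matters in practice because the same rpm corresponds to very different g-forces on rotors of different radii, which is one source of irreproducibility between pre-analytical protocols.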
Storage conditions of biofluid samples before removal of cell debris, and the subsequent storage of blood plasma/serum, cell-free urine and other fluids, affect the efficacy of cf-miRNA isolation and quantification [118,119]. Even for samples preserved in the slurry resin [119], long-term storage may decrease the miRNA concentration in the sample, as do multiple freeze-thaw cycles [4,118]. As described in the review [4], for mid-term storage (<20 months) no major differences in serum miRNA levels were observed between −80 °C and −20 °C; nonetheless, some individual miRNAs were seriously affected by those conditions. A slight decrease was observed within the range of 2-4 years of storage; after 6 years of storage, a significant decrease of miRNA levels was observed that only accentuates in the course of time.
Concerning dried serum spots, incomplete drying of the blots before storage was prejudicial to their preservation [4].
In experiments with urine, all conditions demonstrated a surprising degree of miRNA stability: by the end of ten freeze-thaw cycles, 23-37% of the initial amount remained; over a 5-day period of storage at room temperature, 35% of the initial amount remained; and at 4 °C, 42-56% of the initial amount remained [120].
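If one assumes, for illustration, that each freeze-thaw cycle removes a constant fraction of the remaining miRNA (an exponential-decay model, which is itself an assumption), the per-cycle retention implied by these figures can be estimated as follows.

```python
# Per-cycle miRNA retention implied by 23-37% remaining after ten
# freeze-thaw cycles, assuming a constant fractional loss per cycle.
for remaining_after_10 in (0.23, 0.37):
    per_cycle = remaining_after_10 ** (1 / 10)
    print(f"{remaining_after_10:.0%} after 10 cycles -> "
          f"{per_cycle:.1%} retained per cycle "
          f"({1 - per_cycle:.1%} lost per cycle)")
```

Under this simple model, the reported range corresponds to roughly 10-14% of the remaining miRNA being lost per freeze-thaw cycle.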
With that, it is possible that different fractions of miRNA, including particle-free miRNA and EV cargo miRNA, differ in their sensitivity to storage conditions. For example, exosomal miRNAs showed extra stability under different storage conditions [121]. The stability of individual miRNAs in general also depends on the storage conditions. For example, miR-145 and miR-20a degraded at room temperature, but both are stable at 4 °C, −20 °C and −80 °C for 72 h in serum and as cDNA, which was additionally shown to be stable for at least 3 months at −20 °C and to survive four freeze-thaw cycles at −20 °C without significant degradation [122].
Based on the foregoing, it may be concluded that not all biofluids are similar in terms of storage conditions, and the method and time of storage should be determined by the tasks set in each specific case.
General Considerations and Technical Parameters Defining Isolation Method
While choosing a method for the isolation of miRNAs from biological fluids, it is necessary to take into account the peculiarities of their composition (ionic composition, proteins, polysaccharides, etc.), which can affect the efficiency of the isolation. For example, blood plasma contains a high amount of protein, with a total concentration of 7.2 g/dL for a healthy adult human [123]. Accordingly, the concentration of chaotropic agents should be sufficient to extract miRNAs from the various protein complexes. Urine contains a large fraction of nitrogen (with urea the most predominant form), phosphorus, sodium and potassium, with total suspended solids at 21 mg/L and total dissolved solids at 31.4 mg/g [124]. That is why, when using chaotropic salts (guanidine), their concentration should be lower than for the isolation of miRNA from blood plasma. Normal saliva contains a large amount of glucose (0.5-1.00 mg/100 mL) [125], which can also affect the efficiency of extraction.
Progress in cf-miRNA extraction technologies has made it evident that any effective protocol should successfully achieve three main goals:
− enable complete dissociation of miRNA complexes with biomolecules of different nature;
− protect miRNA from enzymatic and non-enzymatic degradation during isolation, regardless of their sequence;
− prevent contamination of miRNA preparations with inhibitors of enzymes used in downstream analyses or substances that hinder accurate quantification by UV absorption or fluorescence detection.
While numerous protocols allow for total RNA isolation, several more recent options are specifically tailored for miRNA isolation and provide either isolation of total RNA without loss of miRNA or selective purification of miRNA [70]. The choice of isolation method not only determines the quality and quantity of the extracted miRNA as a whole, but can also favor the isolation of certain individual cf-miRNAs, leading to apparent differences in the expression of the same miRNAs when isolated by different methods. The extraction of at least some cf-miRNAs depends closely on the extraction method, suggesting incomplete dissociation of cf-miRNA complexes and highlighting a connection between the efficacy of isolation, miRNA sequence and the type of complexing with biomolecules [126].
For example, depending on the starting volume of the biological samples used, the conditions of the protocol can favor the extraction of GC-poor miRNAs [127]. Such differences in isolation efficiency can have a dramatic impact on downstream analyses and, most importantly, on the normalization of cf-miRNAs.
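To illustrate why such a bias matters for normalization, the following sketch applies a hypothetical GC-dependent recovery model to two invented 22-nt sequences and compares the target-to-reference ratio before and after "extraction". The sequences, the linear bias model and its coefficients are all assumptions for demonstration, not measured values.

```python
def gc_content(seq: str) -> float:
    """Fraction of G and C bases in an RNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def recovery(seq: str, base: float = 0.5, slope: float = 0.6) -> float:
    """Hypothetical extraction efficiency rising linearly with GC content."""
    return base + slope * gc_content(seq)

# Invented 22-nt sequences: a GC-poor "target" and a GC-rich "reference".
target = "UAUUAAUCAGACUGAUGUUGAA"    # illustrative only
reference = "UGGCAGCGCGGCAAAGCUGGCG"  # illustrative only

true_copies = {"target": 1000.0, "reference": 1000.0}
measured = {
    "target": true_copies["target"] * recovery(target),
    "reference": true_copies["reference"] * recovery(reference),
}

true_ratio = true_copies["target"] / true_copies["reference"]
measured_ratio = measured["target"] / measured["reference"]
print(f"true target/reference ratio:     {true_ratio:.2f}")
print(f"measured target/reference ratio: {measured_ratio:.2f}")
# A GC-dependent recovery bias shifts the normalized ratio away from 1,
# mimicking a spurious "differential expression" signal.
```

Because most relative-quantification schemes normalize a target miRNA against a reference species, any sequence-dependent recovery difference propagates directly into the reported fold change, even when the true abundances are identical.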
Successful liberation of miRNA from complexes is the key to efficient miRNA isolation. The formation of complexes with proteins and lipoproteins can be sequence dependent. Complexes of different types can have different affinities and can be stabilized by different types of interactions (ionic, hydrophobic, etc.). EVs contain membranes featuring distinct combinations of lipids: phosphatidylserine, cholesterol, ceramide and sphingolipids in exosomes [102,103]; lysophosphatidylcholines, sphingomyelin and acylcarnitines in microvesicles [93,104,105]. The high lipid content of EVs and lipoproteins needs to be removed during miRNA isolation, and any possible interaction with miRNA should be disrupted.
Complete dissociation of heterogeneous complexes in the presence of a high abundance of biomolecules of varied nature often demands using a high excess of denaturing solutions. This is true for blood plasma, serum and especially urine. To ensure adequate denaturation and removal of the high protein content from samples (albumin, immunoglobulins, coagulation and complement components, among others), the lysis reagent-to-specimen ratio has to be increased several-fold. Together with the starting fluid volume, this is the most variable step in different protocols [128], and it leads to the use of large volumes of pelleting reagents or to the need to increase the efficacy of miRNA binding by adsorbents.
When planning a miRNA study, it is necessary to choose an extraction method taking into account the following parameters: sample type, expected type of miRNA packaging and its abundance, compatibility of the method with downstream applications (for example, with the method of EV isolation), the duration and cost of the procedure, and the available infrastructure (e.g., the ability to safely handle and dispose of hazardous substances).
Methods of miRNA Isolation from Biofluids
The overwhelming majority of currently used methods for miRNA isolation from biological fluids are based on acid guanidinium thiocyanate-phenol-chloroform extraction, pioneered by Chomczynski and Sacchi in 1987 [129,130]. This method allows efficient fractionation of RNA, DNA and proteins and dissociates most miRNA complexes, which is why it is considered the "gold standard" for total RNA (and, by proxy, miRNA) extraction. A number of commercial kits, for example Trizol LS, utilize phenol-based extraction for the isolation of miRNAs in a fast and simplified manner, and can also further specialize in dealing with specific sample types, such as serum and plasma, or animal or plant tissue [70]. Products like mirVana and miRNeasy include an additional stage of purification on columns with fiberglass sorbents for miRNA enrichment [131]. Solid-phase extraction methods take advantage of the interactions between the functional groups of nucleic acids and solid sorbents under particular conditions. The adsorption of nucleic acids on the silica surface can be regulated with the use of chaotropic agents at different pH, temperature or ionic strength, and additionally enhanced by the addition of bivalent metal ions to the sorption buffer or the sorbent itself [132]. The efficiency of elution also depends on pH and temperature, although nowadays RNase-free water is the elution buffer of choice, given its convenience for most downstream applications.
To date, several strategies have been suggested as replacements for acid phenol-chloroform extraction. The strong chaotropic properties of guanidinium thiocyanate allow efficient dissociation of both cell-free DNA and RNA complexes. However, some cf-miRNA complexes seem to resist the effects of guanidinium thiocyanate [151]. We previously suggested using Folch solution combined with guanidinium thiocyanate to disrupt hydrophobic interactions and assure complete dissociation of miRNA-containing complexes, with subsequent application of the mix directly to fiberglass sorbents [151]. The method bypasses the use of phenol or detergents, which are undesirable due to micellation and the subsequent reduction in the efficiency of the fiberglass sorbent. The method was shown to successfully retrieve miRNA from blood plasma, but has proven cumbersome for cf-miRNA extraction from urine due to the large starting volume.
In the miRCURY LNA RT kit (Qiagen, Hilden, Germany), instead of a laborious organic extraction procedure, cf-miRNA complexes are dissociated and the excess protein and lipoprotein content is precipitated, while miRNA is purified from the supernatant using fiberglass columns [133]. Extraction only requires 30 min and avoids highly toxic chemical agents. According to several studies, this approach is at least as effective as phenol-chloroform extraction [131,152]. Possible drawbacks of the method could include partial loss of miRNA as a result of incomplete denaturation of the complexes, or co-precipitation with the excess biopolymers present in biological fluids. There is also evidence that an additional phosphorylation step may be required to use the obtained preparations for miRNA sequencing analysis [131,152].
Another method, based on the precipitation of biopolymers with octanoic acid, was originally proposed for immunoglobulin isolation [153]. In our lab, we found that complexes of miRNA in blood and urine were dissociated and pelleted by octanoic acid in the presence of guanidinium thiocyanate, followed by miRNA cleanup with fiberglass columns [154]. The method has demonstrated good efficacy of miRNA isolation from blood and even better performance in isolating urine cf-miRNA. Meanwhile, the disadvantages of the method have not been fully explored yet.
Some isolation techniques are used when special downstream analyses are planned. For example, the most commonly used method to study protein/RNA complexes is co-immunoprecipitation of RNA with antibodies against the protein interacting with it [155]. Immunoprecipitation can be conducted by adding antibodies to the solution and then mixing the resulting mixture with the antibody sorbent, or by immobilizing an antibody on a sorbent and then adding it to a solution of the protein. These approaches allow the study of miRNA-AGO2 complexes [48]. It is known that immunoprecipitation protocols which differ in the order of interaction of the components of the complex (antigen (AGO2), antibodies and PA-sepharose), as well as in the presence of blocking antibodies, favor different miRNAs [48]. The authors suggest that non-specific binding to PA-sepharose and autoantibodies against miRNA-binding proteins might contribute to the obtained results [48].
Lipoprotein-bound miRNAs can be isolated using sequential density ultracentrifugation of plasma, with adjustment to well-defined densities using potassium bromide salt. Another method used to separate plasma lipoproteins is size-exclusion chromatography, often with a fast-protein liquid chromatography (FPLC) system combined with columns filled with high-resolution stationary phases [156]. Affinity chromatography, using columns linked to monoclonal antibodies specific to apolipoproteins, offers one more approach to plasma lipoprotein isolation. However, the low pH required to elute lipoproteins retained by the immuno-affinity column leads to a potential risk of losing nucleic acids from the purified lipoprotein. Thus, sequential density ultracentrifugation remains the standard for the isolation of well-defined lipoprotein classes in sufficient quantities to conduct subsequent in vitro or in vivo studies of lipoprotein function and RNA composition [156].
Finally, there are methods and kits designed for miRNA isolation from sources other than biofluids, which could be adapted for cf-miRNA. For example, RNAgem (microGEM, Southampton, UK) provides temperature-driven, single-tube extraction of total RNA and miRNA from mammalian cells, tissues, insects, bacteria and viruses.
The presence or absence of a miRNA precipitation step and the type of co-precipitant may affect the miRNA yield during isolation. The re-precipitation stage represents an additional stage of miRNA sample processing, which leads to a longer isolation time and to potential loss of miRNA. A similar problem is known for circulating DNA precipitation, for which unpredictable DNA loss or contamination was shown for some precipitation protocols, especially those which used positively charged compounds [157]. RNA bacteriophage carrier (MS2), yeast RNA, tRNA and glycogen are the usually used carriers [137,[158][159][160]. The most common co-precipitant is glycogen; however, the combination of tRNA and glycogen was shown to greatly improve the yield and purity of RNA and to maximize the extraction of miRNA from plasma when using TRIzol LS [160]. In our opinion, the most suitable carrier is glycogen, since it does not contain components of a nucleic acid nature which may interfere with subsequent analyses of miRNAs (NGS, microchip technology, etc.).
Numerous efforts have been made to compare the effectiveness of different miRNA isolation methods (Table 3). Most of these studies compare protocols for miRNA isolation from blood plasma, and most commonly simple phenol-chloroform extraction is superior to both phenol extraction with column-based cleanup techniques and phenol-free column-based methods (Table 3). No conclusive results have been obtained regarding the differences in the performance of column-based methods, suggesting that an even greater effort is still needed to compare existing extraction methods and to work toward developing universal standards for miRNA extraction (Table 3; [116]).
Rapidly developing microfluidic technologies could further enhance cf-miRNA extraction. Microfluidic devices are compact units composed of a network of microchannels with diameters of tens to hundreds of micrometers, capable of handling viscous media in volumes ranging from pico- to microliters. Specialized units are used for tuning the fluid movement. Microfluidic devices have a tremendous potential and are able to reproduce laboratory techniques on a microscale with high accuracy and specificity [161]. miRNA isolation, as well as miRNA detection, is among the areas of application of microfluidic technologies. Recently developed microfluidic platforms can perform effective exosome separation and exosomal miRNA detection for liquid biopsies within a single device. Such methods offer the advantages of integration and a fast procedure [162]. Moreover, in microscale processes, reagent consumption can be reduced from milliliters to microliters [161].
Conclusions
Current data clearly show that there is no single most effective method of cf-miRNA isolation, and the choice of methodology is often dictated by sample type and the properties of the miRNA fraction under investigation, as well as the ease of use in the context of each specific research project. This strongly aligns with the opinion of the NIH Extracellular RNA Communication Consortium, which states that, despite the significant development of the methodology, there is no optimal method for isolating miRNAs, and work on the development and optimization of new approaches to this problem must be continued [152]. At this point, both attempts to refine and optimize existing technologies and the exploration of novel approaches could shift this paradigm and give a significant boost to cf-miRNA studies, potentially seeing some of their diagnostic applications make an appearance in the clinical setting.
Author Contributions: O.B. and M.K. performed data analysis, manuscript preparation; I.Z. contributed to manuscript editing; A.Y. contributed to data analysis; P.L. performed supervising and manuscript editing. All authors have read and agreed to the published version of the manuscript.
Funding:
The research was carried out within the state assignment of Ministry of Health of Russian Federation (#121031300227-2 "Use of extracellular microRNA for non-invasive diagnostics of lung cancer") and the Russian state budget project to the ICBFM SB RAS (#121030200173-6 "Diagnostics and therapy of oncological diseases").
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
|
2021-05-28T05:18:03.606Z
|
2021-05-01T00:00:00.000
|
{
"year": 2021,
"sha1": "4ffb86dee22fa0f83c365acbcdbca31f0f131d5f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-4418/11/5/865/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4ffb86dee22fa0f83c365acbcdbca31f0f131d5f",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
237562908
|
pes2o/s2orc
|
v3-fos-license
|
The challenge of simulating the star cluster population of dwarf galaxies with resolved interstellar medium
We present results on the star cluster properties from a series of high resolution smoothed particle hydrodynamics (SPH) simulations of isolated dwarf galaxies as part of the GRIFFIN project. The simulations at sub-parsec spatial resolution and a minimum particle mass of 4 $\mathrm{M_\odot}$ incorporate non-equilibrium heating, cooling and chemistry processes, and realise individual massive stars. All the simulations follow the feedback channels of massive stars, which include the interstellar radiation field, variable in space and time, the radiation input by photo-ionisation, and supernova explosions. Varying the star formation efficiency per free-fall time in the range $\epsilon_\mathrm{ff}$ = 0.2 - 50$\%$ neither changes the star formation rates nor the outflow rates. While the environmental densities at star formation change significantly with $\epsilon_\mathrm{ff}$, the ambient densities of supernovae are independent of $\epsilon_\mathrm{ff}$, indicating a decoupling of the two processes. At low $\epsilon_\mathrm{ff}$, more massive and increasingly more bound star clusters are formed, which are typically not destroyed. With increasing $\epsilon_\mathrm{ff}$ there is a trend for shallower cluster mass functions, and the cluster formation efficiency $\Gamma$ for young bound clusters decreases from $50 \%$ to $\sim 1 \%$, showing evidence for cluster disruption. However, none of our simulations form low mass ($<10^3$ $\mathrm{M_\odot}$) clusters with structural properties in perfect agreement with observations. Traditional star formation models used in galaxy formation simulations based on local free-fall times might therefore not be able to capture low mass star cluster properties without significant fine-tuning.
INTRODUCTION
Recently there has been significant progress in the optimisation of computer algorithms and in the increased capability of high-performance computing systems. Together with an improved numerical implementation of the physical processes governing the evolution of the galactic interstellar medium (ISM), numerical simulations are able to describe the evolution of entire galaxies including a realistic multi-phase ISM component. This is an important step forward (see e.g. Naab & Ostriker 2017, for a review) in the understanding of galaxy evolution, as the galactic ISM is the location of all star formation and most of the metal enrichment in the Universe. In addition, the ISM is the driving site for galactic outflows and the origin of most galactic observables at all cosmic epochs.
These next generation simulations have pushed the use of sub-grid models to ever smaller scales. For some cosmological simulations, entire (>100 pc) patches of the ISM are modelled with sub-resolution models (see e.g. Somerville & Davé 2015, for a review). However, many new high-resolution galaxy formation simulations can resolve the multi-phase ISM structure down to ∼ parsec scales. This allows important physical processes setting the ISM properties to be partially followed directly, such as the impact of individual supernova (SN) explosions approximated by thermal energy injection. Recent simulations also represent the galactic stellar population with increasingly lower mass stellar tracers, down to masses of several thousand (e.g. Hopkins et al. 2018; Kretschmer et al. 2021; Marinacci et al. 2019) or several tens to several hundreds of solar masses (Hopkins et al. 2011; Renaud et al. 2015; Rosdahl et al. 2015; Dobbs et al. 2017; Agertz et al. 2020; Ma et al. 2020; Jeffreson et al. 2021). The highest resolution studies have even started to trace individual massive stars in isolated galaxy models (e.g. Hu et al. 2016; Emerick et al. 2018; Lahén et al. 2020a; Gutcke et al. 2021; Smith et al. 2021; Hirai et al. 2021). While individual stars are the lowest possible resolution element, even these simulations still rely on sub-grid models for estimating the star formation rates, for the sampling of individual stars and, to some degree, for the modelling of their radiation, energy and momentum output. In this study, we focus on the star cluster population properties in simulations of entire galaxies which have the potential to resolve the multi-phase ISM structure as well as the internal structure of star clusters.
The majority of the recent high resolution galaxy formation studies, including those mentioned above, assume an underlying simple sub-grid model which estimates the local star formation rate based on the local gas density divided by its free-fall time, multiplied by an efficiency parameter $\epsilon_\mathrm{ff}$ (Schmidt 1959) [see equation (2) below]. The star formation efficiency per free-fall time based sub-grid model is the most commonly adopted model in all numerical galaxy formation research (see e.g. Naab & Ostriker 2017) and is used with all major simulation methods, i.e. grid codes (e.g. Kravtsov 1999; Teyssier 2002; Bryan et al. 2014), moving mesh codes (e.g. Springel 2010), and particle based hydrodynamics codes (e.g. Springel 2005; Hopkins 2015). The different models have varying additional constraints on the properties of the gas particles which become eligible for star formation in the first place. For simulations with resolved ISM this typically refers to the (collapsing) dense and cold gas phase.
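To make this explicit, a minimal sketch (not the implementation of any specific code cited here) of the free-fall time based star formation model: the local star formation rate density is $\dot{\rho}_\star = \epsilon_\mathrm{ff}\, \rho_\mathrm{gas} / t_\mathrm{ff}$ with $t_\mathrm{ff} = \sqrt{3\pi/(32 G \rho)}$, and an eligible gas particle is typically converted into a star particle stochastically with probability $p = 1 - \exp(-\epsilon_\mathrm{ff}\, \Delta t / t_\mathrm{ff})$ per timestep. The density, timestep and efficiency values below are illustrative assumptions.

```python
import math
import random

G = 6.674e-8  # gravitational constant, cgs (cm^3 g^-1 s^-2)

def free_fall_time(rho: float) -> float:
    """Local free-fall time t_ff = sqrt(3*pi / (32*G*rho)) for density rho [g/cm^3]."""
    return math.sqrt(3.0 * math.pi / (32.0 * G * rho))

def star_formation_probability(rho: float, dt: float, eps_ff: float) -> float:
    """Probability that an eligible gas particle is turned into a star
    particle during a timestep dt, for efficiency eps_ff per free-fall time."""
    return 1.0 - math.exp(-eps_ff * dt / free_fall_time(rho))

# Example: cold dense gas at rho ~ 2.3e-21 g/cm^3 (assumed), a 1 kyr
# timestep, and efficiencies spanning the 0.2 - 50 per cent range above.
rho = 2.3e-21
dt = 1000.0 * 3.156e7  # seconds
for eps_ff in (0.002, 0.02, 0.5):
    p = star_formation_probability(rho, dt, eps_ff)
    forms_star = random.random() < p  # stochastic conversion
    print(f"eps_ff = {eps_ff:5.3f}: t_ff = {free_fall_time(rho)/3.156e13:.2f} Myr, "
          f"p(form star in 1 kyr) = {p:.2e}, formed: {forms_star}")
```

The stochastic conversion keeps the expectation value of the stellar mass formed equal to the sub-grid rate, even when individual gas particles are far more massive than the rate times the timestep.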
The success of this star formation sub-grid model is based on observational evidence at all cosmic epochs that the star formation rate scales with the gas surface density and is an inefficient process (Schmidt 1959; Kennicutt 1998b; Leroy et al. 2008; Genzel et al. 2010; Tacconi et al. 2013). For typical galaxies the fraction of dense, cold gas turned into stars per free-fall time is low, typically around a few per cent (see e.g. Krumholz et al. 2012), and this simple star formation model easily captures the observed scaling of star formation rate with gas surface density (see e.g. Schaye & Dalla Vecchia 2008, for a concise overview).
The ability to follow low mass stellar units or even individual stars in galaxy evolution simulations has shifted the focus of numerical studies to entire galactic star cluster populations. Together with observationally well determined galactic star cluster properties (see e.g. Portegies Zwart et al. 2010; Krumholz et al. 2019, for reviews), this has opened a new diagnostic window on the small scale structure of the star forming gas and the impact of stellar clustering in galaxy evolution simulations. Star cluster population studies therefore support the scientific validation or falsification of current and future theoretical models for the evolution of galaxies with resolved ISM properties.
The origin and impact of clustered star formation is a fundamental question in star formation studies. Observationally, star formation is found to be clustered, although the fraction of stars born in clusters depends heavily on the definition of a cluster (Bressert et al. 2010; Gieles & Portegies Zwart 2011). Star clusters are observed wherever there is star formation, irrespective of galaxy mass, for example in the Small Magellanic Cloud, the Milky Way, or the Antennae galaxies. The stellar clusters are observed to follow uniform cluster mass functions (CMFs), dN/dM ∝ M^α, with power-law slopes of around α ∼ −2. The observed normalisations of the CMFs change with the star formation rate of the galaxies and the age of the cluster populations. Low mass clusters typically disperse quickly, while the young massive clusters (YMCs) that we observe today embedded in the ISM might be longer lived and have properties which could make them potential globular cluster progenitors (Longmore et al. 2014; Krumholz et al. 2019).
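For orientation, a short sketch of what a dN/dM ∝ M^α mass function with α = −2 implies, drawing cluster masses by inverse-transform sampling between assumed mass limits; the limits and sample size are illustrative, not taken from the simulations discussed here.

```python
import random

def sample_cmf(m_min: float, m_max: float, n: int, seed: int = 42) -> list:
    """Draw n cluster masses from dN/dM ~ M^-2 between m_min and m_max
    (solar masses) via inverse-transform sampling."""
    rng = random.Random(seed)
    inv_min, inv_max = 1.0 / m_min, 1.0 / m_max
    return [1.0 / (inv_min - rng.random() * (inv_min - inv_max)) for _ in range(n)]

# Illustrative limits: 1e2 - 1e5 Msun, 10^4 clusters.
masses = sample_cmf(1e2, 1e5, 10_000)
heavy = sum(m > 1e4 for m in masses)
print(f"fraction of clusters above 1e4 Msun: {heavy / len(masses):.3%}")
# For a -2 slope, equal logarithmic mass bins contribute equal total stellar
# mass, but the number counts are dominated by the lowest-mass clusters.
```

This illustrates why the normalisation of the CMF, rather than its slope, carries most of the information about how much stellar mass a galaxy locks into clusters of a given mass range.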
A fundamental observed property of star clusters is that the number of clusters in star forming galaxies decreases with the age of the cluster population, while the slope of the mass function is unchanged (see e.g. Krumholz et al. 2019). There is discussion in the literature on whether the cluster formation rate (CFR) follows the global star formation rate irrespective of galaxy type, meaning that the fraction of stars born in clusters is independent of the star formation rate per unit area (e.g. Chandar et al. 2015, 2017), or whether, on the other hand, young star clusters show higher rates of disruption in galaxies with higher gas densities and star formation rates (Bastian et al. 2012) while the fraction of stars born in clusters increases at high star formation rates per unit area (e.g. Kruijssen 2012; Bastian et al. 2012; Adamo et al. 2020a).
In almost all recent high-resolution galaxy evolution simulations, the normalisation of the star formation rate is regulated by stellar feedback. It becomes independent of the assumed ε_ff for the dense star forming gas (see e.g. Hopkins et al. 2011, and many other simulations thereafter). However, variations of ε_ff can change the structure and distribution of newly formed stars, such as cluster sizes, cluster mass functions, and the fraction of stars formed in clusters.
Recently, some of the highest resolution galactic studies have focused on clustering and star cluster properties in galaxy evolution simulations. Renaud et al. (2015) report clusters above 10^5 M_⊙ with typical sizes of ∼ 5 pc, without a clear indication of power-law cluster mass functions. Li et al. (2017) use a cluster formation model combined with a free-fall time based star formation model. They find power-law-like mass functions for clusters above ∼ 10^3 M_⊙ and cluster formation efficiencies increasing with star formation rate surface density. In a follow-up study, Li et al. (2018) found that the global galactic properties are almost insensitive to changes in ε_ff as long as the feedback is sufficient. For low values of ε_ff ≤ 0.1, their cluster age spreads are larger than observed. They conclude that the range ε_ff = 0.5−1.0 matches observations best. The cluster formation model, however, does not allow for an investigation of the internal cluster structure.
Assuming a very high local star formation efficiency of ε_ff = 1, Ma et al. (2020) report cluster mass functions with slope ∼ −2 above 10^4.5 M_⊙ and typical sizes of ∼ 20 pc. These sizes are larger than those of observed YMCs in the nearby Universe. High local star formation efficiencies are plausible for dense regions of star forming clouds due to the resemblance of the cloud core mass function to the stellar initial mass function (e.g. Wu et al. 2010; Evans et al. 2014; Heyer et al. 2016; Lee et al. 2016; Vutisalchavakul et al. 2016). In the simulations, "early" stellar feedback before SN explosions then regulates the global efficiencies down to observed values (Hopkins et al. 2020). However, Ma et al. (2020) report more stellar mass in bound clusters, i.e. higher cluster formation efficiencies, in simulations with lower star formation efficiencies. In a recent study, Smith et al. (2021) conclude that the formation of HII regions has the strongest impact on the clustering of SN explosions and that the results are independent of the assumed star formation efficiency parameter. Gutcke et al. (2021) also find that SN feedback reduces the clustering of young stars.
On the other hand, Semenov et al. (2018) find a dependence of the galactic depletion time on star formation efficiency for efficiencies lower than ∼ 1 per cent, and a decrease in the fraction of star forming gas for efficiencies higher than ∼ 1 per cent. In a related study, Semenov et al. (2021a) test the effect of variable and fixed ε_ff on the observed spatial de-correlation between star formation and molecular gas (e.g. Kruijssen et al. 2019), concluding that low (∼ 1 per cent)/high efficiencies under-/over-predict the spatial de-correlation.
For some of the highest resolution simulations, Lahén et al. (2019) and Lahén et al. (2020a), using dwarf merger simulations at 4 M_⊙ resolution, find clear evidence for power-law star cluster mass functions from a few hundred to ∼ 10^6 M_⊙, with increasing formation efficiency in regions of high gas and star formation rate surface densities. At this resolution, a first investigation of the internal star cluster rotation/kinematics has also become possible (Lahén et al. 2020b). These studies assumed ε_ff = 0.02 together with a Jeans threshold for immediate star formation. While many star cluster properties for massive clusters (≳ 10^4 M_⊙) are in good agreement with the observations, the entire star cluster population appears to be more compact than observed, making the observed cluster disruption difficult to reproduce. In contrast, the results of the model presented in Dobbs et al. (2017) show clear evidence for rapid cluster disruption. However, their simulated clusters have lower densities than observed clusters, at a resolution of ∼ 300 M_⊙ per particle, which would artificially support tidal disruption.
In summary, no high-resolution simulation so far has produced star clusters with formation properties and an evolution history (i.e. disruption) that are in agreement with observations. While power-law mass functions seem to be a general outcome, the star clusters are either too compact and do not dissolve, or, when they dissolve, they were too diffuse at their formation.
The aim of this study is to investigate the effect of the star formation efficiency per free-fall time on the properties of star cluster populations in dwarf galaxies. Changing this parameter effectively controls how dense a collection of gas particles is allowed to become before star formation begins, along with the associated stellar feedback. We present a suite of isolated gas-rich dwarf galaxy simulations. These simulations have a gas particle mass resolution of 4 M_⊙ and realise individual massive stars with their respective evolutionary tracks, as well as modelling their radiation and supernova feedback at sub-parsec spatial resolution. This allows us to realise individual clusters down to the smallest observed cluster masses of ∼ 200 M_⊙ in order to examine the effect of varying ε_ff on the cluster properties and global galaxy properties.
The simulation suite is part of the GRIFFIN project (Galaxy Realizations Including Feedback From INdividual massive stars). The aim of this project is to perform galaxy-scale simulations of individual galaxies and galaxy mergers (e.g. Lahén et al. 2020a) at such high resolution and physical fidelity that individual massive stars can be realised and important feedback processes such as supernova explosions (Steinwandel et al. 2020) can be reliably included, in order to study the formation of a realistic non-equilibrium multi-phase interstellar medium (Hu et al. 2016, 2017; Hu 2019). As discussed in Naab & Ostriker (2017), the level of detail included in modern numerical simulations is of significant importance, as the environmental density of supernova explosions is controlled by stellar clustering as well as stellar feedback processes.
The paper is organised as follows. Sec. 2 describes the simulation setup, particularly the star formation model. We describe the global galaxy properties of the simulations in Sec. 3, such as the ambient density of star formation and supernova explosions. We then describe the star cluster analysis. Sec. 4 describes how we identify friends-of-friends (FoF) groups and perform an energetic unbinding routine in order to identify bound clusters. We then analyse these FoF groups and bound clusters with a discussion of the CMF, cluster formation efficiency (CFE), ages and sizes. We contrast and discuss these findings on both global and small scales in Section 5, before summarising our findings in Section 6.
Simulation code
All simulations were run using a modified version of the smoothed particle hydrodynamics (SPH) code SPHGal presented in Hu et al. (2014, 2016, 2017), based on GADGET-3 (Springel 2005). SPHGal is a well tested implementation developed to more appropriately treat fluid mixing, alleviating many of the previously studied difficulties of SPH codes. Gas dynamics are modelled using a pressure-energy formulation (see e.g. Read et al. 2010; Saitoh & Makino 2013), with the gas properties smoothed over N_ngb = 100 neighbouring particles using the Wendland C4 kernel (Wendland 1995; Dehnen & Aly 2012). A 'grad-h' correction term (Hopkins 2013) ensures better conservation properties in regions of strongly varying smoothing lengths. The artificial viscosity modelling is updated to better account for converging and shear flows (Monaghan 1997; Springel 2005; Cullen & Dehnen 2010). SPHGal also includes artificial conduction of thermal energy in converging gas flows to suppress internal energy discontinuities. Time-stepping is augmented with a limiter that keeps neighbouring particles within a factor of four of each other in time step, in order to capture shocks, in particular from SN explosions, accurately (see e.g. Saitoh & Makino 2009; Durier & Dalla Vecchia 2012). For a more detailed explanation, please see Hu et al. (2014, 2016, 2017).
Initial conditions
All simulations presented in this paper are produced from identical initial conditions, described in Hu et al. (2016). The initial conditions were set up using the method developed in Springel (2005). The dark matter halo follows a Hernquist profile with an NFW-equivalent (Navarro et al. 1997) concentration parameter c = 10, a virial radius R_vir = 44 kpc and a virial mass M_vir = 2 × 10^10 M_⊙. Embedded in this dark matter halo are a 2 × 10^7 M_⊙ stellar disk and a 4 × 10^7 M_⊙ gas disk. The initial disk consists of 4 million dark matter particles, 10 million gas particles and 5 million stellar particles, setting a dark matter particle mass resolution of m_DM = 6.8 × 10^3 M_⊙ and a baryonic particle mass resolution of m_baryonic = 4 M_⊙. The gravitational softening lengths are ε_DM = 62 pc and ε_baryonic = 0.1 pc for the dark matter and baryonic particles, respectively.
In this paper, we present seven simulations, all with identical initial conditions. For six of these simulations, we vary the star formation efficiency per free-fall time ε_ff between 0 and 50 per cent, which we refer to as SFE0, SFE02, SFE2, SFE10, SFE20 and SFE50. We also ran our fiducial model SFE2 without photoionisation, which we refer to as SFE2noPI. For convenience, we refer to the simulations by their star formation efficiency percentages in the figures.
Chemistry
Our model for chemistry and cooling closely follows the implementation in the SILCC and GRIFFIN projects (Walch et al. 2015; Girichidis et al. 2016; Hu et al. 2016; Lahén et al. 2019), based on earlier work by Nelson & Langer (1997), Glover & Mac Low (2007) and Glover & Clark (2012). We track the chemical composition of gas and stars by following the abundances of 12 elements (H, He, N, C, O, Si, Mg, Fe, S, Ca, Ne and Zn) based on the implementation in Aumer et al. (2013), as well as the non-equilibrium evolution of six chemical species (H2, H+, H, CO, C+, O) and free electrons. The abundances of the first three species are integrated explicitly based on the rate equations within the chemistry network. H+ is formed via collisional ionisation of hydrogen by free electrons and cosmic rays, and is depleted through electron recombination in the gas phase and on the surfaces of dust grains. H2 is formed on the surfaces of dust grains and destroyed via photodissociation by the interstellar radiation field, cosmic ray ionization and collisional dissociation with H2, H and free electrons.
Cooling & heating
We use a set of non-equilibrium cooling and heating processes, where the processes depend on the local density and temperature of the gas as well as the chemical abundances of the species, which may not be in chemical equilibrium. Cooling processes include the fine structure lines of C+, O and Si+, the rotational and vibrational lines of H2 and CO, the hydrogen Lyman-α line, the collisional dissociation of H2, the collisional ionization of H, and the recombination of H+. Heating processes include photo-electric heating from an interstellar radiation field generated by new stars and varying in space and time (Hu et al. 2017), photoelectric effects from dust grains and polycyclic aromatic hydrocarbons, ionization by cosmic rays, photodissociation of H2, ultraviolet (UV) pumping of H2, and the formation of H2. For high-temperature regimes, where T > 3 × 10^4 K, the simulations do not follow non-equilibrium cooling and heating processes. Instead, we adopt the cooling function presented in Wiersma et al. (2009), which assumes an optically thin interstellar medium (ISM) in ionization equilibrium with the cosmic UV background from Haardt & Madau (2001).
Star formation model
The star formation algorithm samples stellar masses from a Kroupa IMF (Kroupa 2001) with an upper limit of 50 M_⊙. Sampled masses greater than the gas particle mass (4 M_⊙) are statistically realised as individual stellar particles; in this case, the remaining mass is taken from nearby star forming gas particles in order to conserve mass in the simulation. Sampled masses lower than 4 M_⊙ are realised as stellar population particles that store the IMF constituents with a mass above 1 M_⊙ (see Hu et al. 2016).
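To make the sampling step concrete, below is a minimal Python sketch of a two-segment broken power-law IMF sampler in the spirit of Kroupa (2001), with the 50 M_⊙ cap used above. The segment edges, slopes and inverse-transform scheme follow the standard Kroupa form, but the implementation itself is our illustrative assumption, not the simulation code.

```python
import numpy as np

# Illustrative two-segment Kroupa (2001) IMF sampler:
# dN/dm ∝ m^-1.3 for 0.08-0.5 Msun and m^-2.3 for 0.5-50 Msun,
# sampled by inverse-transform within each segment.
EDGES = np.array([0.08, 0.5, 50.0])   # segment edges [Msun]
SLOPES = np.array([-1.3, -2.3])       # dN/dm power-law slopes

def _segment_weights():
    """Relative number of stars per segment, continuous at 0.5 Msun."""
    coeff = np.array([1.0, 0.5 ** (SLOPES[0] - SLOPES[1])])
    a, b, p = EDGES[:-1], EDGES[1:], SLOPES + 1.0
    integrals = coeff * (b**p - a**p) / p
    return integrals / integrals.sum()

def sample_kroupa(n, rng=None):
    """Draw n stellar masses [Msun] by inverse-transform sampling."""
    rng = rng or np.random.default_rng()
    seg = rng.choice(len(SLOPES), size=n, p=_segment_weights())
    u = rng.random(n)
    a, b, p = EDGES[seg], EDGES[seg + 1], SLOPES[seg] + 1.0
    return (a**p + u * (b**p - a**p)) ** (1.0 / p)

masses = sample_kroupa(100_000)
print(f"number fraction above 4 Msun: {(masses > 4.0).mean():.4f}")
```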
Every gas particle has an associated Jeans mass, defined as

$M_\mathrm{J} = \dfrac{\pi^{5/2} c_\mathrm{s}^{3}}{6\, G^{3/2} \rho^{1/2}}$,    (1)

where $c_\mathrm{s}$ is the local sound speed of the gas, $G$ is the gravitational constant and $\rho$ is the gas density.
A gas particle becomes defined as 'star-forming' only if M_J < N_thres M_ker, where M_ker = N_ngb m_gas is the SPH kernel mass and N_thres is a free parameter. As in Hu et al. (2017), we adopt N_thres = 8 to properly resolve the Jeans mass of the star-forming gas.
For gas particles with Jeans masses between 0.5 M_ker and 8 M_ker we use a 'Schmidt-type' (Schmidt 1959) approach to calculate the local star formation rate:

$\dfrac{\mathrm{d}\rho_{*}}{\mathrm{d}t} = \varepsilon_\mathrm{ff}\, \dfrac{\rho_\mathrm{gas}}{t_\mathrm{ff}}$,    (2)

where ε_ff is the star formation efficiency per free-fall time, $t_\mathrm{ff} = \sqrt{3\pi/(32\, G\, \rho_\mathrm{gas})}$ is the gas free-fall time (Binney & Tremaine 2008), and ρ_* and ρ_gas are the stellar and gas volume densities, respectively. For gas particles with M_J < 0.5 M_ker, we enforce instantaneous star formation, as introduced in Lahén et al. (2019).
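A common way to realise equation (2) discretely is to convert each star-forming gas particle into a star with probability p = 1 − exp(−ε_ff Δt/t_ff) per timestep. The following Python sketch illustrates this; the stochastic-conversion form, unit choices and names are our assumptions, not necessarily the exact scheme used in SPHGal.

```python
import numpy as np

G_CGS = 6.674e-8           # gravitational constant [cm^3 g^-1 s^-2]
MYR_S = 3.156e13           # one Myr in seconds

# Illustrative stochastic realisation of equation (2): each star-forming
# particle is converted with probability p = 1 - exp(-eps_ff * dt / t_ff).
def free_fall_time(rho_gas):
    """t_ff = sqrt(3*pi / (32 G rho)) for rho in g cm^-3, result in s."""
    return np.sqrt(3.0 * np.pi / (32.0 * G_CGS * rho_gas))

def forms_star(rho_gas, dt, eps_ff, rng=None):
    """Boolean mask of gas particles turned into stars this step."""
    rng = rng or np.random.default_rng()
    p = 1.0 - np.exp(-eps_ff * dt / free_fall_time(rho_gas))
    return rng.random(rho_gas.shape) < p

rho = np.full(1000, 1e4 * 2.3e-24)   # n_H ~ 1e4 cm^-3, mean mol. weight assumed
print(forms_star(rho, dt=0.01 * MYR_S, eps_ff=0.02).mean())
```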
In this study we explore the effect of varying the star formation efficiency parameter ε_ff on the formation of star clusters. In equation (2), ε_ff is the fraction of 'star-forming' gas which is turned into stars per gravitational free-fall time. A unit efficiency ε_ff = 1 describes a star formation rate for which all local star-forming gas is converted into stars on a free-fall time. This numerical implementation has its origin in the early days of numerical galaxy formation simulations (see e.g. Katz 1992) and was motivated by galaxy observations (e.g. Kennicutt 1998a) on kpc scales. Despite the higher resolution and physical fidelity of modern simulations, this star formation model is still being used (see e.g. Semenov et al. 2021b). This can be motivated by the finding that a relation between star formation rate and gas density is also valid within star-forming clouds (see e.g. Pokhrel et al. 2021). By varying ε_ff, we control how much a region of self-gravitating gas particles is allowed to collapse before stars begin to form. For a collapsing cloud, it is expected that high values of ε_ff allow gas particles to be converted into stars while the cloud is still relatively diffuse. Stellar feedback in the form of radiation and SN explosions is more efficient at low densities and might easily disperse the clouds before they reach high densities. A low value of ε_ff allows the gas to collapse to higher densities before forming stars. This will generate denser stellar systems, and stellar feedback might be less efficient at gas dispersal.
Stellar Feedback
In lower resolution galaxy formation simulations, a star particle typically represents an entire population of stars with an assumed IMF (see e.g. Naab & Ostriker 2017, for a review). From this, the abundance of massive stars is calculated and subsequently the energy budget of the stellar feedback of each stellar population particle. For the Kroupa IMF used in this study, there is around one type II supernova per 100 M_⊙ of formed stars. A given stellar population particle with mass m_* would inject (m_*/100 M_⊙) × 10^51 erg into the ISM. In the simulations presented here, however, we assume a minimum stellar mass of 4 M_⊙ representing the low-mass part of the IMF. Every massive star expected to form from the assumed IMF is realised individually in the simulation. Therefore, all stars that explode as SNe are realised individually. As mentioned in Sec. 2.5, particles below 4 M_⊙ are realised as stellar population particles with IMF constituents drawn from an IMF with a mass above 1 M_⊙.
At this mass resolution and with 0.1 pc gravitational force softening, individual SNe are well resolved in our simulations at ambient densities n < 10 cm^−3 (see e.g. Hu et al. 2016, Appendix B). As we show in Sec. 3.2, this corresponds to more than 99 per cent of all SNe in our simulations with photoionisation, and more than 97 per cent for SFE2noPI. Explosions at higher ambient density do not capture all details of the Sedov phase but result in the input of the expected amount of radial momentum to the ambient ISM (see Hu et al. 2016; Steinwandel et al. 2020).
We model the photoionisation of hydrogen (PI) by massive stars with a Strömgren approximation, assuming that the recombination rates balance the photon production rates. The PI model reproduces well the evolution of D-type fronts (Spitzer 1998), in good agreement with the STARBENCH results for different numerical implementations (Bisbas et al. 2015). We note that this model is a good approximation for the local impact of hydrogen-ionising radiation but does not accurately follow the radiation field in dwarf galaxies on larger scales (see Emerick et al. 2018, for a discussion).

Fig. 1 (caption): Each image shown is 3×3 kpc², plotted with 1024×1024 pixels. Dense and compact star clusters form in the low efficiency simulations, with half-mass surface densities as high as 5 × 10^3 M_⊙ pc^−2. The most extreme case is SFE2noPI, where we see surface densities as high as 10^4 M_⊙ pc^−2. At high star formation efficiencies (SFE20 and SFE50, bottom row) the stellar distribution is significantly smoother, with fewer and more diffuse visible clusters. This showcases that star formation model parameters and feedback models have a strong impact on stellar clustering.

We can see that for low star formation efficiencies (top row), the newly formed stars are more clustered. In contrast, the stellar distributions for the higher star formation efficiency simulations (bottom row) are smoother, with less clustered star formation. Despite the differences in the stellar distributions, the corresponding gas distributions shown in Fig. 2 do not show substantial differences in structure. There might be a slight trend for more low-density bubbles in the lower star formation efficiency simulations (top row) in comparison to the higher star formation efficiency simulations (bottom row). We also show the 2 per cent model without photo-ionisation, SFE2noPI, in the middle right panel of both Figs. 1 and 2. Here we see significantly stronger clustering in the stellar distribution compared to the corresponding simulation including photo-ionisation, SFE2. In addition, about a factor of three more mass in stars has formed in the SFE2noPI simulation by t = 400 Myr: M_*(SFE2noPI) = 1.22 × 10^6 M_⊙ vs. M_*(SFE2) = 3.87 × 10^5 M_⊙.
More gas mass has been used up by star formation in the SFE2noPI run, which is also reflected in the gas distribution in Fig. 2 where we see lower densities in the gas overall as well as more substantial low density bubbles created by strongly clustered SN feedback. In Fig. 3 we show the stellar surface density radial profile of all six simulations at 400 Myr, smoothed over 10 pc. The stellar distributions for the runs with photo-ionisation are very similar, but comparing for example the lowest star formation efficiency SFE0 with the highest star formation efficiency SFE50, we can see that the stellar distribution for the SFE50 simulation is much smoother.
For the same physical model, varying the star formation efficiency therefore does not alter the radial distribution of stars; instead, it alters how smooth the stellar distribution is. For the SFE2noPI run, however, we quantitatively see the more efficient transformation of gas into stars in the increased normalisation of the profile. We can also observe the strong clustering in the fluctuations of the surface density. Fig. 4 shows the star formation rate (SFR, top panel), the outflow rate (OFR, second panel) and the mass loading for all times in the simulation (third panel) as well as for 200–500 Myr only (bottom panel), where the mass loading is defined as the ratio of the outflow rate to the average SFR. All simulations including photo-ionisation settle to similar star formation rates. In the first 50–100 Myr, the peak at the onset of star formation is noticeably higher and slightly delayed at lower star formation efficiencies, as it takes time for the gas to reach the density threshold; but as soon as the Jeans mass of the gas drops to 0.5 M_ker (discussed in Sec. 2.5), many stars form immediately. This high peak in star formation is then reflected in a strong peak in the outflow rate with a small time lag. This feature is seen in all models, but less so in the higher star formation efficiency runs. Star formation is also more bursty in the SFE0, SFE02 and SFE2 runs in comparison to the SFE20 and SFE50 runs. The reason for this can be explained by comparing the two extreme models, SFE0 and SFE50. For the SFE50 run, once the gas passes the upper star formation threshold of M_J < 8 M_ker, these gas particles are defined as star-forming, and statistically 50 per cent of these star forming gas particles will become stars per free-fall time. In this regime, gas does not have to collapse to high densities before forming stars. Neighbouring gas particles will be affected early in the collapse phase by stellar feedback and supernovae after the formation of a star, resulting in a smoother star formation rate. In contrast, the SFE0 run has no star formation until M_J drops below 0.5 M_ker, and so gas must collapse to much higher densities before forming stars on a short timescale. The feedback from the formed stars will therefore be more effective in keeping neighbouring star forming gas particles away from that density threshold, drastically halting star formation. This subsequently results in more bursty star formation. Despite these fluctuations, the star formation rates for all models with photo-ionisation stay almost constant across the entire simulation, at around 10^−3 M_⊙/yr. The reasons for this are discussed next, in Sec. 3.2.
Along with the SFR, the outflow rates of each of the simulations with photo-ionisation are very similar, remaining around 4–5 × 10^−2 M_⊙/yr, showing that the substantial differences in stellar clustering between the different star formation efficiencies do not seem to affect the outflow rates. Subsequently, the mass loading of the simulations with photo-ionisation maintains very similar values, keeping a relatively constant value of approximately ∼ 50−60 from 200 Myr onwards.
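As a rough illustration of these diagnostics, the following Python sketch computes the outflow rate and mass loading from two snapshots; the 500 pc plane height and 10 Myr interval follow the definitions used here, while the array layout, function names and mock data are our assumptions.

```python
import numpy as np

# Illustrative OFR and mass-loading measurement from two snapshots
# separated by 10 Myr: gas counts as outflowing when it crosses the
# planes 500 pc above/below the disk mid-plane within the interval.
def outflow_rate(z_before, z_after, masses, z_cross=500.0, dt_myr=10.0):
    """OFR [Msun/Myr]; z positions in pc, masses in Msun."""
    crossed = (np.abs(z_before) < z_cross) & (np.abs(z_after) >= z_cross)
    return masses[crossed].sum() / dt_myr

def mass_loading(ofr, mean_sfr):
    """eta = OFR / <SFR>, with both rates in the same units."""
    return ofr / mean_sfr

# Mock usage with 4 Msun gas particles; rates converted to Msun/yr.
rng = np.random.default_rng(0)
z0 = rng.uniform(-600.0, 600.0, 10_000)
z1 = z0 + rng.normal(0.0, 50.0, 10_000)
m = np.full(10_000, 4.0)
print(mass_loading(outflow_rate(z0, z1, m) / 1e6, mean_sfr=1e-3))
```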
For the run without photo-ionisation, SFE2noPI, things look slightly different. Many stars are formed in the first 150 Myr, and from 200 Myr onwards the star formation is much more bursty, with several periods of no or very low star formation. The outflow rate, however, remains relatively constant at approximately 7 × 10^−2 M_⊙/yr, as seen in the second panel. When looking at the average mass loading calculated across the entire simulation, as shown in the third panel, we see a slightly lower value in comparison to the runs including photo-ionisation. However, if we only consider the mass loading after 200 Myr, thus excluding the initial starburst, we find the mass loading is approximately a factor of two higher in comparison with the same run including photoionisation (SFE2). The values for the SFR and mass loading are summarised in Table 1, excluding any initial starburst in the galaxy. We also check the metallicity of these outflows for each of the simulations and find very little difference. At a typical metallicity of Z ∼ 0.13 Z_⊙, the outflows are metal enriched compared to the initial gas phase metallicity of Z = 0.1 Z_⊙.
The ISM densities for star formation and supernova explosions
Do we have an explanation for why the star formation model presented here results in similar star formation and outflow rates (see Fig. 4) despite the large differences in star formation efficiency? In Fig. 5 we show the density distribution of the gas particles which are transformed into stars. These density distributions vary significantly between the models. The density distribution without free-fall time based star formation, SFE0 (0%, green), traces the threshold of 0.5 Jeans masses in the SPH kernel for cold gas, from a density of n_H[T = 10 K] ≈ 5 × 10^2 cm^−3 to n_H[T = 100 K] ≈ 5 × 10^5 cm^−3. The SFE02 simulation (0.2%, blue) has a similar distribution with an emerging lower density tail, due to the additional possibility for lower density gas to experience free-fall time based star formation. Still, the gas densities peak in the range ∼ 10^4−10^5 cm^−3. In the fiducial SFE2 run (2%, orange) even more star formation at low densities is possible, and the distribution becomes broader, with a second lower density peak emerging at ∼ 100 cm^−3. The corresponding simulation without photo-ionisation (red) shows a similar distribution but with a larger fraction of stars forming at densities higher than ∼ 100 cm^−3. The high star formation efficiency runs SFE20 and SFE50 (20%, purple; 50%, pink) do not reach high enough star formation densities to hit the Jeans threshold, but form all their stars in the free-fall regime between 8 and 0.5 Jeans masses. As a consequence, star formation mostly takes place at gas densities of ∼ 100 cm^−3.
Comparing the models as we decrease ε_ff, for example from SFE50 to SFE10, we observe an increase in star formation at densities above 10^4 cm^−3 and a decrease in the number of stars formed at lower densities, as we would expect. In summary, the ISM density distributions at which the stars form are qualitatively different when the star formation efficiency is varied. In Fig. 6 we compare the star formation densities shown in Fig. 5, repeated as lightly shaded lines, to the distribution of the ambient ISM densities at which the massive stars explode as supernovae, shown as solid lines. In contrast to the star formation densities, all simulations including photo-ionisation show a similar behaviour for the ambient SN densities. The vast majority of the SNe explode at densities lower than the densities of star formation, with two peaks: a first peak (A) at n_H ∼ 10^−0.3 cm^−3 and a second peak (B) at lower densities of n_H ∼ 10^−3 cm^−3. For simplicity, we have separated the ambient density distributions at a single fiducial density of n_H ∼ 10^−2 cm^−3, which approximately corresponds to the local minimum, into a "high" density region A and a "low" density region B. For the simulations with photo-ionisation, the majority of SNe explode at higher ambient densities (region A), while the numbers become about equal for SFE2noPI (2% – noPI). The distribution of densities also becomes broader with the exclusion of photo-ionisation. Very few (≲ 2 per cent) SNe explode at high densities n_H ≳ 10 cm^−3, while typical stellar birth densities are much higher. An observational study by Hewitt & Yusef-Zadeh (2009) used masers as signatures of supernova remnants (SNRs) interacting with molecular clouds. Assuming the survey to be complete, they find that around 15 per cent of SNRs are maser-emitting. In the simulations, however, we only track the local ambient density at the time of the explosion; some expanding supernova remnant shells may interact with dense gas thereafter.
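The separation of the ambient SN density distribution into regions A and B at the fiducial density n_H ∼ 10^−2 cm^−3 amounts to a simple threshold count, as in this illustrative Python sketch; the mock data are for demonstration only.

```python
import numpy as np

# Illustrative split of SN ambient densities at n_H = 1e-2 cm^-3 into
# a "high" region A and a "low" region B (cf. Fig. 7, top panel).
def sn_density_fractions(n_ambient, n_split=1e-2):
    n_ambient = np.asarray(n_ambient)
    frac_a = (n_ambient >= n_split).mean()
    return frac_a, 1.0 - frac_a

# Mock log-normal peaks near the two reported maxima, for demonstration.
rng = np.random.default_rng(1)
mock = 10.0 ** np.concatenate([rng.normal(-0.3, 0.5, 650),   # peak A
                               rng.normal(-3.0, 0.5, 350)])  # peak B
print(sn_density_fractions(mock))
```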
In the top panel of Fig. 7 we show the fraction of SNe exploding in the high and low ambient density regimes A and B for all simulations. More than 60 per cent explode at the higher densities, and this fraction increases for simulations with higher star formation efficiencies. The SFE2noPI (2% – noPI) simulation shows about equal numbers of SNe in both regions. The bottom panel of Fig. 7 shows the SN fractions as a function of the peak densities in the two regimes. The peak densities are very similar for the simulations including photo-ionisation and slightly lower for the one without. The bars indicate the dispersion in density. The lowest star formation densities, indicated by the vertical dashed lines, hardly overlap with the densities at explosion time. This indicates that the stars have "forgotten" their birth environment by the time they explode as SNe, i.e. the typical massive star explodes in a completely different environment from where it was born, and this environment appears to be "universal" and independent of the details of the star formation model. This is a plausible explanation for why the outflow rates of the models with different star formation efficiencies are so similar (see Fig. 4): the SNe couple to the ISM in a very similar way.
We suggest that most SNe explode at typical ISM densities (region A) for these dwarf galaxy systems, which are dominated by neutral gas in equilibrium. At lower ambient densities (region B in Fig. 6), SNe explode in pre-processed environments, mostly affected by previous nearby SN explosions. Qualitatively, these results agree with earlier findings (Hu et al. 2017; Smith et al. 2021; see Sec. 5.1).

Fig. 4 (caption): Top panel: star formation rates (SFR) for the five simulations with varying star formation efficiency parameters and the no-photoionisation run. All simulations with photoionisation have similar star formation rates; the one without has a higher average rate and is more bursty. Second panel: gas outflow rate (OFR), defined by the gas crossing 500 pc above and below the central disk in 10 Myr intervals; the outflow rates are relatively similar for all simulations at all times, and the average metallicity of the outflowing gas is given. Third panel: mass loading, defined as the ratio of the instantaneous OFR to the average SFR across the entire simulation (0–500 Myr); the simulations show no major differences. Bottom panel: mass loading with the SFR averaged between 200 and 500 Myr, excluding the initial starburst in each simulation; the mass loading of the run without photo-ionisation (SFE2noPI) is a factor of two higher than that of the corresponding run with photo-ionisation (SFE2).

Fig. 5 (caption): For SFE0, SFE02, SFE2 and SFE2noPI, most of the star formation takes place at densities n_H ≳ 10^4 cm^−3, with increasingly extended tails towards lower densities of ∼ 10^2 cm^−3. In the high efficiency simulations SFE10, SFE20 and SFE50, most stars form from gas at densities of n_H ∼ 10^2 cm^−3 and the gas does not even reach the several orders of magnitude higher threshold densities of e.g. SFE0. The various simulations are indicated by different colours.

Fig. 6 (caption): The SNe explode at ambient densities several orders of magnitude lower than the stellar birth sites, with a double-peaked distribution characteristic of all simulations: one peak (A) at ∼ 10^−0.3 cm^−3, a second peak (B) at lower densities of ∼ 10^−3 cm^−3, and a minimum at ∼ 10^−2 cm^−3. SFE2noPI (2% – noPI) has a higher SN rate. Fewer than 2 per cent of the SNe explode at densities higher than n_H ∼ 10 cm^−3, indicating that they have no memory of their birth places.

Fig. 7 (caption): Top panel: fraction of SNe in the high (A) and low (B) ambient density regimes (see Fig. 6) for the different simulations. SNe at higher ambient densities dominate; with increasing star formation efficiency the fraction of SNe at low environmental densities decreases, which can be connected to lower cluster formation efficiencies (see Sec. 4) and therefore less clustered SN events. The simulation without photo-ionisation (red symbols) shows an inverted behaviour, with the lower ambient densities becoming dominant. Bottom panel: fraction of core collapse SNe in the high (A, diamonds) and low (B, circles) density regimes as a function of ambient density, with the same colour coding as in the top panel; symbols indicate the density peaks, horizontal bars show the dispersion, and dotted vertical lines indicate the lowest star formation densities in each of the simulations (see Fig. 6). All simulations with photo-ionisation have similar high (A) and low (B) density peaks of ∼ 10^−0.3 cm^−3 and ∼ 10^−3 cm^−3, respectively. For the SFE2noPI (2% – noPI) simulation, the peaks are shifted to slightly lower ambient densities and, in contrast to the other simulations, more than half of the SNe explode at lower densities.
IDENTIFYING STAR CLUSTERS AND THEIR PROPERTIES
In this section we discuss how we identify clusters of stars in our simulations, as well as the properties of these clusters in the different simulations. Note that we primarily discuss only four of the seven simulations previously shown: SFE02, SFE2, SFE20 and SFE50. The cluster properties from the SFE0 simulation are very similar to those of SFE02, whilst the SFE10 population is very similar to SFE20. We identify clustered stars in the simulations using a friends-of-friends algorithm (FoF, see e.g. Davis et al. 1985) with a linking length of 5 pc; a minimal sketch of this grouping step is given after the list below. For each FoF group, which represents a star cluster, we impose a minimum of 35 stars (see e.g. Lada & Lada 2003) as well as a minimum mass of 200 M_⊙ for the analysis. Following the FoF analysis, we perform a binding energy analysis, as described in Sec. 4.2, on each of these FoF groups to determine if it is a bound cluster. Throughout this paper, effort is made to distinguish between:
• FoF groups: physical associations of stars identified by the friends-of-friends algorithm. Observationally, this corresponds to stellar associations and star clusters.
• Bound clusters: Bound groups of stars verified by the unbinding procedure described in Section 4.2. These are observed as bound star clusters.
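As a minimal sketch of the FoF grouping step with the cuts described above, one possible Python implementation follows; the KD-tree/connected-components approach, function names and units are our assumptions, not the paper's actual pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

# Illustrative FoF grouping: link all star pairs closer than the linking
# length and take connected components; keep groups with at least 35
# members and at least 200 Msun, as described in the text.
def fof_groups(pos, masses, l_link=5.0, n_min=35, m_min=200.0):
    """pos: (N, 3) array [pc]; returns a list of member-index arrays."""
    n = len(pos)
    pairs = cKDTree(pos).query_pairs(r=l_link, output_type='ndarray')
    adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])),
                     shape=(n, n))
    n_comp, labels = connected_components(adj, directed=False)
    groups = []
    for g in range(n_comp):
        idx = np.flatnonzero(labels == g)
        if len(idx) >= n_min and masses[idx].sum() >= m_min:
            groups.append(idx)
    return groups
```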
Virial Analysis
To determine if a FoF group is a bound cluster, we perform a virial analysis by calculating the kinetic and potential energies of all stellar particles in a given FoF group. The kinetic energy is calculated using the velocities relative to the centre-of-mass motion of the FoF group. The potential energy is computed directly by calculating the potential between each pair of particles. We only consider stars and exclude gas and dark matter from this analysis. The virial parameter for each FoF group is calculated as α_vir = −U/(2K), where U is the sum of the potential energies and K is the sum of the kinetic energies of all stellar particles within the FoF group. A virial parameter α_vir ≥ 1 therefore denotes a bound star cluster. Fig. 8 shows the virial parameters of FoF groups with ages less than 20 Myr formed between 200 and 500 Myr in the simulations, prior to the unbinding procedure described later in Sec. 4.2. By plotting the FoF groups younger than 20 Myr, we capture how bound the stellar groups are shortly after their formation.
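For illustration, the virial analysis can be written as a direct double sum over the group members. The following Python sketch assumes positions in pc, velocities in km/s and masses in M_⊙; it mirrors the definition above but is not the analysis code itself.

```python
import numpy as np

G_PC = 4.301e-3  # G in pc (km/s)^2 / Msun

# Illustrative virial analysis: direct-sum pairwise potential energy,
# kinetic energy in the group's centre-of-mass frame, and
# alpha_vir = -U / (2K); alpha_vir >= 1 marks a bound group.
def virial_parameter(pos, vel, m):
    """pos [pc], vel [km/s], m [Msun] for one FoF group."""
    v = vel - np.average(vel, axis=0, weights=m)
    kinetic = 0.5 * np.sum(m * np.sum(v**2, axis=1))
    dx = pos[:, None, :] - pos[None, :, :]
    r = np.sqrt(np.sum(dx**2, axis=-1))
    i, j = np.triu_indices(len(m), k=1)
    potential = -G_PC * np.sum(m[i] * m[j] / r[i, j])
    return -potential / (2.0 * kinetic)
```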
With increasing star formation efficiency, we see a decrease in the total number of FoF groups and a decrease in their typical virial parameter. It is important to note that these are simply physical associations, and therefore these groups of stars are likely to contain contaminating stars. It is, however, interesting to see that such a high fraction of the identified FoF groups, which are still likely to contain contaminating stars, have such high virial parameters in the SFE02 and SFE2 runs. Going from SFE2 to SFE20, however, shows a steep decrease in the fraction of bound clusters. These higher star formation efficiencies result in a high fraction of FoF groups with virial parameters below 1. Some of these clusters are truly unbound, but some are also contaminated with high-velocity stars. The unbinding procedure explained in Sec. 4.2 removes these contaminating stars. One can note that the bound fraction increases again from the SFE20 to the SFE50 run, but with the very low number of FoF groups identified in these simulations, this is likely just low-number statistics.
Unbinding procedure
As described in Section 4.1, we calculate the overall potential and kinetic energies of the FoF groups in order to determine if they are bound. Therefore, following the FoF analysis, which identifies physical associations of stars, we perform an energetic analysis of the FoF groups in order to remove contaminating stars (or to determine if a group is completely unbound). This is done by first sorting the stars by their distance from the centre of mass and then calculating the potential energy of every member of the FoF group in relation to the other members. Working from the outside of the FoF group inwards, a star is defined to be bound if its potential energy exceeds its kinetic energy (normalised to the bulk motion of the overall FoF group in relation to the galaxy). When a star is determined to be unbound, it is removed from the group and the potential energies of the remaining stars are updated to exclude it. This is repeated for all stars within a FoF group. When a FoF group survives this procedure, that is, at least 35 members remain (following the definition from Lada & Lada 2003), the FoF group is classified as a bound cluster.
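A compact Python sketch of this outside-in unbinding loop is given below; the single-pass ordering, unit conventions and function names are our assumptions, chosen to mirror the description above.

```python
import numpy as np

G_PC = 4.301e-3  # G in pc (km/s)^2 / Msun

# Illustrative outside-in unbinding: strip a star when its kinetic energy
# (in the group's bulk frame) exceeds its potential binding energy with
# respect to the currently remaining members.
def unbind(pos, vel, m, n_min=35):
    com = np.average(pos, axis=0, weights=m)
    com_v = np.average(vel, axis=0, weights=m)
    order = np.argsort(-np.linalg.norm(pos - com, axis=1))  # outermost first
    bound = np.ones(len(m), dtype=bool)
    for i in order:
        others = bound.copy()
        others[i] = False
        if not others.any():
            break
        r = np.linalg.norm(pos[others] - pos[i], axis=1)
        binding = G_PC * m[i] * np.sum(m[others] / r)
        kinetic = 0.5 * m[i] * np.sum((vel[i] - com_v) ** 2)
        if kinetic > binding:
            bound[i] = False   # removed; later stars see the updated set
    idx = np.flatnonzero(bound)
    return idx if len(idx) >= n_min else None
```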
Cluster mass function
In Fig. 9 we show the mass functions for the SFE02, SFE2, SFE20 and SFE50 simulations (from top to bottom) and for FoF groups, young FoF groups and young bound clusters (from left to right). The mass functions are plotted at different times, as indicated by the colour bar. For comparison, we show a typical mass function with a power-law slope of −2 (e.g. Larsen 2009; Gieles 2009; Zhang & Fall 1999; Vansevičius et al. 2009; Portegies Zwart et al. 2010) in each panel. Such a slope is favoured by observations. We find a very low number of clusters in the SFE20 and SFE50 runs. We therefore decided to stack the young FoF groups together, as well as the bound clusters, simply to increase the number of clusters: without stacking, identifying a slope for the SFE20 and SFE50 runs becomes difficult. The second and third columns therefore show the stacked cluster mass function (CMF) of young FoF groups and young bound clusters, respectively, with average ages of less than 20 Myr. The third column therefore captures the CMF with which bound clusters form. The average slopes from Fig. 9 are summarised in Fig. 10, showing the slope α of the CMF plotted as a function of the star formation rate surface density Σ_SFR. Σ_SFR is calculated within a circle of 1 kpc radius placed over the face-on disk, extending 500 pc above and below the plane of the disk. This encompasses >99 per cent of the star formation in the disk for all simulations. For α, we see that increasing the star formation efficiency parameter results in a shallower slope. We find that the slopes of both the FoF groups and the bound clusters are in agreement with −2, implying that this slope is universal and not restricted to bound structures. This ties into the hierarchical distribution of star formation, in which clouds, stellar associations and bound star clusters all broadly follow a slope of −2 (Elmegreen 2011). As mentioned, a slope of approximately −2 is the broadly accepted value for the cluster mass function of observed star clusters; however, there is some variation in the literature. Adamo et al. (2020b) find a slope of between −1.5 and −2 for a population of star clusters within the Hubble imaging Probe of Extreme Environments and Clusters (HiPEEC) survey. Chandar et al. (2017) find that a slope of −2 is consistent for a range of masses of objects, and it is worth noting that they also split their clusters by age and find no change in the slope, only in the normalisation of the CMF. In our simulations, young bound clusters formed in the SFE20 and SFE50 runs have slopes close to −2; from the CMF alone, this therefore supports a higher value of ε_ff. This, however, is based on a small number of clusters. Something to consider is that observations are naturally biased towards more massive clusters; however, when incompleteness is correctly taken into account, it should not have a large effect on the slope of the CMF.
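The slope α can be estimated by fitting a power law to the binned mass function in log–log space, e.g. as in this Python sketch; the binning choices are illustrative, and the paper's exact fitting procedure is not specified here.

```python
import numpy as np

# Illustrative power-law fit to the CMF: histogram cluster masses in
# logarithmic bins, convert counts to dN/dM and fit log(dN/dM) vs log(M).
def cmf_slope(cluster_masses, m_lo=2e2, m_hi=1e6, n_bins=12):
    edges = np.logspace(np.log10(m_lo), np.log10(m_hi), n_bins + 1)
    counts, _ = np.histogram(cluster_masses, bins=edges)
    centres = np.sqrt(edges[:-1] * edges[1:])
    dn_dm = counts / np.diff(edges)
    ok = counts > 0                          # fit only populated bins
    alpha, _ = np.polyfit(np.log10(centres[ok]), np.log10(dn_dm[ok]), 1)
    return alpha

# Sanity check with mock masses drawn from dN/dM ∝ M^-2 (inverse CDF).
rng = np.random.default_rng(0)
u = rng.random(5000)
m = 1.0 / (1.0 / 2e2 - u * (1.0 / 2e2 - 1.0 / 1e6))
print(cmf_slope(m))   # should be close to -2
```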
Cluster formation efficiency
Fig. 9 (caption, continued): It is important to note that, even with stacking, the low number of FoF groups and clusters present in the SFE20 and SFE50 runs means that making any statements or ruling out any of the models based on the CMF alone is not possible.

Fig. 10 (caption): The slope α of the cluster mass function (dN/dM) as a function of the star formation rate surface density. Both quantities are averaged between 300 and 500 Myr. The slope is shown for FoF groups (triangles) and bound clusters younger than 10 Myr in age (stars), with the colours representing the different simulations. Vertical error bars show the standard deviation from the fit of α and horizontal error bars show the standard deviation of the mean Σ_SFR averaged over 300–500 Myr. Lower ε_ff results in steeper slopes of the cluster mass function.

In the top panel of Fig. 11 we show the time evolution of the total mass of stars formed (dashed lines), the total mass of stars in bound clusters (solid lines), and the mass in young bound clusters with ages younger than 10 Myr (dotted lines). For the low star formation efficiency runs (blue and orange), we see a similar trend. The mass in bound clusters (all ages, solid line) increases steadily with the mass of stars formed (dashed line), indicating that we have a constant fraction of stars in bound clusters at any given time. This is shown quantitatively in the bottom panel by the open circles, which show the fraction of mass in bound clusters of all ages. We observe a roughly constant fraction of ∼ 0.5 for the SFE02 and ∼ 0.2 for the SFE2 run. The cluster formation efficiency (CFE, Γ) is usually defined as the fraction of the mass in young stars that have formed in clusters (see e.g. Elmegreen & Efremov 1997; Kruijssen 2012), which is shown in the bottom panel of Fig. 11 with filled circles. For the low efficiency runs, this value remains roughly constant (solid blue and orange circles, bottom panel of Fig. 11), indicating that newly formed clusters, which are all very bound (see Fig. 8), are not disrupted. For the high efficiency runs SFE20 and SFE50, the situation is quite different. Both the mass in bound clusters of all ages (pink and purple solid lines, top panel of Fig. 11) and the mass of newly formed clusters (pink and purple dotted lines) stay constant. The total mass in young bound clusters remains constant whilst the stellar mass steadily increases. We see that the fraction of mass formed in young clusters (filled pink and purple circles, bottom panel of Fig. 11) stays roughly constant, whilst the overall fraction of mass in clusters decreases (open pink and purple circles). This is clear evidence for the disruption of bound clusters in these simulations. We also find that in 46 per cent and 17 per cent of snapshots in the SFE20 and SFE50 runs, respectively, we do not identify any clusters at all. Qualitatively, we have already seen this behaviour in Fig. 1, where the stellar distribution is smoother for the higher SFE runs. Clusters with a lower virial parameter are less bound (see Fig. 8) and therefore more susceptible to internal and external disruption processes.
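The cluster formation efficiency as defined above reduces to a ratio of mass sums over an age window; a minimal Python sketch follows, with array names assumed for illustration.

```python
import numpy as np

# Illustrative CFE: mass of stars in bound clusters younger than 10 Myr
# divided by the total mass of stars formed in the same age window.
def cluster_formation_efficiency(ages_myr, masses, in_bound_cluster,
                                 age_max=10.0):
    """ages_myr, masses: per-star arrays; in_bound_cluster: boolean mask."""
    young = ages_myr < age_max
    m_young = masses[young].sum()
    if m_young == 0.0:
        return np.nan
    return masses[young & in_bound_cluster].sum() / m_young
```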
We take the average CFE between 300 and 500 Myr from Fig. 11 and show this as a function of the corresponding average star formation rate surface density Σ_SFR in Fig. 12 for all simulations presented in this paper. Σ_SFR is calculated as described in Sec. 4.3 and the CFE is calculated over the same area, which is a cylinder with a radius of 1 kpc and a height of 500 pc. We show the young FoF groups (hexagons) as well as young bound clusters (stars). For the lowest star formation efficiencies, SFE0 (green) and SFE02 (blue), more than ∼ 50 per cent of stars are born in FoF groups or bound clusters (most FoF groups are bound, see Fig. 8). In contrast, the SFE10, SFE20 and SFE50 runs (lime green, pink and purple) have averages of only ∼ 1−2 per cent for young bound clusters and ∼ 10 per cent for young FoF groups. For these high efficiency simulations, most young FoF groups are not bound (see Fig. 8). The observational data in Fig. 12 show the median of data compiled in a recent review by Adamo et al. (2020a), which reveals a positive correlation between Γ and Σ_SFR for young clusters (<10 Myr). It is important to note, however, that not all observational literature supports this relation. Chandar et al. (2017) point out that data at high Σ_SFR are estimated on short time scales (e.g. 1–10 Myr), whilst data at low Σ_SFR are estimated over a longer time scale, up to 100 Myr. Accounting for this, when only considering newly formed stars over an age range of 1−10 Myr, they find a constant Γ of around 24 per cent, irrespective of Σ_SFR, shown as the dashed horizontal line. The observational results have made varying assumptions on the definition of clusters, details of which are discussed in a recent review by Adamo et al. (2020a). We also include data for young (<10 Myr) clusters from Cook et al. (2012), who look at local dwarf galaxies. Their data hint at a negative correlation between Γ and Σ_SFR, which is in slight tension with Adamo et al. (2020a). However, as is discussed in both these studies, cluster formation is highly stochastic and heavily dependent on the evolutionary phase of each individual galaxy. The majority of observational data suggest a positive correlation between Γ and Σ_SFR; however, the significant scatter means that none of our simulations with differing values of ε_ff can necessarily be ruled out by Γ alone.

Fig. 12 (caption, fragment): ... (Baumgardt et al. 2013), as well as the median of a collection of data compiled in a review by Adamo et al. (2020a) for young clusters with ages younger than 10 Myr. We also show data for the SMC and LMC from Chandar et al. (2017), who find a constant Γ ∼ 24 per cent, irrespective of Σ_SFR, for clusters with ages 1–10 Myr (dashed horizontal line). Individual observations of young (<10 Myr) clusters in nearby dwarf galaxies from Cook et al. (2012) hint at a negative trend between Γ and Σ_SFR; the median of the data from Cook et al. (2012), binned between 10^−4.5 and 10^−2 in Σ_SFR, is also shown.
Linking back to the global properties discussed in Sec. 3.1, it is interesting to note that although these different cluster formation efficiencies lead to very different clustering properties, we do not see an effect on the outflow properties.
Cluster ages
Observations of the age spreads of star clusters are challenging, as projection effects can introduce contamination from older stars. From the variety of objects studied in the literature, an appropriate upper limit for the expected age spread is approximately 5 Myr (Longmore et al. 2014). We look at the age spreads of the bound clusters in the simulations, which are shown in Table 1. In the SFE02 and SFE2 models, where we also have the most clusters, the majority of clusters have age spreads of less than 5 Myr, but some have spreads of up to 20−30 Myr. For the SFE20 and SFE50 models, we have fewer bound clusters overall to examine, but these clusters have wider age spreads. This argues against the models with higher star formation efficiencies, as the star formation histories of the individual clusters are more extended.
Cluster sizes
In Fig. 13 we show, from top to bottom, the half-mass radius r_1/2, the half-mass surface density Σ_1/2 and the total surface density Σ_tot, defined as the surface density of the region containing 90 per cent of the cluster mass, as a function of total cluster mass for the bound clusters identified in the simulations. In the SFE20 and SFE50 runs, we see that the clusters are more extended, with half-mass radii of around 3−10 pc, in comparison to the SFE02 and SFE2 runs, which have half-mass radii mostly an order of magnitude lower, around 0.1−1 pc. The SFE10 run shows clusters with sizes similar to SFE20 and SFE50, with a few more compact clusters with r_1/2 of around 1 pc. The SFE10, SFE20 and SFE50 runs also have lower surface densities, consistent with their lower virial parameters (see Fig. 8); the observational comparison data are taken from Brown & Gnedin (2021). It appears that the simulated clusters are either too small for low star formation efficiencies or too large for high star formation efficiencies. The half-mass surface densities of the low star formation efficiency simulations seem to be higher than observed (middle panel of Fig. 13), while the total surface densities (bottom panel) seem consistent with observations. This indicates that clusters below 10^3 M_⊙ produced in simulations with low ε_ff are too compact compared to observations. The situation improves for the few simulated clusters at higher masses (see also Lahén et al. 2020a, for simulated clusters in a starburst environment). The clusters from the SFE10, SFE20 and SFE50 runs are likely too diffuse. We discussed previously, in Sec. 4.4, that there is cluster disruption in the higher star formation efficiency models. Observationally, it is expected that star clusters disrupt, and capturing these disruption properties appears to be a challenge for these galaxy formation simulations. If we do observe cluster disruption in these runs, it is likely because the surface densities were too low at formation. It is worth noting as a caveat, however, that the cluster sizes and properties are likely heavily affected by the fact that the interactions between the stars are softened, reducing dynamical effects such as two-body relaxation. This, along with proposed solutions, is discussed in Sec. 5.2.
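The size and surface-density measures of Fig. 13 follow from Lagrangian radii around the cluster centre of mass, as in this Python sketch; this is a sketch under the stated definitions, not the analysis code.

```python
import numpy as np

# Illustrative size measures: r_1/2 and r_90 are Lagrangian radii around
# the cluster centre of mass; Sigma_1/2 and Sigma_tot follow from the
# enclosed mass fractions, as defined in the text.
def lagrangian_radius(pos, m, frac):
    com = np.average(pos, axis=0, weights=m)
    r = np.linalg.norm(pos - com, axis=1)
    order = np.argsort(r)
    cum = np.cumsum(m[order])
    return r[order][np.searchsorted(cum, frac * cum[-1])]

def cluster_sizes(pos, m):
    """pos [pc], m [Msun]; returns r_1/2 [pc] and surface densities."""
    r50 = lagrangian_radius(pos, m, 0.5)
    r90 = lagrangian_radius(pos, m, 0.9)
    sigma_half = 0.5 * m.sum() / (np.pi * r50**2)   # Msun pc^-2
    sigma_tot = 0.9 * m.sum() / (np.pi * r90**2)    # Msun pc^-2
    return r50, sigma_half, sigma_tot
```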
Star formation and outflow rates
For the same set of physical processes included in the simulations, the choice of ε_ff significantly changes the properties of the forming star clusters but has little impact on global galaxy properties such as the SFR or the mass loading. This is in agreement with previous studies (see Sec. 1). When excluding photo-ionisation, we form significantly more clusters; however, the peaks in the ambient supernova densities change only slightly. After the initial starburst, the mass loading is increased by approximately a factor of two, which is in agreement with the simulations shown in Smith et al. (2021), who find that stronger stellar clustering in their model without HII regions leads to higher mass loading factors, also by a factor of approximately two. Their modelling of heating/cooling, star formation and HII regions is similar to the one used here. Smith et al. (2021) also find that the exclusion of photo-ionisation increases the maximum ambient densities at which SNe explode quite substantially, as well as broadening the distribution of ambient densities, in agreement with Hu et al. (2017) and with our findings.
Over the entire simulation, a substantial part of the regulation of star formation is done by HII regions, which also result in smoother star formation histories. This conclusion seems very robust, as it has also been put forward by earlier studies with varying setups and simulation codes (see e.g. Hopkins et al. 2011; Peters et al. 2017; Butler et al. 2017; Hopkins et al. 2018; Haid et al. 2019; Kannan et al. 2020; Hopkins et al. 2020; Smith et al. 2021; Rathjen et al. 2021). The formation of HII regions also reduces clustering and cluster growth (e.g. Guillard et al. 2018; Smith et al. 2021; Rathjen et al. 2021). We note, however, that beyond the initial starburst, the SFE2noPI simulation has a lower star formation rate.
Star cluster disruption
As this study shows, the observed star cluster mass function can be successfully reproduced with high-resolution galaxy evolution simulations, and the fraction of stars forming in clusters can be controlled by varying the star formation efficiency. However, another fundamental star cluster property cannot be modelled yet: the relatively rapid destruction of star clusters after formation (see e.g. Portegies Zwart et al. 2010; Krumholz et al. 2019). In our study, only simulations with very high ε_ff values show signs of cluster disruption. These clusters, however, are too diffuse compared to observations and can therefore easily be dispersed by tides, which are naturally included in the simulations. The simulated clusters presented in Dobbs et al. (2017) show similar properties. No high-resolution galaxy formation simulation to date produces star cluster properties which are all consistent with observations. The reason could be the still limited capability to properly resolve internal star cluster properties in galaxy evolution simulations. Important caveats here might be:
• limitations in resolving the internal star-forming cloud structure and dynamics on sub-parsec scales;
• the inability of the ε_ff based star formation sub-grid model to capture the accurate distribution and timing of star formation and stellar feedback;
• the inability of the feedback model to accurately capture gas expulsion from forming star clusters (see e.g. Bastian & Goodwin 2006);
• missing feedback channels such as stellar winds;
• the limited capabilities of the typically used second-order integrators to follow important relaxation effects on cluster scales.
A possible solution might require turning away from Schmidt (1959)-type star formation models. Additionally, the interactions between the stars themselves are softened within the simulations, reducing the two-body interactions within the clusters. These interactions are essential for both the formation and the evolutionary fate of the clusters. Gieles et al. (2010) show that by 10 Myr, two-body relaxation has had a strong effect on the evolution of globular clusters. Accurately modelling these interactions could therefore be achieved by the use of higher-order forward integration schemes (see e.g. Rantala et al. 2021), which allow for higher dynamical fidelity in dense stellar systems and a straightforward coupling with hydrodynamics.
CONCLUSIONS
We present high-resolution (sub-parsec, 4 M_⊙) simulations of the evolution of dwarf galaxies. The simulations include non-equilibrium heating and cooling processes and chemistry, an interstellar radiation field varying in space and time, star formation, a simple model for HII regions, as well as supernova explosions from individual massive stars. We explore the impact of the assumed star formation efficiency per free-fall time, ε_ff, for a Schmidt (1959)-type formation model on the resulting star formation and outflow rates and on the star cluster properties. We find the following results:
(i) Star formation rates and outflow rates are independent of ε_ff for the investigated range of ε_ff = 0.002−0.5 (SFE02, SFE2, SFE10, SFE20 and SFE50) and for a model with instantaneous star formation at a high density threshold (SFE0). The test model without HII regions (SFE2noPI) has a similar star formation rate, but a slightly higher outflow rate, resulting in a slightly increased mass loading.
(ii) All simulations form star clusters with power-law mass functions similar to observations. With increasing ε_ff, the slope increases from −3 to −2. The normalisation of the cluster mass function, i.e. the mass of the most massive cluster formed, decreases with increasing ε_ff. At higher ε_ff, clusters are less likely to form, as well as to survive, because the stars are formed at lower ambient densities.
(iii) The clusters become less bound, and the cluster formation efficiencies decrease from Γ ∼ 0.6 to Γ ∼ 0.1, with increasing ε_ff. The physical reason for this is that changing ε_ff controls the densities at which the stars form, as shown in Sec. 3.2. Low star formation efficiencies mean more star formation at higher densities, resulting in massive, compact, bound clusters. The low star formation efficiency model (ε_ff = 0.002, SFE02) and the instantaneous formation model (SFE0) are inconsistent with all available cluster formation efficiency observations.
(iv) None of the models matches the observed cluster sizes. Clusters in simulations with high efficiencies, ε_ff ≳ 0.2, are too diffuse. While they show signs of cluster disruption, these models are disfavoured, as no rapid internal cluster evolution process can make clusters more compact. Clusters in low-ε_ff simulations are too compact and do not disrupt. A more accurate modelling of internal evolutionary processes might be able to alleviate this problem.
The failure of the current highest-resolution galaxy evolution models to capture all fundamental star cluster properties poses a challenge for all future numerical studies of galactic star cluster populations.
DATA AVAILABILITY
The data will be made available upon reasonable request to the corresponding author.
Predicting the growth performance of growing-finishing pigs based on net energy and digestible lysine intake using multiple regression and artificial neural networks models
Background Evaluating the growth performance of pigs in real-time is laborious and expensive, so mathematical models based on easily accessible variables have been developed. Multiple regression (MR) is the most widely used tool to build prediction models in swine nutrition, while the artificial neural networks (ANN) model is reported to be more accurate than the MR model in prediction performance. Therefore, the potential of ANN models in predicting the growth performance of pigs was evaluated and compared with MR models in this study. Results Body weight (BW), net energy (NE) intake, standardized ileal digestible lysine (SID Lys) intake, and their quadratic terms were selected as input variables to predict ADG and F/G among 10 candidate variables. In the training phase, MR models showed high accuracy in both ADG and F/G prediction (R²_ADG = 0.929, R²_F/G = 0.886), while ANN models with 4 and 6 neurons and radial basis activation functions yielded the best performance in ADG and F/G prediction (R²_ADG = 0.964, R²_F/G = 0.932). In the testing phase, these ANN models showed better accuracy in ADG prediction (CCC: 0.976 vs. 0.861, R²: 0.951 vs. 0.584) and F/G prediction (CCC: 0.952 vs. 0.900, R²: 0.905 vs. 0.821) compared with the MR models. Meanwhile, "over-fitting" occurred in the MR models but not in the ANN models. On validation data from the animal trial, ANN models exhibited superiority over MR models in both ADG and F/G prediction (P < 0.01). Moreover, the growth stage has a significant effect on the prediction accuracy of the models. Conclusion Body weight, NE intake and SID Lys intake can be used as input variables to predict the growth performance of growing-finishing pigs, and trained ANN models are more flexible and accurate than MR models. Therefore, it is promising to use ANN models in related swine nutrition studies in the future. Supplementary Information The online version contains supplementary material available at 10.1186/s40104-022-00707-1.
Introduction
To maximize profits in swine production, farmers need to adjust diet formulations and feeding strategies based on their understanding of the relationships between the growth performance of pigs and nutrient supply. However, evaluating the growth performance of pigs in real-time is laborious and expensive. As a result, mathematical models have been developed based on easily accessible variables to predict the response variables that are not easily determined, which has provided an effective approach to quantifying animal production processes and thus to improving the efficiency and sustainability of the modern livestock system [1].
Multiple regression (linear or non-linear) is the most convenient tool to model the relationship between response variables and explanatory variables and is commonly used in animal nutrition studies. For example, diet characteristics (e.g., available energy values in swine diets) [2] or the production performance of livestock (e.g., milk yield in dairy cows) [3] can be predicted using multiple regression (MR) models with relatively high accuracy. The prerequisite for MR utilization is the assumption of a regression relationship (linear or non-linear) between the response variables and the explanatory variables; in reality, however, the relationships among variables are usually complex, resulting in large predictive errors in some situations when modelling with MR, e.g., modelling the maintenance energy requirement of pigs [4,5]. Therefore, more efficient mathematical tools need to be evaluated to determine whether they could better model the complicated animal production systems and achieve better predictive performance.
With the integration of information science and other disciplines in recent years, artificial neural networks (ANN) models have been introduced into agricultural research considering their capacity to deal with complex and flexible nonlinear interrelationships without prior assumptions [6]. The ANN model has a parallel and distributed information processing structure, which consists of interconnected processing elements (artificial neurons or nodes), and is thus better suited to quantify unknown or very complex relationships. Moreover, as a supervised learning process, ANN models usually have stronger learning ability and higher fault tolerance than MR models [7]. Recently, ANN models have been reported to exhibit better prediction performance than MR models in other disciplines [8][9][10][11]. However, in swine research, the application of ANN models has mainly focused on image identification, behaviour detection and disease detection. Only a few visionary scientists have applied ANN models in swine nutrition research; e.g., Ahmadi and Rodehutscord conducted preliminary work using ANN models to predict metabolizable energy (ME) values in pig feed [12]. Thus, more work can be done to extend the applications of ANN models in swine nutrition.
To our knowledge, no previous studies have reported the utilization of ANN models in predicting the growth performance of pigs. Therefore, it is unclear whether ANN models are still more powerful than MR models in predicting pigs' growth performance. Accordingly, the objectives of this study were to 1) predict the average daily gain (ADG) and feed conversion ratio (F/G) of growing-finishing pigs based on dietary nutrient intake by developing MR models and ANN models, and 2) compare the performance of the two kinds of models in growth performance prediction in pigs.
Materials and methods
The general scheme of this study was outlined in Fig. 1.
Data sources
Data were derived from peer-reviewed journal articles published from 2010 to 2019 using the Web of Science online database. Considering the changes in genetic background due to progress in pig breeding, data from earlier literature were not considered. The keywords and phrases used for the literature search were "pig OR pigs OR swine AND growth performance", and 212 papers with 285 trials and 1170 treatment diets were collected after screening.
According to our research objectives, the final database articles were selected based on the following criteria: (i) research articles published in English; (ii) included a control treatment with adequate replicates per treatment (≥ 6) and proper randomization of treatments, and pigs used in the trial had ad libitum access to feed and water; (iii) presented complete diet compositions with ingredients included in the Nutrient Requirements of Swine in China [13], and reported the growth performance data (body weight gain, feed intake, or feed conversion ratio) of pigs. Moreover, treatment diets that included effects of antibiotics or feed additives, that were not formulated based on corn and soybean meal, or that used intact males, immunocastrated males, or pigs fed Ractopamine HCl were excluded from the database. Clear segmentation of pig breeds would produce more accurate input data and ultimately a more accurate prediction; consequently, only Duroc × Landrace × Yorkshire crossbred pigs were included, to eliminate the effects of genetic background. The experimental period had to be in the range of 7 to 35 d so that the calculated average BW could represent the growth stages of the pigs. In addition, all the dietary nutrient concentrations of the diet had to be at least 85% of the NRC recommendation [14]. Finally, the Explore Outliers procedure in JMP Pro version 14.0 (SAS Institute, Cary, NC, USA) was used to eliminate outliers. After excluding trials using the above criteria, 126 trials and 406 treatments were retained from 72 papers for further analysis. The papers used in this study are given in Additional file 1: Table S1, and the statistical information of the training data set and testing data set is given in Additional file 1: Table S2.
Datasets preparation
Growth performance data extracted from the selected papers were recorded in a template that included ADG, average daily feed intake (ADFI), and F/G of pigs for each treatment diet. If any parameter above was missing, it was calculated from the other reported parameters in the paper if available, otherwise, the whole record (treatment diet) was discarded. The average BW of pigs fed each treatment diet was calculated by averaging the initial and final BW of all pigs in the same treatment group.
Nutrient concentrations of each treatment diet were calculated based on the nutrient concentrations of each ingredient and its proportion in the diet, with the nutrient concentrations of individual ingredients from the Nutrient Requirements of Swine in China [13] used as reference values. Net energy (NE) was chosen because it is currently considered the most accurate system to quantify the energy content of pig feed [15]. All amino acids were expressed as standardized ileal digestible (SID) concentrations (AA contents in ingredients multiplied by the corresponding standardized ileal digestibility of the AA) to overcome the disadvantages and limitations of apparent ileal digestibility (AID) and true ileal digestibility (TID) [16]. The nutrient intakes were calculated by multiplying the ADFI by the nutrient concentrations of the corresponding treatment diet. The specific nutrient intake variables included in the original dataset were: NE intake (kcal/d), CP intake (g/d), SID lysine intake (g/d), SID methionine intake (g/d), SID threonine intake (g/d), SID tryptophan intake (g/d), SID valine intake (g/d), acid detergent fiber intake (ADF, g/d) and neutral detergent fiber intake (NDF, g/d), on an as-fed basis.
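As a concrete illustration of the intake arithmetic described above (nutrient intake = ADFI × dietary nutrient concentration), the short Python sketch below computes two of the intake variables; the ADFI and concentration values are invented for the example and do not come from any diet in the database.

```python
# Nutrient intake per day = ADFI (kg/d, as-fed) x dietary nutrient concentration.
adfi_kg = 2.4  # illustrative average daily feed intake

diet = {
    "NE_kcal_per_kg": 2475.0,   # illustrative net energy concentration of the diet
    "SID_Lys_g_per_kg": 8.2,    # illustrative SID lysine concentration of the diet
}

intakes = {
    "NE_kcal_per_d": diet["NE_kcal_per_kg"] * adfi_kg,       # 5940.0 kcal/d
    "SID_Lys_g_per_d": diet["SID_Lys_g_per_kg"] * adfi_kg,   # 19.68 g/d
}
print(intakes)
```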
Then the growth performance and nutrient intake data from the 406 treatment diets were randomly split into a training data set containing 70% of the observations and a testing data set containing the remaining observations. Descriptive statistics of the variables in training and testing data sets were presented in Table 1.
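The random 70/30 partition can be reproduced with a few lines of pandas; the sketch below uses a stand-in frame with illustrative column names, and note that a plain frac=0.70 draw yields 284/122 records from 406, slightly different from the 287/119 split used in this paper.

```python
import pandas as pd

# Stand-in for the 406 treatment-diet records; column names are illustrative.
df = pd.DataFrame({"diet_id": range(406), "BW": 0.0, "NE": 0.0, "Lys": 0.0, "ADG": 0.0})

train = df.sample(frac=0.70, random_state=1)  # ~70% of observations for training
test = df.drop(train.index)                   # the remaining observations for testing
print(len(train), len(test))                  # 284 and 122 with this rounding
```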
Variables selection
Theoretically, more input variables indicate increased discriminative power of the predictive models, but adding irrelevant variables can also distract the learning algorithm and degrade the predictive performance [17]. Thus, the Fit Model procedure with standard least squares personality and emphasis on the Effect Screening function in JMP Pro version 14.0 was first used to eliminate excess variables for ADG and F/G prediction. The input variables included BW and all the nutrient intake parameters, as well as their interactive effects, and P < 0.05 was used as the selection criterion. Since no significant interactive effects were detected, the quadratic and cubic terms of the selected input variables were further included in the MR models, and the improved R² of each model was regarded as the selection criterion.
Developing MR models using training data set
The Fit Model procedure with Stepwise Regression personality in JMP Pro version 14.0 was used to establish MR models to predict ADG or F/G. The NE intake (kcal/d), SID Lys intake (g/d), BW and their quadratic terms within each treatment diet (287 observations) in the training data set were treated as predictors for model development, and study effects were included as a random effect. The mixed-direction and P-value threshold stopping rules were chosen, and variables were entered into and removed from the model at a probability below 0.01. Models with the maximal R², minimized Akaike information criterion (AIC) and Bayesian information criterion (BIC) were identified as the best-fitted MR models [18], which were then checked through graphical inspection for normality of the residuals [19].
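A rough open-source analogue of this JMP workflow can be written with statsmodels; the sketch below fits the full quadratic model and then runs a simple backward elimination at P < 0.01. The data frame and the single-direction elimination are illustrative simplifications (the paper used mixed-direction stepwise selection with a random study effect).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Tiny illustrative rows (BW kg, NE kcal/d, SID Lys g/d, ADG g/d); real work
# would use the 287-observation training set described in the text.
train = pd.DataFrame({
    "BW":  [32, 45, 58, 70, 84, 95, 50, 66, 78, 90, 38, 60],
    "NE":  [5200, 6100, 7000, 7800, 8600, 9200, 6400, 7500, 8200, 8900, 5600, 7200],
    "Lys": [12.5, 14.8, 16.9, 18.4, 19.6, 20.3, 15.5, 17.8, 19.0, 20.0, 13.6, 17.2],
    "ADG": [640, 760, 850, 900, 930, 940, 800, 880, 915, 935, 700, 865],
})

model = smf.ols("ADG ~ BW + I(BW**2) + Lys + I(Lys**2) + NE + I(NE**2)", data=train).fit()
print(model.aic, model.bic, model.rsquared)  # the criteria used for model selection

# Crude backward elimination: drop the least significant term until all P < 0.01.
while len(model.pvalues) > 2 and model.pvalues.drop("Intercept").max() > 0.01:
    worst = model.pvalues.drop("Intercept").idxmax()
    terms = [t for t in model.model.exog_names if t not in ("Intercept", worst)]
    model = smf.ols("ADG ~ " + " + ".join(terms), data=train).fit()
print(model.params)
```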
Developing ANN models using training data set
Artificial neural networks are programs designed to learn and process information by simulating the human brain; they consist of three main components: an input layer, a series of hidden layers and an output layer [20]. The number of hidden layers in an ANN depends on the complexity of the relationships between inputs and target outputs. More hidden layers can increase the chance of reaching local minima during the training phase and contribute to a more unstable gradient.
Neurons, also called nodes, are the basic units composing hidden layers; they receive input from the input layer, scale each input by a weight, add a bias and then apply an activation function to the result [21]. The structure of a classical feedforward ANN model can be demonstrated using the following formulations:
H_1 = F_activation(Σ_m w_m I_m + a_m)
O_1 = F_activation(Σ_n b_n H_n)
where H_1 was the value in the 1st node in the hidden layer, I_m was the value of the mth input variable, w_m was the weighting factor between the mth input variable and the 1st node in the hidden layer, and a_m was the bias; O_1 was the value of the 1st output variable, H_n was the value of the nth node, b_n was the bias, and F_activation was the activation function. The Neural Network procedure in JMP Pro version 14.0 was used to develop a series of ANN models, with the details presented later. In the current study, a three-layer ANN using the Scaled Conjugate Gradient algorithm, including one input layer, one hidden layer and one output layer, was used for model development. Variables used in the ANN models were the same as those in the MR models to ensure comparability between models. Moreover, because the input variables have different unit scales, it is necessary to normalize the data used in establishing the ANN models to obtain prediction errors with appropriate step sizes and to update the weights systematically [22]. The training data set was normalized using the min-max approach as follows:
x'_i = (x_i − x_min) / (x_max − x_min)
where x_i was the observed value of the ith input data, x'_i was the ith normalized data, and x_min and x_max were the minimal and maximal values in the training data. The output layer included two variables: ADG and F/G. Because the input variables were normalized, the predicted output values were re-scaled using the minimal and maximal values of the training data for model evaluation. The re-normalization was conducted as follows:
y_i = y'_i × (y_max − y_min) + y_min
where y_i was the predicted value of the ith output and y'_i was the ith normalized output predicted using the ANN model.
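The min-max scaling and the matching re-scaling of predictions are easy to get subtly wrong (the training minima and maxima must be reused for the test data), so a small sketch may help; the input values are illustrative.

```python
import numpy as np

def minmax_fit(x):
    """Per-column minima and maxima of the *training* data."""
    return x.min(axis=0), x.max(axis=0)

def minmax_transform(x, lo, hi):
    """x' = (x - min) / (max - min), as in the text."""
    return (x - lo) / (hi - lo)

def minmax_inverse(y_scaled, lo, hi):
    """Re-normalization: y = y' * (max - min) + min."""
    return y_scaled * (hi - lo) + lo

X_train = np.array([[60.0, 7500.0, 18.0],   # illustrative BW, NE, SID Lys rows
                    [30.0, 5200.0, 11.0],
                    [95.0, 9800.0, 24.0]])
lo, hi = minmax_fit(X_train)
print(minmax_transform(X_train, lo, hi))                   # values now fall in [0, 1]
print(minmax_inverse(np.array([0.5, 0.5, 0.5]), lo, hi))   # round trip back to units
```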
The training conditions, including a learning rate of 0.1, 1000 training epochs, and the Squared penalty method, were adopted in the current study. Karlik et al. [23] compared five different activation functions and found that the hyperbolic tangent function achieves better recognition accuracy than the other four functions. Meanwhile, radial basis function neural networks are among the most popular neural network architectures [24]. Thus, the hyperbolic tangent function (tanh(x) = (e^(2x) − 1)/(e^(2x) + 1)) and the radial basis function (RB(x) = e^(−x²)) were chosen as candidate activation functions between the hidden layer and the output layer. Identifying the optimal number of neurons in the hidden layer is also a major step in establishing ANN models [25], so mono-hidden-layer structures containing 1 to 10 nodes were evaluated.
Models with different numbers of nodes and activation functions were compared by R² and root mean square error (RMSE), and the model with the maximal R² and minimal RMSE was considered the best-fitted ANN model.
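The node-count screening can be approximated with scikit-learn, as sketched below on synthetic data. This is only an analogue of the JMP procedure: MLPRegressor offers tanh but no radial basis activation, trains with L-BFGS or Adam rather than scaled conjugate gradient, and its alpha parameter only loosely stands in for the squared penalty.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(60, 3))  # stand-in for min-max scaled BW, NE, Lys
y = 400 + 600 * X[:, 1] - 150 * X[:, 0] + rng.normal(0, 20, 60)  # synthetic ADG, g/d

scores = {}
for n_nodes in range(1, 11):  # the 1-10 node structures screened in the text
    net = MLPRegressor(hidden_layer_sizes=(n_nodes,), activation="tanh",
                       solver="lbfgs", alpha=0.1, max_iter=2000, random_state=0)
    net.fit(X, y)
    pred = net.predict(X)
    scores[n_nodes] = (r2_score(y, pred), mean_squared_error(y, pred) ** 0.5)

best = max(scores, key=lambda k: scores[k][0])  # maximal R2, as in the text
print("best node count:", best, "(R2, RMSE):", scores[best])
```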
Comparison between the MR models and the ANN models using testing data set
The testing data set was used to generate predicted ADG and F/G values based on the best-fitted MR models developed using the training data set. Meanwhile, the same testing data set was normalized, input into the best-fitted ANN models, and then re-scaled using the re-normalization equation to generate another group of predicted ADG and F/G values.
The RMSE, R² and concordance correlation coefficient (CCC) were calculated using the two groups of prediction data to evaluate the performance of the selected MR models and ANN models:
RMSE = sqrt((1/n) Σ_{i=1..n} (y_i − y'_i)²)
R² = 1 − Σ_i (y_i − y'_i)² / Σ_i (y_i − ȳ)²
CCC = 2·cov(y, y') / (σ_y² + σ_y'² + (μ_y − μ_y')²)
where y_i and y'_i were the observed values and the output values predicted by the MR model or ANN model, respectively. A lower RMSE value and higher R² and CCC values were considered indicators of better accuracy.
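The three accuracy statistics can be computed directly from the observed and predicted vectors; the sketch below implements the standard definitions (including Lin's form of the CCC) on made-up numbers.

```python
import numpy as np

def rmse(obs, pred):
    return np.sqrt(np.mean((obs - pred) ** 2))

def r2(obs, pred):
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def ccc(obs, pred):
    """Lin's concordance correlation coefficient."""
    mx, my = obs.mean(), pred.mean()
    vx, vy = obs.var(), pred.var()                 # population variances
    cov = np.mean((obs - mx) * (pred - my))
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

obs = np.array([760.0, 520.0, 990.0, 640.0])       # illustrative observed ADG, g/d
pred = np.array([740.0, 545.0, 965.0, 660.0])      # illustrative model predictions
print(rmse(obs, pred), r2(obs, pred), ccc(obs, pred))
```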
The observed vs. predicted plots were generated using observed values and predicted values from the MR models or ANN models, and the following linear equation was fitted in each plot:
y = a + b·x
where x refers to the observed growth performance variable (ADG or F/G) and y refers to the predicted variable. A plot with a slope closer to 1 represents better prediction performance of the corresponding model.
Experimental design of the animal trial used to validate the prediction models
An animal trial was conducted to collect data for further comparison between the MR models and the ANN models. The animal handling procedures received approval from the Animal Care and Use Ethics Committee of China Agricultural University (Beijing, China).
One hundred and ninety-two Duroc × Landrace × Yorkshire crossbred pigs with an average initial body weight of 35.29 ± 3.11 kg were randomly assigned to 4 treatment diets in a completely randomized design, with 4 replicate pens per treatment and 12 pigs (6 barrows and 6 gilts) per pen. The experimental design was a 2 × 2 factorial, with the respective factors being two levels of SID Lys (100% Lys requirement vs. 130% Lys requirement) and two levels of NE (100% NE requirement vs. 105% NE requirement) content in diets (Additional file 1: Table S3). All the diets were fed in mash form and were formulated to meet the nutrient requirements of pigs [13]. The animal trial lasted for 84 d, and the individual pig BW and feed consumption (on a pen basis) were measured on d 0, 14, 28, 42, 56, 70 and 84 to calculate the ADG and F/G. Nutrient intakes of each pen were calculated using the nutrient profiles presented in the Nutrient Requirements of Swine in China [13] and the ADFI of each pen. The values of pig BW, NE intake (kcal/d), SID Lys intake (g/d) and their quadratic terms for each pen were considered as one observation. In total, 96 observations were extracted from 4 replicates of 4 treatments and 6 phases. The details of each observation obtained from the animal trial are presented in Additional file 1: Table S4.
Comparison between the MR models and the ANN models using validation data set gained from the animal trial
The validation data set gained from the animal trial was used to generate predicted ADG and F/G values based on the best-fitted MR models and the best-fitted ANN models established in the training phase. Again, all the input data were first normalized, and the output data were re-scaled at the end when the ANN models were applied, as described in the previous part. The observed vs. predicted plots were generated as described previously.
Based on the results of the previous steps, the MR models exhibited larger errors at greater BW ranges of pigs. To further test the hypothesis that growth stages would influence the prediction performance of the two models, the mean absolute error (MAE = (1/n) Σ_{i=1..n} |y_i − y'_i|) between the observed variables and the variables predicted by the MR models or ANN models was calculated.
The MAE values of the two kinds of models were grouped based on pig BW with a 10 kg interval as follows: 40-50 kg, 50-60 kg, 60-70 kg, 70-80 kg, 80-90 kg, 90-100 kg and 100-110 kg. Two-way ANOVA was conducted with predictive method and growth stage as the main effects. P < 0.05 was considered significantly different and 0.05 ≤ P ≤ 0.10 was considered a significant tendency.
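The grouping-and-ANOVA step translates naturally to pandas and statsmodels; the records below are invented to keep the sketch self-contained, and the real analysis would use the 96 pen-phase observations from the trial.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Invented |observed - predicted| errors per pen, tagged by method and BW bin.
val = pd.DataFrame({
    "abs_err": [31, 28, 55, 60, 20, 22, 24, 26],
    "method":  ["MR", "MR", "MR", "MR", "ANN", "ANN", "ANN", "ANN"],
    "stage":   ["40-50", "40-50", "90-100", "90-100",
                "40-50", "40-50", "90-100", "90-100"],
})

print(val.groupby(["method", "stage"])["abs_err"].mean())  # the MAE per cell

# Two-way ANOVA with predictive method, growth stage and their interaction.
fit = smf.ols("abs_err ~ C(method) * C(stage)", data=val).fit()
print(anova_lm(fit, typ=2))
```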
Variables selection
The results of the two-step variable selection are shown in Table 2. Among the ten candidate variables, pig BW, NE intake and SID Lys intake showed the smallest P-values, which were all below 0.01. The MR models generated using those three variables in linear, quadratic, and cubic terms showed R² of 0.89, 0.93, and 0.93 in ADG prediction, and 0.87, 0.89, and 0.88 in F/G prediction, respectively. Therefore, BW, NE intake, SID Lys intake and their quadratic forms were chosen as the input variables for the following model development.
Best-fitted MR models
The best-fitted MR models for predicting ADG and F/G are presented in Table 3. For ADG prediction, the MR model using BW, SID Lys intake, SID Lys intake², NE intake, and NE intake² exhibited the smallest AIC (AIC = 3278), BIC (BIC = 3381) and RMSE (RMSE = 72) and the maximal R² (R² = 0.929). Pig BW, SID Lys intake², and NE intake² had negative effects on ADG, while SID Lys intake and NE intake had positive effects on ADG. For F/G prediction, the MR model using BW, BW², SID Lys intake, and NE intake had the smallest AIC (AIC = 92), BIC (BIC = 116) and RMSE (RMSE = 0.28) and the maximal R² (R² = 0.886). BW, BW², and NE intake had positive effects on F/G, while SID Lys intake had an adverse effect on F/G. To better clarify the inconsistency between the linear and quadratic forms of SID Lys and NE intake in their contributions to ADG, the responses of ADG to varied SID Lys or NE intake levels are illustrated in Fig. 2. It should be pointed out that Fig. 2 considers the single contribution of SID Lys or NE intake to ADG but ignores the influence of other factors. It indicates that ADG increased with greater SID Lys intake only when the SID Lys intake was below 38 g/d. Moreover, improvement of ADG was observed as the NE intake increased within the range of 0-10,000 kcal/d.
Best-fitted ANN models
The structures of the two best-fitted ANN models for predicting ADG and F/G are presented in Fig. 3. The predictive performances on ADG and F/G of ANN models with different numbers of neurons in 1 hidden layer using different activation functions are exhibited in Tables 4 and 5. The best-fitted ANN models for ADG and F/G prediction were those using the radial basis function with 4 and 6 nodes, with R² of 0.925 and 0.905, and RMSE of 51 and 21, respectively.
Comparison between the MR models and the ANN models using testing data set
The comparison between the best-fitted MR models and ANN models using the testing data set is shown in Table 6 and Fig. 4. For both ADG and F/G prediction, the ANN models showed lower RMSE and greater CCC and R² values, and had slopes closer to 1 in the observed vs. predicted plots than the MR models, implying greater accuracy of the ANN models than the MR models. In addition, there was a discrepancy in the performance of the MR models between the training data set and the testing data set, reflected by a noticeable decrease of R² in ADG prediction (R²_training = 0.929, R²_testing = 0.584) and a slight decrease of R² in F/G prediction (R²_training = 0.886, R²_testing = 0.821), indicating the occurrence of over-fitting.
Comparison between the MR models and ANN models using validation data set gained from the animal trial
The comparison between the best-fitted MR models and ANN models using the validation data set gained from the animal trial is shown in Fig. 5. For both ADG and F/G prediction, the ANN models showed slopes closer to 1 in the observed vs. predicted plots than the MR models, implying the superiority of the ANN models in prediction over the MR models.
In addition, the effects of growth stage and prediction method on the errors of the prediction models are shown in Table 7. An interaction effect between growth stage and prediction method was observed (P < 0.05).
For ADG prediction, the MAE of the MR models was greater than that of the ANN models in all growth stages (P < 0.01) except for 50-60 kg (P = 0.93), and the MAE of the MR models in 60-70 kg, 80-90 kg, 90-100 kg and 100-110 kg was greater than that in 40-50 kg, 50-60 kg and 70-80 kg (P < 0.05). No difference among growth stages was observed for the MAE of the ANN models. For F/G prediction, the MAE of the MR models was greater than that of the ANN models in all growth stages (P < 0.05) except for 70-80 kg (P = 0.93), and the MAE of the MR models in 80-90 kg, 90-100 kg and 100-110 kg was greater than that in 50-60 kg (P < 0.05), while the MAE of the ANN model in 100-110 kg was greater than those in 40-50 kg and 50-60 kg (P < 0.05). Figure 6 illustrates the effect of growth stage on the predictive performance of the MR and ANN models in ADG and F/G prediction. The MAE of the MR models exhibited an increasing tendency as BW increased, while the MAE of the ANN models remained relatively stable. Meanwhile, the ANN models showed lower MAE than the MR models in most growth stages (P < 0.05).
Discussion
In simulating and predictive models, determining the input variables is one of the main tasks. The inclusion of irrelevant variables not only does not help prediction but can reduce forecasting accuracy through added noise or systematic bias. The most sensitive variables to predict ADG or F/G selected in the current study were BW, NE intake and SID Lys intake.
(Table 3 notes: SID Lys and NE in the equations denote SID Lys intake and NE intake; the variables in the equations were selected at P < 0.01; both best-fitted MR models were generated using the training data set, n = 287.)
Fig. 2 The response of ADG to different SID Lys intake (a) and NE intake (b). The curves were generated by the best-fitted MR models in training. Only SID Lys intake and SID Lys intake² were considered as input variables in Fig. 2a, while other variables were neglected. Only NE intake and NE intake² were considered as input variables in Fig. 2b.
Body weight represents the
current physiological state of pigs, which is an important factor that could determine the feed intake and nutrient digestibility of pigs [22]. As pigs grow, more feed is consumed to meet their requirements, leading to greater energy intake, which is used mainly for maintenance and then for body weight gain; thus, NE intake makes a great contribution to the growth performance of growing-finishing pigs [15]. The inclusion of SID Lys intake in the prediction models was in accordance with previous reports, which concluded that SID Lys intake has a significant effect on the growth performance of pigs [26,27]. The specific patterns of ADG influenced by SID Lys intake and NE intake were further illustrated in the current study. According to NRC (2012) [14], 100 g of protein deposition in pigs requires nearly 10 g of SID Lys. In the MR models built in this study, 38 g/d of SID Lys intake would contribute to the highest ADG of 450 g/d, indicating greater efficiency than that reported in NRC (2012), which may be because the latter is an average value for the whole growth period.
Fig. 3 The structure of the best-fitted artificial neural networks in predicting ADG (a) and F/G (b). H_1 was the value in the 1st node in the hidden layer; I_1 was the 1st input; a_m was the bias; O_1 was the value of the 1st output variable; b_n was the bias; F_activation was the activation function.
(Tables 4 and 5 notes: RMSE, root mean square error; * marks the best performance among ANN models with different numbers of nodes and activation functions to predict ADG (Table 4) or F/G (Table 5); all ANN models were generated using the training data set, n = 287.)
The declining trend of
ADG with SID Lys intake greater than 38 g/d could be interpreted in two ways. On one hand, excess lysine intake would have an antagonistic action with other AAs (e.g., arginine, citrulline), which could cause a deficiency of those AAs, impair protein accretion, and result in retarded body growth [28]. On the other hand, increased SID Lys intake is more likely to occur at a higher BW stage, during which the growth performance of pigs is less affected by lysine intake [27]. As pigs grow, the increased energy requirements and more developed digestive tracts result in greater feed intake and NE intake, of which the energy consumed beyond the maintenance requirement is deposited as protein or lipid [29]. This can explain the positive relationship between NE intake and ADG in the current study. However, the deposition patterns for protein and lipid are different, with excess energy being used to deposit protein first at a cost of 10.6 kcal/g ME, and then to deposit lipid at a cost of 10.6 kcal/g ME, while the maximal rate of protein deposition (Pd_max) was not affected by BW [29-31]. Therefore, more NE intake was deposited as fat in the later growth stages of pigs, in accordance with the decreasing slope in the developed model of NE intake vs. ADG in this study as NE intake gradually increased. Even though regression models cannot always interpret the contribution of nutrients to the growth performance of pigs precisely, the above results indicate that the MR models generated in this study were successful and could be helpful in optimizing feeding strategies and decisions in pig production.
The results of the current study further confirmed previous reports that the accuracy of ANN models is influenced by their architecture. Cross et al. [32] reported that the prediction performance of ANN models relies on the number of hidden layers, the activation function, and the number of neurons in the hidden layers. Insufficient numbers of neurons could limit the capacity of an ANN to learn associations between inputs and outputs, while excess numbers of neurons may lead to the undesirable effect of "learning rules by memorizing" instead of learning by generalizing the acquired information, which is usually known as "over-fitting" [33,34]. Boger and Guterman [35] stated that the number of neurons in the hidden layer of ANN models should be between 70% and 90% of the number of inputs. Blum [36] reported a general "rule of thumb" for selecting the number of neurons, recommended to be between the number of input and output variables. The optimal numbers of nodes in the ADG and F/G prediction models developed in this study were 4 and 6, which is reasonable according to the above literature because the numbers of input and output variables in this study were 6 and 1, respectively. Furthermore, the activation function is also an imperative hyper-parameter in ANN models, which can influence the accuracy of the ANN through the weighting process between the hidden layer and output layer. The radial basis function was chosen in this study because it is a powerful technique for interpolation in multidimensional space, especially suitable for modelling time-series (or dynamic) relationships [37]. It is surprising that the MR model for ADG prediction developed in the current study was found to be over-fitted, which did not occur for the ANN models. In many cases, MR models suffer from the prior assumptions about the relationships between variables, thus often leading to "under-fitting" [38]. Instead, the MR model generated to predict ADG in this study showed high accuracy in the training phase but failed to predict ADG with high precision in the testing phase. Veum et al. [39] reported that MR models could exhibit high accuracy with a relatively large sample size of n = 496. With n = 287, the relatively large sample size may have contributed to the high R² of the MR models achieved in this study. Differing from the MR models, there is a higher risk of "over-fitting" occurring in ANN models because the run mode of an ANN is to obtain a local optimal solution rather than a global optimal solution [40].
(Table 7 notes: values are presented as means ± SEM; a-b in the same line indicate MAEs that differ between predictive methods (P < 0.05); V-Z in the same column indicate MAEs that differ among growth stages (P < 0.05); # indicates an interactive effect of method and growth stage (P < 0.05); the MAE were calculated using predicted and observed values in the validation data set (animal trial).)
The supervised learning algorithm and penalty method
were applied in the ANN models, which can stop the learning process when the algorithm produces a larger error in the testing data set. This method cannot be applied in MR models, which may explain why the "over-fitting" occurred only in the MR models.
The major finding of this study was that the ANN models were more flexible and accurate than the MR models in predicting the ADG or F/G of growing-finishing pigs. These results were consistent with previous studies reporting that the precision of ANN models was better than that of MR methods in ruminant nutrition or edaphology [8,10,21]. The better performance of ANN over MR models is mainly because the conventional MR model requires an assumed regression relationship (linear or non-linear) between input variables and output variables, which greatly limits the flexibility of the prediction [41]. The existing associations between input and output variables may not follow the pre-assumptions of MR models, while ANN models make no assumptions related to data distribution, such as homoscedasticity and normality of the residual errors [42]. Moreover, the accuracy of ANN models can be improved through careful selection of the structure and hyperparameters (i.e., hidden layers, nodes and activation functions) [21]. This could also explain the outperformance of the ANN models over the MR models. Large-scale comparisons between these two kinds of models have illustrated that ANN models outperform MR models when using relatively large datasets (n > 20,000), while the opposite pattern occurs for small datasets [43,44]. However, Margenot et al. [21] reported that ANN models exhibited better accuracy than MR models for soil permanganate-oxidizable carbon prediction with a data size of n = 144. As a result, the sample size in the current study (n = 287) was believed to be sufficient to predict the ADG and F/G of growing-finishing pigs using carefully trained ANN models. It should be highlighted that ANN models can also show poor performance in some conditions when compared with MR models, such as when using a sample set with a skewed distribution or when introducing extra variables [34,45]. Currently, the applications of ANN models in swine research are limited to image identification, behaviour detection and disease detection. Based on the results of this study, ANN models also exhibit great potential as an accurate predictive tool in swine nutrition. Nevertheless, a suitable sample size and careful selection of the structure and hyperparameters of ANN models are required to achieve good prediction performance.
We previously found that the prediction error of the MR models increased as BW increased, so we speculated that growth stages may affect the accuracy of the predictive models, which was eventually proved by the results of the animal trial. Many detailed studies have revealed the effect of growth stage (or BW) on nutrient utilization [46], organ development [47], gut microbiota [48] and biochemical indices such as enzyme activities [49] of pigs, indicating the complex physiological status in different growth stages. The MR models assumed a stable relationship (whether linear or non-linear) between the variables over the whole growth period, which is a rigid assumption that may be at odds with dynamic real conditions. As a result, the MR models could not fully capture the highly complex relations between growth traits and other indicators [50].
Fig. 6 The MAE of MR and ANN models in predicting ADG (a) and F/G (b) in different growth stages. The MAE was calculated using the predicted values and observed values in the validation data set (animal trial). * represents a significant difference between MR models and ANN models. # represents a significant effect of growth stage on the MAE of the prediction models.
Instead, an ANN is more
capable of mimicking the dynamic patterns between variables and is more appropriate in this situation [51]. This may explain why the ANN models were less affected by growth stage in prediction performance compared with the MR models in the current study, especially given the greater MAE of the MR models in the later growth stages. Therefore, the use of MR models as a predictive tool is suggested only within a small BW range, e.g., a span below 30 kg according to the results of this study.
Conclusion
Taken together, the accuracy of ANN models in predicting the growth performance of growing-finishing pigs was investigated in this study, and the results confirmed the hypothesis that BW, NE intake and SID Lys intake can be used as input variables to predict the growth performance of pigs with high accuracy. Moreover, on the testing and validation data sets, the carefully trained ANN models proved more flexible and accurate in ADG and F/G prediction than the MR models. In addition, compared with the MR models, the ANN models were less affected by growth stage. Therefore, it is promising to use ANN models in related swine nutrition studies in the future.
Additional file 1: Table S1. The information of the papers used in this study. Table S2. The statistic information of training data set and testing data set. Table S3. Ingredients and nutrient compositions of the experimental diets in the animal trial (as-fed basis). Table S4. The validation sample obtained by the animal trial.
Oral Health Knowledge, Awareness and Associated Practices of Pre-school Children’s Parents in Damascus, Syria: a cross-sectional study.
Background: The oral health hygiene and practices of pre-school children depend on the knowledge, awareness, and attitude of their parents. Parental education level, family background and family size play an important role in adopting oral hygiene practices. Also, oral health behaviors vary between boys and girls, and it is generally believed that girls are better at taking care of their oral hygiene than boys. This cross-sectional study aimed to assess the oral health hygiene and practices of pre-school children (4-6 years old) and their correlation with the parents' education level, the child's gender and the child's birth order among siblings. Methods: A survey was conducted randomly among 270 parents of the Damascus population. Access to the parents in the target age group was achieved through face-to-face interaction (14 parents), online (87 parents) and two different kindergartens in two different social areas (169 parents). A set of 17 questions was formulated, and the questionnaire was distributed. A comparison of the answers from the collected data was made in SPSS 24 using Chi-Square tests. Results: Chi-Square tests showed the important role of parental education level and its association with regular dental visits (9.3%), temporary teeth treatment (48.1%), no early extracted teeth due to caries (48.5%) and no current caries (35.2%). On the other hand, there was no difference between child gender or order and daily oral hygiene practices. Conclusion: This study highlights the role of parental education level in the quality of a child's oral hygiene practices. Although some parents were aware of the importance of temporary teeth treatment and of preventing caries through regular dental visits, they were not aware of some deleterious oral habits.
Background
The dental health of preschool children has extensive implications on the oral health of the individual as he grows into an adult. Parents/guardians of preschool children play a central role in enforcing proper oral hygiene and preventive regime in these children.
The oral health care provided by the parents to pre-school children is of crucial importance here, as this determines not only the current oral health status of the child but also lays the backbone of the attitudes and practices that a child adopts at this age and carries over into his or her adulthood. [1] Improvement in children's oral health depends on parents' awareness and knowledge. It is essential to start basic good oral health habits from childhood so that the important dental norms are formed and then maintained into the future.
[2] Family background plays an important role in adopting oral hygiene practices. [3] Oral health behaviors vary between boys and girls, and it is generally believed that girls have better children's oral health behaviors (COHB) than boys. [4,5] Dental caries is a multifactorial disease, with many risk factors contributing to its initiation and progression.
Early childhood caries (ECC) is a term used to describe dental caries in children below 6 years of age. Oral streptococci are considered to be the main etiological agent of tooth decay in children. The risk factors can be categorized as biological, environmental or socio-behavioral [6]. In preschoolers, high consumption of sucrose, sweet drinks, high sugar intake between meals, and frequent snacking have all been associated with dental caries [7,8]. Additionally, the quality of a child's oral hygiene practices and the parents' ability to withhold cariogenic snacks are also factors associated with dental caries [6,9]. Some studies have found an association between tooth-brushing and lower caries prevalence, although the findings are inconsistent. [8,10,11] Moreover, socioeconomic factors such as income, education level and family size impact disease prevalence. [12,13] Primary teeth play an important role in basic life functions such as speech, phonetics, and eating. The management of deciduous teeth is not considered a primary concern in most of the population. An increasingly serious problem nowadays is the rise in caries risk referred to as early childhood caries. Further, the treatment of primary teeth is not considered important, as it is believed that primary teeth will shed as the child grows, without having an effect on the permanent dentition.
Dental caries in primary teeth is increasing rapidly compared to that seen in permanent teeth. In early childhood caries, there is an aggressive spread of dental caries, most commonly affecting the upper anterior teeth as they are the first teeth to erupt. An increase in dental caries increases the pain experienced by the child and eventually decreases the intake of food, thereby decreasing essential minerals and vitamins and leading to malnutrition. [14] Early childhood caries is commonly seen in children bottle-fed overnight because of the increased sucrose content attacking the tooth surface, thereby increasing caries incidence.
Loss of anterior teeth in children eventually impairs speech development and leads to a lack of confidence due to peer influence. The space present in the primary dentition due to early extraction must be maintained to prevent unorganized eruption of the permanent teeth. Improper maintenance of space in the primary dentition leads to a lack of space for eruption of the permanent teeth, leading to crowding and impaction.
Poor oral habits include a wide spectrum of habits including thumb sucking, finger sucking, blanket sucking, tongue sucking, soother/pacifier use, lip sucking, lip licking, mouth breathing, and nail biting, among others.
These habits can alter the normal muscle balance in the face, resulting in an orofacial myofunctional disorder, which can have a negative impact on facial growth. Thumb sucking is the most recognized oral habit that is widely understood to negatively affect the growth of the jaws and the teeth. When the thumb is in the mouth, it displaces the tongue so it is not resting fully in the palate. Lip sucking or lip biting can also affect the upper anterior teeth, causing proclination. Bruxism is a sleep disorder wherein the child grinds his teeth during sleep. Habitual mouth breathing is when the child has a habit of breathing through the mouth instead of the nose. This could be due to three reasons - obstruction in the nasal passage, habit, or anatomy. These habits require an orthodontic treatment approach, with the use of habit-breaking appliances. Finger sucking, blanket sucking, soother use, mouth breathing and other poor oral habits can displace the tongue from its normal resting position in the palate, with associated negative effects on facial growth. This cross-sectional study aimed to assess the oral health hygiene and practices of pre-school children (4-6 years old) and their correlation with the parents' education level, the child's gender and the child's order among his/her siblings. All data generated or analyzed during this study are included in this published article.
Results
General Study Results: Results showed that 150 (55.6%) of the parents were educated and 120 (44.4%) were not. 137 (50.7%) of the children were males and 133 (49.3%) were females. 122 (41.5%) of the children were the oldest among their brothers and sisters, whereas 39 (14.4%) were middle children and 119 (44.1%) were the youngest. (Table 1)
It was found that the daily tooth-brushing frequency was zero in 4.8% of the children; 62.2% of them brushed their teeth just once a day, 31.5% brushed twice a day and only 1.5% brushed three times a day. The tooth-brushing duration was one minute in 38.1%, 2 minutes in 35.9%, 3 minutes in only 13.3% and less than one minute in 12.6%. The toothpaste amount used was rice-sized in only 15.2%, pea-sized in 75.9%, less than rice-sized in 2.6% and more than pea-sized in 6.3%. The results showed that the period of toothbrush use before changing it was 6 months in 57.0%, one year in 17.0%, less than 6 months in 21.1% and more than one year in 4.8%. (Table 2)
Dental visits were regular in only 13.3%, whereas 62.2% visited only when the child complained of pain, and 24.4% had never been taken to a dentist. Only 9.6% of the parents thought that treatment of primary teeth is unimportant, whereas 90.4% thought that dental treatment in children is necessary. When the parents were asked about the way of handling any orthodontic problems, 45% of them suggested that they would wait until the permanent dentition, whereas 54.1% suggested that they would seek an orthodontic consultation. Only 9.639% of children had had local fluoride applied to their teeth, whereas 90.370% had not. (Table 3)
The tests showed a statistically significant difference between parental education level and dental visits, temporary teeth treatment, early extracted teeth due to caries, and current caries. (Table 6) Dental visits were more regular among children with educated parents, while taking children to the dentist only when they complain of pain was more common among uneducated parents. More children with educated parents had never been taken to the dentist before. (Table 6) More educated parents think that the treatment of temporary teeth is important, whereas more uneducated parents think that it is not important. (Table 6) Children with educated parents had fewer teeth extracted due to caries than children with uneducated parents. (Table 6) Children with educated parents also had fewer caries than children with uneducated parents. (Table 6) Statistically, there was no significant difference between parental education level and daily brushing frequency, brushing duration, toothpaste amount, toothbrush changing period, application of local fluoride, the way of handling orthodontic problems, or sugar and soda intake. (Table 7) Fisher's exact test was used as 25% of cells had an expected count less than 5.
-Child Gender: Statistically, there was no significant difference between child gender and daily brushing frequency, brushing duration, toothpaste amount, toothbrush changing period, dental visits, the perceived importance of temporary teeth treatment, local fluoride application, the way of handling orthodontic problems, sugar and soda intake, the number of extracted teeth due to caries, or current caries. (Table 8) Fisher's exact test was used as 25% of cells had an expected count less than 5.
-Child's Order: Statistically, there was no significant difference between child order and the rest of the data (daily brushing frequency, brushing duration, toothpaste amount, toothbrush changing period, dental visits, the perceived importance of temporary teeth treatment, local fluoride application, the way of handling orthodontic problems, sugar and soda intake, the number of extracted teeth due to caries and current caries). (Table 9) Fisher's exact test was used as 25% of cells had an expected count less than 5.
Research methodology:
A cross-sectional study was carried out to examine the role of parental education level, child gender and birth order in the daily oral hygiene practices of pre-school children in Damascus, Syria.
Sampling:
The data for this cross-sectional study were collected in March 2020. The target population was parents having children between 4-6 years of age. Written parental consent was obtained for this study. This age group was chosen because children depend on their parents to have good dental health practices. At this age, it is considered the parents' responsibility to establish these practices as habits in their children.
Survey's Questions:
A survey (Supplementary file 1) was conducted randomly among 270 parents of the Damascus population. Access to the parents in the target age group was achieved through face-to-face interaction (14 parents), online (87 parents) and two different kindergartens in two different social areas (169 parents). A set of 17 questions was formulated, and the questionnaire was distributed. The questionnaire collected information about demographic characteristics (e.g., parental education level, child gender and order) and dental health practices (e.g., details about tooth-brushing duration and frequency, toothpaste amount used and toothbrush changing period). The questionnaire also asked about eating habits (e.g., sugar and soda intake frequency), deleterious oral habits (e.g., thumb sucking, nail and lip biting, mouth breathing, bruxism, tongue thrusting…), dental visits, current caries, the way of handling orthodontic problems and early tooth extraction due to caries.
Statistical analysis:
The information was collected and recorded. A comparison of the answers from the collected data was made in SPSS 24 using Chi-Square tests.
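For readers without SPSS, the same test is available in scipy; the sketch below builds one education-by-dental-visit table whose margins match the percentages reported in this paper (150/120 parents; 13.3%/62.2%/24.4% visit patterns), although the individual cell counts are invented for illustration.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: educated / uneducated parents. Columns: regular visit / only on pain / never.
table = np.array([[26, 88, 36],
                  [10, 80, 30]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.4f}")

# The paper fell back on Fisher's exact test when >25% of expected counts were
# below 5; scipy's fisher_exact covers the 2x2 case.
print("share of cells with expected count < 5:", (expected < 5).mean())
```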
Discussion
The child's parents/guardians play an important role in the development of healthy habits in children and in sustaining them during the child's transition into adulthood. [15] Improvement in children's oral health depends on parents' awareness, knowledge and education level. It is essential to start basic good oral health habits from childhood so that the important dental norms are formed and then maintained into the future. Family background plays an important role in adopting oral hygiene practices.
[16] The family is the first institution that influences child behavior and development, especially the mother, who is the primary role model for developing behavior.
A significant association was observed in this study between the scores of knowledge, attitude and practice and the educational level of the participants, which was in accordance with the studies conducted by Jain et al., Suresh et al. [17] and Williams et al. [18], wherein it was shown that parents with a lower education level also had poor dental knowledge and attitude. Probably, the parents with a higher education level had better knowledge regarding the oral health care of their children, which resulted in favorable attitudes and the adoption of better practices to render oral health care to their child. [19] A randomized controlled trial done in the UK showed that visits by mothers of pre-school children at risk of caries to a trained dental educator (dentist) increased parental knowledge and improved attitudes toward the dental health of their offspring. [20] The factors responsible for irregular visits and follow-up could vary depending on financial status, fear, and lack of awareness and motivation. [19] Despite the fact that the majority of parents were aware that regular dental check-ups are necessary, only 13.3% of parents took their child for a dental visit every 6 months.
The family's influence on COHB varies between girls and boys [4,5]; however, conclusions regarding these differences are inconsistent among existing studies. In this study, there was no difference between boys and girls.
Conclusion
From this study, it can be concluded that parental education level plays the most important role in the quality of their children's oral hygiene practices.
Most parents do not have the required knowledge about correct oral hygiene practices, such as the daily brushing frequency, the brushing duration, and the right amount of toothpaste that should be used. Most parents do not know the importance of applying local fluoride to their children's teeth, which can be done during regular dental check-ups.
On the other hand, some of them were aware of the harmful consequences of deleterious oral habits that can cause orthodontic problems, of eating habits, of the importance of temporary teeth treatment and of the correct interval for changing their children's toothbrush.
Nowadays, there is no differentiation between siblings, whether they are boys or girls, or the oldest or the youngest. All children receive the same dental care from their parents and the same oral hygiene practices. What makes a real difference in children's oral hygiene practices is their parents' education level. The more educated they are, the more they take care of details such as caries, fluoride application and regular dental check-ups, which reduces the number of teeth extracted due to caries and the orthodontic problems that follow.
Figure 1 The frequency of deleterious oral habits.
Supplementary Files
This is a list of supplementary files associated with this preprint.
Relevance of DNA barcodes for biomonitoring of freshwater animals
The COI gene, colloquially named the DNA barcode, is a universal marker for species identification in the animal kingdom. Nevertheless, due to the taxonomic impediment, there are various proposals for molecular operational taxonomic units (MOTUs), because high-throughput sequencers can generate millions of sequences in one run. In the case of freshwater systems, it is possible to analyze whole communities through their DNA using only water or sediment as a sample. Using DNA barcodes with these technologies is known as metabarcoding. More than 90% of studies based on eDNA work with MOTUs without previous knowledge of the biodiversity in the habitat. Despite this problem, it has been proposed as the future of biomonitoring. All these studies are biased toward the Global North and focused on freshwater macrofauna. Few studies include other regions of the world or other communities, such as zooplankton and phytoplankton. The future of biomonitoring should be based on a standardized gene, for example COI, the most studied gene in animals, or another consensus secondary gene. Here, we analyze some proposals based on 28S or 12S. Studies on eDNA can focus on analyses of the whole community or of a particular species. The latter can be an endangered or an exotic species. Any eDNA study focused on a community should have a well-documented DNA baseline linked to vouchered specimens. Otherwise, it will be tough to discriminate between false positives and negatives. Biomonitoring routines based on eDNA can detect a change in a community due to any perturbation of the aquatic ecosystem. They can also track changes throughout the history of an epicontinental environment through the analysis of sediments. However, their implementation will be complex in most megadiverse Neotropical countries due to the lack of these baselines. It has been demonstrated that a rapid functional construction of a DNA baseline is possible, although the curation of the species can take more time. However, there is a lack of governmental interest in this kind of research and the subsequent biomonitoring.
Introduction
Since the proposal by Hebert et al. (2003), DNA barcodes have become a hot topic with many controversies of a philosophical background (Ebach and de Carvalho, 2010). Their failure to discriminate species has also been reported, especially in plants and fungi, where the proposed markers (Hollingsworth et al., 2009; Schoch et al., 2012) have many limitations. Proposals involving other markers have also been made (Heeger et al., 2019; Liu et al., 2022).
Nevertheless, studies incorporating DNA barcodes have increased steadily since their inception, to nearly 1400 in the year 2020, despite predictions by some of their demise (Taylor and Harris, 2012). Elías-Gutiérrez et al. (2021) noted that progress has not been the same for aquatic organisms, and even less so in freshwaters, with no more than 90 publications in that year. Most probably, these limited results are due to problems amplifying the proposed standardized gene for DNA barcodes, the first half of cytochrome c oxidase I (COI or COX1), mainly in invertebrates. This methodological problem led to the proposal of alternative genes, reviewed in the following paragraphs. Today, we can say that there are no severe limitations to amplifying the COI gene in almost any freshwater specimen of any group if we correctly apply the protocols proposed by Elías-Gutiérrez et al. (2018), the zooplankton (Zplk) primers developed by Prosser et al. (2013), or other more specific primers. This result is reflected in the 871 diaptomids, 1381 cyclopoids, 2077 anomopods, and 211 ctenopods, among other freshwater zooplankters (in a broad sense) such as mites or ostracods, already barcoded from Mexico (see the Taxonomy Browser available on BOLD: boldsystems.org). Currently, in the case of the zooplankton covered by these results, we are working on full descriptions of the unknown species highlighted after DNA barcoding. Several researchers consider the construction of these public databases an example to follow (Makino et al., 2017).
Moreover, three recent reviews highlight some of the significant tendencies of eDNA and metabarcoding studies (Pawlowski et al., 2022; Schenekar, 2022; Yao et al., 2022). They focused only on benthos, macroinvertebrates, and fish, and we discuss them later.
This review aims to evaluate the DNA barcoding of aquatic life and current trends in metabarcoding with remarks on some limitations we have seen in developing, implementing, and applying these methods.
We also want to remark on its relevance in megadiverse countries, where the funds for science are limited.
Methods
We consulted the Web of Science (https://www.webofscience.com) on different dates in September 2022 using the search strings "eDNA" AND "metabarcoding" AND "freshwater." These combinations were used to construct Figure 1.
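For transparency, a minimal sketch of how such counts can be tallied from a query export is given below; the file name and the year field "PY" follow the standard Web of Science tab-delimited export format, and the plot merely mimics the style of Figure 1 rather than reproducing its data.

```python
# Illustrative sketch: tally records per publication year from a
# Web of Science export. "savedrecs.txt" and the year field "PY"
# follow the standard tab-delimited export format; adjust as needed.
import matplotlib.pyplot as plt
import pandas as pd

records = pd.read_csv("savedrecs.txt", sep="\t", dtype=str)
per_year = records["PY"].value_counts().sort_index()
per_year.plot(kind="bar", xlabel="Year", ylabel="Publications")
plt.tight_layout()
plt.show()
```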
Each hit was analyzed, and the most relevant are cited in the following paragraphs. Our criteria are summarized in the following sections of this review. However, we do not attempt an exhaustive assessment of the available literature. For better readability, the review is divided into four sections and a conclusion, covering the main topics set out in the objectives of this work.
Metabarcoding, biomonitoring, and eDNA
After DNA barcoding, metabarcoding of environmental DNA (eDNA) has been one of the most common applications developed. The word metabarcoding was first proposed in 2011 by Pompanon et al. (2011). It refers to using DNA to identify many taxa within a sample, revealing the species composition. This term can be associated with biomonitoring, understood as measuring the diversity or presence of live organisms with the primary goal of detecting changes or differences in any ecosystem (Yu et al., 2012). These changes can be natural in origin, such as seasonal or long-term changes in the environment, or reflect any perturbation or stress, such as pollution or the presence of exotic species, or serve to compare two localities. Ogram et al. (1987) proposed the term environmental DNA for the first time when working with microbial DNA from sediments collected near Pensacola, Florida, and Knoxville, Tennessee. Its first uses were in microbiology studies; the term was recently revived by Taberlet et al. (2012) and Dejean et al. (2011). It refers to DNA obtained from environmental samples, such as water or sediments. However, it is not restricted to aquatic ecosystems, because eDNA can also be obtained from the air (eDNAir) (Clare et al., 2021), soil, or any other substrate where the flora or fauna can leave traces of their DNA (Kyle et al., 2022). It is worth mentioning that in May 2019 a new journal was devoted to this field of research: Environmental DNA (ISSN: 2637-4943), which has yet to be indexed by Clarivate.
These terms can be combined in eDNA metabarcoding, a recent proposal for biomonitoring any epicontinental or marine ecosystem (or terrestrial). For aquatic environments, among the first uses of this term was in the detection of the diversity of marine fish fauna using a small fragment (<100 bp) of the cytb gene in a region named The Sound of Elsinore, Denmark, by Thomsen et al. (2012). The authors used cytb because, at that time, it had the best coverage of the local fish fauna.
Today, the most sequenced gene for all aquatic life is the first half of the mitochondrial cytochrome c oxidase I gene, totaling 14,525,551 animal specimens (Barcode of Life Data System, BOLD). Due to difficulties amplifying it in aquatic life, mainly crustaceans, some authors proposed other markers as DNA barcodes, such as 28S (Hirai et al., 2013). However, their use lowers the accuracy of species identifications compared with COI (Elías-Gutiérrez et al.). The latter gene is not perfect either, and some young aquatic species are not discriminated by it, as occurs with the Characidae fish from Mexico (Valdez-Moreno et al., 2009).
A simple comparison of the development of libraries can be made in GenBank: the search term "cytb Actinopterygii" returns 154,517 hits, whereas "COI Actinopterygii" returns 200,117 hits, and the BOLD database provides 293,659 public records, with a total (including those not yet public) of 399,462. For predominantly freshwater animals, such as the Anomopoda, cytb provides 75 hits vs. 4611 for COI in GenBank.
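These counts can be retrieved programmatically. The following minimal sketch queries NCBI's E-utilities through Biopython's Entrez module; the email address is a placeholder required by NCBI policy, and the counts will naturally drift as the databases grow.

```python
# Minimal sketch: query GenBank hit counts for the search terms above.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # required by NCBI; use your own

for term in ("cytb Actinopterygii", "COI Actinopterygii",
             "cytb Anomopoda", "COI Anomopoda"):
    handle = Entrez.esearch(db="nucleotide", term=term, retmax=0)
    result = Entrez.read(handle)
    handle.close()
    print(f"{term}: {result['Count']} hits")
```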
Nevertheless, in the case of some groups such as fish, 12S outperforms COI for eDNA, which has been attributed to the available primers. There are recommendations to work more on this gene (Weigand et al., 2019) because COI already covers 87.9% of the freshwater fish fauna with at least one sequence, while 12S covers only 36.4%. Another recent effort for Neotropical fish included sequencing it for 67 species from Brazil (Milan et al., 2020). Moreover, Shogren et al. (2018) found that longer DNA fragments degrade more rapidly in the environment than shorter ones. Some of these problems will be overcome once more standardized protocols arrive.
A problem with using ribosomal mtDNA genes such as 12S is the failure to discriminate pseudogenes (known as NUMTs). This topic has been little studied, but in humans the recovery of undiagnosable NUMTs has been demonstrated (Olson and Yoder, 2002). There is no study comparing the performance of COI vs. 12S on a broad scale.
Although some libraries are being developed, they host material, in this case fish, from regions of limited biodiversity. By comparison, in the Neotropics, for example in the middle Amazon Basin near Leticia (Colombia), Galvis et al. (2006) registered 344 fish species in just 40 km².
An additional advantage of using COI as the primary marker for metabarcoding is that a small fragment of as little as 109 bp provides a reliable identification in most species (Hajibabaei et al., 2006). However, the accuracy will depend on which part of the standard 650 bp region is amplified. With these ideas, many proposals arose to obtain faster sequencing results, from Sanger sequencing to new developments such as the latest generation of MinION cells, involving thousands of specimens and providing up to 658 bp (Srivathsan et al., 2021).
Taxonomic impediment
Based on the previous paragraphs, we can say that, currently, metabarcoding-based biomonitoring should be centered mainly on the COI gene, although an alternative marker is sometimes needed and there is no consensus on a second universal marker. Moreover, while it is now technologically easy to obtain thousands of sequences, the taxonomic impediment remains the major problem, and it is more marked for invertebrates (Coleman, 2015). In other words, technological developments are surpassing our ability to identify species.
There have been many proposals to "speed" up species discovery to overcome this problem. Among them, Sharkey et al. (2021) and Meierotto et al. (2019) proposed some minimalist approaches, although they are not exempt from controversy (Zamani et al., 2022). These discussions have focused on insects. In the case of aquatic life, it is not possible to use these "modern" minimalist proxies because many species are cryptic (García-Morales and Elías-Gutiérrez, 2013; Elías-Gutiérrez et al., 2019). Their description requires a more integrative approach, as proposed by Andrade-Sossa et al. (2020) or García-Morales et al. (2021).
Another way to overcome the taxonomic impediment has been the elaboration of different mathematical algorithms to distinguish molecular operational taxonomic units (MOTUs) that could correspond to species. There are many ways to calculate these MOTUs; one of the most used is the Barcode Index Number (BIN) system proposed by Ratnasingham and Hebert (2013). However, these clusters always require additional evidence to be supported, and they can change as knowledge accumulates. Others have proposed taxonomy-free indexes (Apotheloz-Perret-Gentil et al., 2017). However, little congruence has been observed when these methods are compared with morphology-based methods at the species level in tropical environments (Kutty et al., 2022).
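To make the idea concrete, the sketch below assigns MOTUs by single-linkage clustering of aligned COI sequences at a fixed p-distance threshold (2% is often quoted for COI). This is a didactic toy, not the RESL algorithm behind BINs; the sequences and the loose threshold in the example are invented.

```python
# Toy sketch: threshold-based MOTU assignment by single-linkage clustering.
from itertools import combinations

def p_distance(a: str, b: str) -> float:
    """Proportion of differing sites between two equal-length aligned sequences."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def assign_motus(seqs: dict[str, str], threshold: float = 0.02) -> list[set[str]]:
    """Merge clusters linked by any pair of sequences below the threshold."""
    clusters = [{name} for name in seqs]
    for a, b in combinations(seqs, 2):
        if p_distance(seqs[a], seqs[b]) < threshold:
            ca = next(c for c in clusters if a in c)
            cb = next(c for c in clusters if b in c)
            if ca is not cb:
                ca |= cb
                clusters.remove(cb)
    return clusters

# 8-bp "sequences" are far too short for real use, so a loose threshold
# is used purely to show the mechanics.
toy = {"sp1_a": "ACGTACGT", "sp1_b": "ACGTACGA", "sp2_a": "TTTTACGG"}
print(assign_motus(toy, threshold=0.15))  # two MOTUs: {sp1_a, sp1_b} and {sp2_a}
```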
In our group's studies on zooplankton, we faced this problem when using new, non-conventional collection methods such as light traps (Montes-Ortiz and Elías-Gutiérrez, 2018). The number of zooplanktonic species detected increased dramatically, including many non-traditional zooplankters such as Acari, chironomids, chaoborids, or ostracods. As a result, we are facing a fascinating new world of species that we consider "zooplankton in a broad sense." All these animals interact and have a role within this community, as we demonstrated with a mite preying on Bosmina tubicen, a strictly planktonic cladoceran (Fisher et al., 2017). Our proposal is the construction of "rapid baselines," recovering specimens when possible for later description (Montes-Ortiz et al., 2022). If the DNA extraction destroys the whole specimen, we deposit parallel vouchers in a biorepository. All material should be uploaded to a public database such as BOLD. Later, these records will allow biomonitoring, as proposed by Valdez-Moreno et al. (2021), who compared their eDNA data from the tropical oligotrophic Lake Bacalar against a dataset of 3534 specimens representing 519 species of fish from Mexico. However, some doubtful records (false positives) appeared, which we discuss later.
Accordingly, we should know the species dwelling in each freshwater system, so that eDNA metabarcoding results can be compared against this baseline for biomonitoring.
Finally, the only way to speed up the process of species description in aquatic environments is to train more specialists in aquatic biodiversity, especially invertebrates, and to convince society of the importance of this work.
Nevertheless, Figure 1 shows that interest in metabarcoding is increasing much more rapidly than interest in barcoding studies. Most barcoding studies form the basis for metabarcoding work, which is discussed in the next section.
False positives and/or false negatives
If the biodiversity of a freshwater system is unknown, we cannot determine whether the sequences obtained using eDNA techniques are false positives, or whether false negatives exist. The latter correspond to species that are present in the ecosystem but not detected by these methods. A simple illustration of this lack of knowledge of biodiversity is the front page of BOLD, which shows 807,000 BINs but only 244,000 animal species (in addition to 72,000 plants and 24,000 fungi): far less than half of the putative species have a scientific name.
DNA of false positives can sometimes be physically present in the aquatic environment due to different factors. For example, Valdez-Moreno et al. (2019) found a marine fish, Lachnolaimus maximus, near Lake Bacalar, far away from its typical habitat, the Mesoamerican Reef. A field survey explained its presence: remains of this fish were thrown from restaurants into the water. In many cases, the false positives are not as evident as the presence of a strict marine species in a freshwater ecosystem.
More problematic is the detection of false negatives, because such species can be rare or occasional in the surveyed environments. Addressing this requires significant field effort, replicate samples, and larger water volumes. These points are easy to state, but their implementation can be challenging: for example, depending on suspended sediments, filters can clog rapidly. The choice of primers can also be challenging (Polanco-Fernandez et al., 2021). In particular, it is essential to consider primer bias, the so-called amplification bias, which mostly affects universal primers in community studies and reduces the ability to make quantitative inferences about taxon counts (Bruce et al., 2021).
Current trends
Independently of the focus of the study, eDNA studies in aquatic environments for biomonitoring follow two main routes: studies based on analyses of the whole community, the so-called metabarcoding, and studies focused on the search for a particular species, which could be endangered, an introduced exotic, or commercially valuable. The latter studies can be based on quantitative PCR or digital droplet PCR. Among the applications, these methods can be used to follow an invasion by exotic species (Takahara et al., 2013) or to detect aspects of a species' biology, such as the spawning season (Bylemans et al., 2017). In these cases, any specific marker can be used instead of the DNA barcodes.
Although most studies focus on method development, two recent reviews are devoted to analyzing all the published information about eDNA in aquatic environments.
Schenekar's (2022) assessment of 381 eDNA-focused studies in freshwaters was limited to macro-organisms. It showed an increase in biomonitoring studies (64.8% of the total) and a decline in purely ecological works (19.9%). The growth of the field was exponential, and most studies (88.5%) were conducted in the so-called Global North (North America, Europe, and Asia). However, metabarcoding studies, in which the authors rely only upon MOTUs, were only 36.5% of the total; most studies were based on qPCR (55.1%) and mainly targeted fishes, which can be identified in this way if their marker is already known.
Fish are the aquatic group with the most DNA barcoding studies. Yao et al. (2022) analyzed all publications involving fish and eDNA: a total of 416 studies were found (from marine to freshwater), ranging from biomonitoring to ecological interactions.
A novel development has been the analysis of lake sediments, where some species' colonization patterns can be followed through time (Olajos et al., 2018). However, false positives/negatives are still an issue. Nevertheless, biomonitoring change through sediments is promising for paleoecology (Capo et al., 2021). A recent review of methods, protocols, and recommendations for standardization was made by Pawlowski et al. (2022).
There are only a few works on other critical freshwater communities, such as the zooplankton, with no more than 10 hits on the Web of Science (using the search strings "eDNA" AND "freshwater" AND "zooplankton"). The first study published was an analysis of spatial and temporal dynamics with eDNA, based on the 18S rRNA gene, in Harsha Lake (Ohio, USA) (Banerji et al., 2018). Although the authors found 1,314 unique MOTUs, it is impossible to elucidate the presence of false positives/negatives due to the lack of a baseline. The same situation was faced by Qiu et al. (2022) in Poyang Lake (China). Xie et al. (2021), working in the Daqing River Basin (China), named 15 zooplankton species using the BLAST algorithm in GenBank; several of these names were misidentifications. Yang and Zhang (2020) used the zooplankton to assess Lake Taihu and its surroundings, and MOTUs were assigned using GenBank and their own database (Yang et al., 2017). Although the authors used a Bayesian tool to assign the taxonomic groups found (Munch et al., 2008), the taxonomic impediment is present (see Figure 2 in Yang et al., 2017).
As a workflow, we summarize our proposal and the preceding analyses of metabarcoding and DNA barcoding in Figure 2. As shown there, we consider it crucial to have a baseline for studies based on a community such as zooplankton, nekton, or benthos. It is also essential to consider the type of freshwater system to be studied (Bruce et al., 2021).
The situation in megadiverse countries

A significant problem in megadiverse countries is not only the complexity of developing methods and baselines for any group of aquatic life, owing to different environmental conditions and more complex biotic interactions; the development of science itself is also compromised by political factors and budget cuts. For example, the two leading countries in Latin America, and among the most biodiverse in the world, Brazil (first place) and Mexico (fifth place), recently suffered severe cuts to science funding (Elías-Gutiérrez et al., 2017; Lazcano, 2019; Thomaz et al., 2020; Kowaltowski, 2021; Quiroga-Garza et al., 2022), compromising not only the research itself but also its communication in open access journals (Smith et al., 2022). Our research on this topic (Valdez-Moreno et al., 2019) stopped due to the lack of funds and the prohibitions imposed by the government on purchasing any equipment with resources obtained from foundations other than the governmental Mexican Council of Science and Technology (CONACYT). These problems and the loss of biodiversity should be a priority. Instead, unsustainable policies (Overbeck et al., 2018; Ortega and Jaber, 2022) and the lack of interest of governments have caused a significant tragedy and an irrecoverable loss in aquatic and terrestrial environments (Pelicice et al., 2017; Rico-Sanchez et al., 2020), leading to global consequences (Overbeck et al., 2018; Thomaz et al., 2020). Nevertheless, some of these countries have a firm (but small) scientific community (Aguado-Lopez and Becerril-García, 2021) with the ability to work on these topics. We urge international pressure to overcome this situation, which knows no physical frontiers or barriers because it affects the entire world. An example is the formation of the Atlantic Sargassum belt due, among other factors, to the nutrient discharges of the Amazon River in recent years (Wang et al., 2019), which seriously affects all countries with an Atlantic coast, such as Mexico (Rodríguez-Martínez et al., 2022).
Conclusion
We can conclude that eDNA metabarcoding is a promising technique for biomonitoring all kinds of epicontinental waters.
FIGURE 2 Workflow for metabarcoding and eDNA studies on freshwater ecosystems. We consider it essential to construct a baseline in the case of community biomonitoring.
However, the development of baselines does not keep pace with advances in metabarcoding techniques, and there is still no standardization of methods for the latter (Mauvisseau et al., 2019). This lack of standardization arises because the primers, the persistence of DNA in the environment, filtering methods, and so on are still under development (Schenekar, 2022). As has been seen, the persistence of eDNA varies significantly among environments, depending on factors such as water temperature and salinity. Because of these problems, we propose mock experiments. However, many key variables in freshwater, such as ultraviolet light, pH, dissolved oxygen, and biotic effects, remain unstudied (Lamb et al., 2022). We believe all these methods need to be developed for each type of system and should be compared within it, not among systems (see Figure 2). At least at the current state of knowledge and technical development, we consider this approach the most feasible.
Author contributions
All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.
Expression Analysis and Mutational Status of Histone Methyltransferase KMT2D at Different Upper Tract Urothelial Carcinoma Locations
The gene coding for histone methyltransferase KMT2D is found among the top mutated genes in upper tract urothelial carcinoma (UTUC); however, there is a lack of data regarding its association with clinicopathologic features as well as survival outcomes. Therefore, we aimed to investigate KMT2D expression, mutation patterns, and their utility as prognostic biomarkers in patients with UTUC. A single-center study was conducted on tumor specimens from 51 patients treated with radical nephroureterectomy (RNU). Analysis of KMT2D protein expression was performed using immunohistochemistry (IHC). Customized next-generation sequencing (NGS) was used to assess alterations in KMT2D exons. Cox regression was used to assess the relationship of KMT2D protein expression and mutational status with survival outcomes. KMT2D expression was increased in patients with a previous history of bladder cancer (25% vs. 0%, p = 0.02). The NGS analysis of KMT2D exons in 27 UTUC tumors revealed a significant association between pathogenic KMT2D variants and tumor location (p = 0.02). Pathogenic KMT2D variants were predominantly found in patients with non-pelvic or multifocal tumors (60% vs. 14%), while the majority of patients with a pelvic tumor location (81% vs. 20%) did not harbor pathogenic KMT2D alterations. Both IHC and NGS analyses of KMT2D failed to detect a statistically significant association between KMT2D protein or KMT2D gene alteration status and clinical variables such as stage/grade of the disease or survival outcomes (all p > 0.05). KMT2D alterations and protein expression were associated with UTUC features such as multifocality, ureteral location, and previous bladder cancer. While KMT2D protein expression and KMT2D mutational status do not seem to have prognostic value in UTUC, they appear to add information to improve clinical decision-making regarding the type of therapy.
Introduction
Upper tract urothelial cell carcinoma (UTUC) is a rare disease, often with a poor prognosis [1]. Indeed, two thirds of all cases are already detected at advanced tumor stages [2,3]. For risk stratification, clinicians use established clinicopathological factors such as multifocality, tumor size, tumor grade, cytology, invasiveness on CT urography, previous bladder cancer, variant histology, and concomitant hydronephrosis [4][5][6][7]. This standardized risk stratification helps in the decision-making process between kidney-sparing therapy and radical nephroureterectomy (RNU) [4]. However, current risk models and diagnostic approaches do not capture the biologic and clinical behavior of UTUC accurately [8]. Insights into the molecular mechanisms underlying UTUC could provide a rationale for different treatment approaches as well as uncovering novel biomarkers for tailored therapy development [9].
It is believed that epigenetic changes affect carcinogenesis, and mutations in epigenetic modifiers frequently show cancer-specific alteration patterns [10]. One of the players among epigenetic modifiers, with an assumed role in urothelial cancer development, is the histone-lysine N-methyltransferase 2 (KMT2) family of histone methylases comprising KMT2A, KMT2B, KMT2C, and KMT2D [11,12]. Recent studies reported that alterations in the gene coding for histone methyltransferase KMT2D are early events of urothelial carcinogenesis [13] along with several other malignancies, including breast cancer, and pancreatic ductal and lung adenocarcinoma [14][15][16][17]. KMT2D is, moreover, among the top mutated genes in both upper and lower tract urothelial carcinoma [18,19]. Despite KMT2D's putative significant role in urothelial cancer development, there is a lack of data regarding its function and association with clinicopathologic features and survival in UTUC patients.
Therefore, we aimed to investigate KMT2D protein expression, mutation patterns, and prognostic value in UTUC patients using immunohistochemistry (IHC) and targeted next-generation sequencing (NGS).
Data Source and Patient Cohort
This retrospective single-center study included a consecutive cohort of 51 patients treated with RNU for UTUC at the Department of Urology of the Medical University of Vienna between 1993 and 2014. Lymphadenectomies were performed at the surgeons' discretion. Adjuvant chemotherapy was administered to patients at the clinicians' discretion based on tumor stage and overall health status. No patient received adjuvant radiotherapy.
Pathologic Review and Follow-Up
All surgical specimens were processed according to standard pathological procedures. Genitourinary pathologists assigned a tumor grade according to the 2004 WHO grading system. The pathological stage was reassigned according to the 2002 American Joint Committee on Cancer TNM staging system. Specimens from FFPE material for IHC and DNA isolation were obtained after approval from the institutional review board. Only FFPE tumor samples with the presence of more than 80% of tumor tissue per sample were used for staining and DNA isolation.
Clinical and radiological follow-ups were performed in accordance with institutional protocols and current guidelines. Routine follow-up usually included physical examination, radiological imaging, and urinary cytology at three and six months, and then yearly. Disease-free survival (DFS) time was calculated from the date of RNU to disease recurrence/progression or last follow-up. Cause of death was abstracted from medical charts and/or from death certificates [20]. Overall survival (OS) time was calculated from the date of RNU to death or last follow-up.
Immunohistochemistry
An analysis of KMT2D expression at the protein level was performed by IHC. Formalin-fixed, paraffin-embedded slides were processed according to standard methods for IHC. Heat-induced antigen retrieval was performed with citrate buffer (pH = 6). Staining was accomplished with a KMT2D antibody (1:500 dilution, abcam ab224156, Lot GR3254038-2) using the Histostain Plus Broad Spectrum Novex Protein Kit, Life Technology (Carlsbad, CA, USA) (Lot 1954379A), according to the manufacturer's instructions. AEC Single Solution, Life Technology (Lot 1936895A), was used for development, and Hematoxylin Solution, Merck (Whitehouse Station, NJ, USA) (Lot HX86017674), was used for counterstaining. The histoscore (H-score) and staining intensity were evaluated by an experienced uropathologist. Microscopy was performed on a Zeiss Axio at 20× magnification for normal and tumor areas, and a 10× objective was used for combined normal/tumor fields.
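As background for readers unfamiliar with this metric, a minimal sketch of the conventional H-score formula is shown below; the study does not spell out its scoring convention beyond the 0-300 range, so the weighting here is the standard one, and the example values are purely illustrative.

```python
# Illustrative sketch of the conventional H-score: sum over staining
# intensities (1 = weak, 2 = moderate, 3 = strong) of intensity times
# the percentage of cells at that intensity, giving a 0-300 range.
def h_score(pct_weak: float, pct_moderate: float, pct_strong: float) -> float:
    """pct_* are percentages (0-100) of cells stained at each intensity."""
    assert 0.0 <= pct_weak + pct_moderate + pct_strong <= 100.0
    return 1 * pct_weak + 2 * pct_moderate + 3 * pct_strong

# e.g. 20% weak + 20% moderate + 10% strong -> 20 + 40 + 30 = 90
print(h_score(20, 20, 10))
```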
Next-Generation Sequencing
Genomic DNA was purified from formalin-fixed, paraffin-embedded tissue using the GeneRead DNA kit. Data were processed on the Ion Torrent Server via Ion Reporter (Thermo Fisher, Waltham, MA, USA), Torrent Variant Caller 5.2. Variants were filtered and analyzed according to clinical standards in cooperation with the Institute of Pathology, Medical University of Vienna. Only mutations with an allele ratio of 2% to 100%, a minor allele frequency of at least 5%, and a read coverage of >200 were included. Furthermore, only missense, nonsense, stoploss, and frameshift mutations were further analyzed.
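The inclusion filters above can be stated compactly in code; the following sketch is illustrative only, and the field names are hypothetical rather than actual Ion Reporter output columns.

```python
# Illustrative sketch of the inclusion filters described above.
KEEP_TYPES = {"missense", "nonsense", "stoploss", "frameshift"}

def passes_filters(variant: dict) -> bool:
    return (0.02 <= variant["allele_ratio"] <= 1.00       # 2% to 100%
            and variant["minor_allele_freq"] >= 0.05      # MAF >= 5%
            and variant["coverage"] > 200                 # read coverage > 200
            and variant["mutation_type"] in KEEP_TYPES)

calls = [
    {"allele_ratio": 0.30, "minor_allele_freq": 0.12,
     "coverage": 540, "mutation_type": "missense"},
    {"allele_ratio": 0.01, "minor_allele_freq": 0.08,
     "coverage": 900, "mutation_type": "nonsense"},
]
print([v for v in calls if passes_filters(v)])  # keeps only the first call
```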
Within the Ion Reporter software, the annotation tools SIFT, PolyPhen-2, and ClinVar were implemented and directly analyzed. Mutations of interest were directly compared with the ClinVar database (https://www.ncbi.nlm.nih.gov/clinvar (accessed on 17 August 2021)). The PolyPhen-2 score was analyzed with the PolyPhen-2 prediction tool from Harvard University (http://genetics.bwh.harvard.edu/pph2/ (accessed on 17 August 2021)) [21]. Mutations with a PolyPhen-2 score of 0.0 to 0.15 were predicted to be benign, a score from 0.15 to 0.85 as possibly damaging, and a score of >0.85 as confidently damaging. The SIFT score within the Ion Reporter software was analyzed using the PROVEAN Genome Variants annotation tool provided by the J. Craig Venter Institute (JCVI) (http://provean.jcvi.org/index.php (accessed on 17 August 2021)) [22]. Mutations with a SIFT score between 0.0 and 0.05 were considered deleterious; mutations with a SIFT score between 0.05 and 1.0 were predicted to be tolerated (benign).
A scoring system combining SIFT and PolyPhen-2 scores with the ClinVar database was established, with the ClinVar database as the main reference. For mutations without any ClinVar database information, the SIFT and PolyPhen-2 scores were combined to determine potentially pathogenic mutations; at least one of these scores had to indicate pathogenicity for a mutation to be registered as potentially pathogenic. In cases of divergent results, the mutation was classified as uncertain.
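A minimal sketch of this decision logic is given below. The ClinVar precedence, the SIFT cutoff of 0.05, and the PolyPhen-2 bands follow the description above; the function signature and the mapping of "possibly damaging" to an uncertain call are our own simplifications.

```python
# Illustrative sketch of the combined variant scoring described above.
def classify_variant(clinvar, sift, polyphen2):
    if clinvar is not None:
        return clinvar  # ClinVar verdict takes precedence when available
    calls = []
    if sift is not None:
        calls.append("pathogenic" if sift <= 0.05 else "benign")
    if polyphen2 is not None:
        if polyphen2 > 0.85:
            calls.append("pathogenic")
        elif polyphen2 > 0.15:
            calls.append("uncertain")   # "possibly damaging"
        else:
            calls.append("benign")
    if not calls or len(set(calls)) > 1:
        return "uncertain"              # no data or divergent results
    return calls[0]

print(classify_variant(None, 0.01, 0.95))  # -> "pathogenic"
```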
Statistical Analysis
Categorical variables were reported as frequencies and proportions. Continuous variables were reported as medians or means with interquartile ranges (IQRs) or ranges. The correlation of KMT2D gene alteration and protein expression patterns with clinical data was statistically analyzed using Wilcoxon rank-sum and Fisher's exact tests, as appropriate. Cox regression analyses were used to assess the correlation of KMT2D protein expression and KMT2D mutational status with survival outcomes such as DFS and OS. Risk was expressed as the hazard ratio (HR) with 95% confidence interval (95% CI). Kaplan-Meier survival curves were used to depict the association between KMT2D expression or KMT2D gene alteration and survival. The log-rank test was used to determine the statistical difference between groups. All reported p-values were two-sided, and statistical significance was set at 0.05. All statistical analyses were performed using R Version 4.0.4.
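The analyses were run in R; for illustration, an equivalent workflow in Python with the lifelines package might look as follows, with hypothetical column names and toy values.

```python
# Illustrative sketch: Cox regression and log-rank test with lifelines.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months": [29, 12, 45, 7, 80, 22],     # time from RNU to event/censoring
    "event": [1, 1, 0, 1, 0, 0],           # 1 = event observed, 0 = censored
    "kmt2d_positive": [1, 0, 1, 1, 0, 0],  # IHC status (H-score > 0)
})

# Cox regression: hazard ratio (HR) with 95% confidence interval
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()

# Log-rank test between KMT2D-positive and KMT2D-negative groups
pos, neg = df[df.kmt2d_positive == 1], df[df.kmt2d_positive == 0]
print(logrank_test(pos.months, neg.months, pos.event, neg.event).p_value)
```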
Results
A total of 51 UTUC patients treated with RNU were included. The median age of the entire cohort was 73 years (IQR 60-79). The patient characteristics are shown in Table 1. A total of 13 out of the 51 cases received chemotherapy as an adjuvant or in a palliative setting. The median survival and duration of follow-up for consecutively recruited patients were 29 months (IQR 7.5-79.5). The median follow-up of patients alive was 22.5 months (IQR 8.0-70.5).
KMT2D Expression
The H-score from tumor tissue and normal urothelium could be assessed in 31 patients, while in 20 patients, only tumor tissue was available for evaluation (Figure 1). According to the IHC analysis, the median H-score in tumor tissue was 90 (range: 0-300). Tumors with an H-score of >0 were deemed KMT2D-positive. A total of 19 tumor specimens (37.3%) showed negative KMT2D expression, whereas 32 tumor specimens showed positive KMT2D expression (62.7%).
When looking at protein expression only in patient samples with both normal urothelium and tumor tissue available, we found a higher mean H-score in normal tissue compared to tumor tissue (127.3 vs. 74.5) in low-grade (LG) samples (Figure 2A). In high-grade (HG) samples (Figure 2B), there was no difference in nuclear expression between normal urothelium and malignant tissue (mean values: 46.5 vs. 47.5). For LG tumor samples, the nuclear expression of KMT2D was low in pTa but increased with stage. We did not observe similar patterns in HG tumors.
Significantly increased KMT2D expression was detected in patients with a previous history of bladder cancer (25% vs. 0%, p = 0.02). No significant association was observed between KMT2D expression and age, sex, pathological T stage, histological grade, tumor location and side, lymph node involvement, metastases, or site of recurrence (all p > 0.05).
KMT2D Alterations
To identify somatic mutations, NGS was successfully performed on 51 tumor samples from patients who had tumor-only and both tumor and normal tissues. A total of 22 samples were excluded due to the small number of reads, and 2 samples were excluded due to the presence of a high number of mutations (up to 100 mutations) per sample. From the remaining 27 patients, 5 specimens (18.5%) showed pathogenic KMT2D variants, whereas 22 specimens showed non-pathogenic KMT2D variants (81.5%). Overall, seven pathogenic/likely pathogenic variants were identified in our study. Detailed information on the validated variants with the ID of the pathogenic variants is presented in Figure S1. NGS analysis of UTUC tumors of different stages and grades revealed a significant association between the KMT2D variant and tumor location (p = 0.02). Pathogenic KMT2D variants were predominantly found in patients with non-pelvic or multifocal tumors (60% vs. 14%), while the majority of patients with a pelvic tumor location (81% vs. 20%) did not harbor KMT2D alterations. No significant association was noted in our NGS cohort with respect to the following clinicopathological features: age, sex, pathological T stage, histological grade, tumor side, lymph node involvement, metastases, site of recurrence, and previous history of bladder cancer (all p > 0.05).
Discussion
To our knowledge, this study is the first to investigate KMT2D expression together with its alterations in UTUC. Our results show that both KMT2D protein expression and KMT2D alterations were not significantly associated with clinical variables such as stage or grade of the disease. KMT2D alterations and expression also failed to emerge as a prognostic marker for UTUC. However, pathogenic KMT2D alterations and expression were associated with features of clinically aggressive UTUC, including multifocality, ureteral location, and previous bladder cancer.
The KMT2 family member KMT2D, which regulates the H3K4me methylation landscapes predominantly at enhancers, has been implicated in the development of cancer by dysregulation of enhancer activity and subsequent disruption of normal development programs [23][24][25][26]. In urothelial carcinoma, KMT2D has been found among the top mutated genes in several genomic characterization studies; it seems to be an early event in the pathogenesis of UTUC rather than a driver of disease progression [18,19,27,28]. Indeed, in our study, KMT2D alterations were not significantly associated with clinical variables such as stage or grade of the disease.
However, pathogenic KMT2D variants were predominantly found in patients with non-pelvic and multifocal tumors, while the majority of patients with a pelvic tumor location did not harbor KMT2D alterations. This finding is in agreement with previous studies reporting that KMT2C and KMT2D were more frequently altered in ureteral than in renal pelvic tumors [26]. Several studies have shown that the initial tumor location is a prognostic factor, as patients with ureteral and/or multifocal tumors seem to have a worse prognosis than patients diagnosed with renal pelvic tumors [29,30]. Additionally, according to our results, significantly increased KMT2D protein expression was detected in patients with a previous history of bladder cancer, which has been reported as a risk factor for bladder recurrence after RNU [31]. Furthermore, in LG disease, nuclear KMT2D protein expression was lower in tumors compared to adjacent normal urothelium, whereas in HG disease, reduced protein expression was observed both in tumor and adjacent normal urothelium, with no significant difference between them. We hypothesize that this reduced expression also found in normal urothelium may be a sign of a possible field effect in HG disease [13]. Thus, KMT2D alterations and expression were associated with features of biologically aggressive UTUC including multifocality, ureteral location, a cancer field effect, and previous bladder cancer. This association of reduced KMT2D protein expression and pathogenic alterations with aggressive clinicopathologic features could help physicians choose tailored perioperative treatment strategies with intensified therapy in those most likely to benefit from it.
In addition to the prediction of biologically aggressive disease, prognostication of survival outcomes also allows for personalized decision-making. According to our analyses, neither KMT2D protein expression nor mutational status was associated with survival outcomes. These findings might be due to the lack of statistical power of our analysis because of our limited sample size. However, data from a previous study comprising survival outcomes of 71 patients (22 with a pathogenic alteration in KMT2D) also do not show significantly different outcomes when stratified according to alteration status [27]. For further conclusions, KMT2D alteration and survival outcomes in UTUC have to be studied in large-scale cohorts.
Interestingly, the only feature significantly different between publicly available TCGA samples with and without pathogenic KMT2D alterations was the mutation count, indicating a higher mutational burden in samples with pathogenic KMT2D alterations. As mutational burden has been reported to be associated with better response to immune checkpoint inhibitor (ICI) therapy, this could mean that tumors with KMT2D alterations may be more likely to respond to ICI treatment. Indeed, it has been reported that the alteration status of KMT2 family members may serve as a potential predictor of favorable ICI response in multiple cancers [32,33]. Tumors with KMT2D mutations, as a major modulator of immune checkpoint blockade, were characterized by increased immune infiltration [34]. Future studies should assess this theory in the context of urothelial carcinoma patients receiving ICI therapies to evaluate whether KMT2D could serve as a predictive biomarker for ICI response.
The prevailing hypothesis is that alterations in KMT2 function contribute to carcinogenesis through the modification of histone methylation patterns; thus, novel therapeutic agents targeting other histone demethylases may be an option to inhibit disease recurrence and/or progression in tumors with KMT2D alterations [35,36]. Given that the data from UTUC patients from the TCGA show that KMT2D alterations are co-expressed with fibroblast growth factor receptor (FGFR3) alterations in more than half of the cases, a plausible treatment strategy may also be a combination of FGFR3 inhibitors with histone demethylase inhibitors [37]. However, this concept needs to be assessed in patients with treatment-naïve or previously treated advanced UTUC requiring systemic therapy.
The present study is, to our knowledge, the first to investigate KMT2D expression by IHC as well as the correlation of KMT2D protein expression and mutational status with survival outcomes. Nevertheless, our study is limited by its retrospective design and relatively small sample size, which limit the power of the study. However, several factors strengthen our study, as tissue samples were obtained from a single institution. Additionally, KMT2D expression and alterations in tumor tissue and normal urothelium were assessed at a single time point in samples obtained from RNU. Further well-designed studies should be conducted to test the prognostic and predictive biomarker potential of KMT2D expression and KMT2D alterations.
Conclusions
KMT2D expression and mutational status did not emerge as prognostic markers for UTUC; however, KMT2D alterations and expression were associated with features of clinically aggressive UTUC such as multifocality, ureteral location, and previous bladder cancer. Therefore, determination of KMT2D expression and KMT2D alterations may hold potential for identifying the best treatment strategy for UTUC patients.

Institutional Review Board Statement: This study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board. All patients gave informed consent, and study approval was obtained from the local ethics committees.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Weakly first-order quantum phase transition between Spin Nematic and Valence Bond Crystal Order in a square lattice SU(4) fermionic model
We consider a model Hamiltonian with two SU(4) fermions per site on a square lattice, showing a competition between bilinear and biquadratic interactions. This model has generated interest due to possible realizations in ultracold atom experiments and the existence of spin liquid ground states. Using a basis transformation, we show that part of the phase diagram is amenable to quantum Monte Carlo simulations without a sign problem. We find evidence for spin nematic and valence bond crystalline phases, which are separated by a weak first-order phase transition. A U(1) symmetry is found to emerge in the valence bond crystal histograms, suggesting proximity to a deconfined quantum critical point. Our results are obtained with the help of a loop algorithm which allows large-scale simulations of bilinear-biquadratic SO(N) models on arbitrary lattices in a certain parameter regime.
Introduction -Extended symmetries often offer a way to realize new phases of matter in simple models of strongly correlated quantum systems. An important motivation for extended symmetries comes from studying the limit where the number of internal degrees of freedom N becomes large, a ubiquitous tool in theoretical physics [1][2][3]. Indeed, this large-N limit is often tractable analytically, allowing a better physical understanding and giving a starting point for an expansion aimed at characterizing the small-N, physical, cases. In quantum magnetism, this approach was pioneered by enlarging the symmetry group to SU(N), where it was for instance predicted, using field-theoretical analysis [4,5], that the well-known antiferromagnetic (Néel) ordered phase present on the square lattice at small N is replaced by a valence-bond crystal (VBC) that breaks lattice symmetries at large N. For several SU(N) representations and different lattices, numerical studies have confirmed the existence of ground-states without magnetic long-range order [6][7][8][9][10][11]. Extended symmetries are not only useful as a theoretical knob, but are also meaningful for describing experimental systems: for instance, SU(4) symmetry is relevant for materials with strong spin-orbit coupling [12,13], while SO(4) symmetry has been suggested for twisted bilayer graphene [14]. In atomic physics, alkaline-earth ultracold atoms show an almost perfect realization of SU(N) symmetry groups with high values of N [15][16][17][18][19][20], while spin-3/2 fermions can realize SO(5) symmetry [21,22]. Recent experiments with ultracold atomic systems show that low temperatures can be reached for SU(N)-symmetric alkaline-earth elements [23], while a filling of two fermions per site can be realized [24] as it avoids three-body losses.
The competition between different energy terms, compatible with extended symmetries, is another fruitful approach to engineer unconventional phenomena [25]. For instance, the competition between VBC and Néel ordered phases found in large-N theories triggered a large interest due to the possibility of a generically continuous deconfined quantum critical point (DQCP) [26][27][28] between these two phases of matter, in contradiction with naive expectations from Landau-Ginzburg theory. A continuous transition can be observed numerically either by artificially treating N as a continuous parameter [29], or due to the competition between terms involving two and four or more spins, for a large variety of SU(2) and SU(N) models [28,[30][31][32][33][34]. An excellent agreement with large-N DQCP predictions is obtained as N is increased [32]. For magnetic systems hosting spins larger than 1/2, another important competing term compatible with SU(N ≥ 2) symmetry is a biquadratic coupling between two spins. Biquadratic terms are also relevant for cold-atomic systems [35][36][37][38][39]. For spin-1 systems in two dimensions (2D), it is possible to obtain a (spin) nematic (or ferroquadrupolar) ground-state that breaks SU(2) symmetry, without any local magnetization, but with a finite quadrupolar order [40]. For instance, the bilinear-biquadratic Heisenberg model on the square lattice exhibits a very rich phase diagram [41][42][43], including a nematic phase. For a quasi-one-dimensional spin-1 model, Harada et al. [44] found numerical evidence for a continuous transition between a nematic and a VBC phase. The VBC phase does not survive in the isotropic 2D limit, leading instead to a magnetically ordered phase which exhibits a first-order transition to the nematic phase. This system was analyzed with a bond-operator treatment in Ref. [39], predicting a generic first-order nematic-VBC transition, along with a discussion of possible spin liquid behavior for SO(N) symmetry at large N. On the other hand, a general discussion of nematic behavior from the perspective of a continuum field theory incorporating the role of Berry phases [45] allows for a continuous DQCP to a VBC phase for quasi-one-dimensional SO(3) models. In a subsequent quantum Monte Carlo (QMC) numerical study, Kaul [46] showed that a pure biquadratic model on a triangular lattice, which is known to host a nematic ground-state and has an extended SO(3) symmetry [46,47], can exhibit VBC or spin-liquid ground-states when the symmetry is extended to SO(N) for large-enough N and/or in the presence of further competing interactions [48]. The phase transitions between spin nematic and VBC phases were found to be discontinuous.
In this work, we consider a square lattice model built out of two SU(4) fermions per site, showing a competition between bilinear and biquadratic terms. This model has been discussed earlier [49][50][51][52][53][54], with predictions of symmetry-broken phases [51] as well as of critical spin liquid phases from a projected entangled pair states (PEPS) ansatz [52]. We use an exact mapping to an SO($n_c$) model with $n_c = 6$ colors, and show that part of the phase diagram can be simulated exactly using QMC with no sign problem.

FIG. 1. Upper panels: representative configurations (see text) constructed from snapshots of Monte Carlo configurations. Bottom panels: real-space pattern of the connected bond correlator $D_{\mathrm{corr}}(b) - D^{\mathrm{avg}}_{\mathrm{corr}}$ with respect to the bottom left bond (shown in black). Blue and red mean positive and negative, respectively, and the bond thickness denotes the magnitude. Data are presented for θ = −0.74π (nematic) and θ = −0.5π (VBC).
Model definitions -We first define the model with two SU(4) fermions per lattice site, which form a 6-dimensional space at each site [49][50][51][52], with the Hamiltonian
$$H = \sum_{\langle i,j \rangle} \left[ J\, \mathbf{S}_i \cdot \mathbf{S}_j + K \left( \mathbf{S}_i \cdot \mathbf{S}_j \right)^2 \right],$$
where J = cos(θ) and K = sin(θ). By analogy with the usual SU(2) spin case, the 15-dimensional vector $\mathbf{S}$ is formed by the generators of SU(4), and the "spin" interaction $\mathbf{S}_i \cdot \mathbf{S}_j$ can be expressed as a linear combination of symmetric projectors on different irreducible on-site representations (see Ref. [52]). The model exhibits an enlarged SU(6) symmetry at J = 0 (θ = ±π/2, with the fundamental representation on one sublattice and its conjugate on the other) and at J = K (θ = −3/4π, π/4, with the fundamental representation on each lattice site). QMC studies of the Hubbard model at large interaction find a critical or weakly ordered Néel phase [55,56] at θ = 0. The Hamiltonian can alternatively be written in a basis with $n_c = 6$ color degrees of freedom (c = 1 . . . 6), encoding the six possible states on each site (see Sup. Mat. [57]). Denoting by $\bar{c} = n_c + 1 - c$ the complementary color of c, the Hamiltonian can be recast, up to an irrelevant constant, in the color-basis form of Eq. 4 (see Sup. Mat. [57]). In this form, the model has non-positive off-diagonal matrix elements when J ≤ 0 and K ≤ J, resulting in the sign-problem-free region $\{\theta_{\mathrm{SF}}\} = [-3/4\pi, -\pi/2]$ for QMC simulations in this color basis. A variational wave-function analysis [51] predicts the existence of VBC (dimerized) and ferromagnetic phases in this region. Quite interestingly, PEPS computations [52] find in the same region indications of a lack of ordering, with two variational (critical) spin liquid wave-functions having very competitive energies. We adapt (see details in Sup. Mat. [57]) an efficient QMC loop algorithm for bilinear-biquadratic spin-1 models [58] to simulate the model Eq. 4 in $\{\theta_{\mathrm{SF}}\}$. We perform simulations of square lattice samples with $N = L^2$ sites, with linear size L up to 96 and inverse temperature up to $\beta = 2L$ in units of $1/\sqrt{J^2 + K^2}$, to reach ground-state properties. Our results can be summarized as follows (see Fig. 1). We find that the region $\{\theta_{\mathrm{SF}}\}$ hosts two ordered phases: a VBC phase (known [6] to exist at θ = −π/2) as well as a nematic phase, defined by a spontaneous symmetry-breaking choice of color pairs, which appears to have been missed earlier. Cartoon representations of QMC configuration snapshots for these two phases are provided in Fig. 1, where states c and $\bar{c}$, which form a nematic pair, are represented by different shades of the same color, and bonds of the same color are drawn between neighboring lattice sites hosting c and $\bar{c}$. In the nematic phase, one of three possible colors dominates, whereas in the VBC phase there is no dominance of a single color, but most neighboring lattice sites are connected by bonds. The VBC pattern is not easily discernible, and a more detailed study of the dimer correlations in the VBC phase is presented later in this manuscript. We provide evidence for a very weak first-order transition between the VBC and the nematic phase at $\theta_c = -0.5969(1)\pi$. The VBC phase is furthermore found to exhibit an emergent U(1) behavior along the whole range $[\theta_c, -\pi/2]$ amenable to QMC, restricting our ability to classify this phase into columnar, plaquette, or mixed order [59,60]. This emergent symmetry is strongly reminiscent of the behavior observed at or close to a DQCP [28,34,[61][62][63][64][65].
We suggest that our results could correspond to a runaway flow close to a potential DQCP fixed point, similar to the theory of the transition between nematic and VBC phases presented by Grover and Senthil [45] for an SO(3) quasi-one-dimensional model, calling for a similar analysis in the SO(6) case.
Long-range ordered phases -To motivate the presence of nematic ordering in the range $\theta \in [-0.75\pi, \theta_c]$, we present Cartan and nematic correlation functions (defined below) for a system of linear size L. We use three Cartan operators, diagonal in the color basis, forming the vector $\mathbf{C} = (C_1, C_2, C_3)$ at any site. To identify simple (anti-)ferromagnetic ordering, we consider the Cartan correlator $C_C = \langle \mathbf{C}_{r=(0,0)} \cdot \mathbf{C}_{r=(L/2,L/2)} \rangle$, whereas $C_N = \langle Q_{1,r=(0,0)}\, Q_{1,r=(L/2,L/2)} \rangle$, with the traceless operator $Q_1 = C_1 C_1 - \tfrac{1}{6}$, is used to identify nematic ordering. Details about the choice of Cartan operators and their connection to the spin operators of SU(4) are provided in Sup. Mat. [57].
Large-size behaviors of these correlators are displayed in Fig. 2(a,b) for θ = −0.65π (located in the nematic phase, relatively far from the critical point), where we clearly see that there is long-range ordering in the nematic correlator but not in the Cartan correlator. We now turn to the VBC phase, which we first illustrate by the real-space pattern (Fig. 1) of dimer correlations, defined as $D_{\mathrm{corr}}(b) = \langle (\mathbf{C}_{0,0} \cdot \mathbf{C}_{1,0})(\mathbf{C}_{r^b_1} \cdot \mathbf{C}_{r^b_2}) \rangle$. Here b indicates a bond connecting nearest-neighbor sites $r^b_1$ and $r^b_2$. Data in Fig. 1 are taken at the SU(6) point θ = −0.5π, where previous simulations [6] showed the existence of long-range VBC order, but without specification of the type of crystal encountered. Note that we only present the connected correlation function, i.e. the average value $D^{\mathrm{avg}}_{\mathrm{corr}}$ is subtracted out to show only the non-trivial features. An analysis of the pattern in Fig. 1 along the lines of Ref. [66] reveals that it is different from the one expected in a pristine columnar state, but potentially compatible with plaquette order. We next provide a detailed analysis of the symmetry of the VBC ordering.
Emergence of a U(1) symmetry -For this, we define a vector order parameter $\mathbf{D} = (D_x, D_y)$ with $D_x = \sum_i (-1)^{i_x} \mathbf{C}_{i_x,i_y} \cdot \mathbf{C}_{i_x+1,i_y}$ and $D_y = \sum_i (-1)^{i_y} \mathbf{C}_{i_x,i_y} \cdot \mathbf{C}_{i_x,i_y+1}$. We can build a 2D histogram of $\mathbf{D}$ using the spatial configurations generated in the QMC sampling. This is shown for the same parameter values as in Fig. 2, where we clearly see a U(1) symmetry emerging. A similar U(1) symmetry is often observed for VBC phases close to a DQCP [28,[62][63][64] but is generically not expected at the coexistence point of a first-order transition (see, however, recent works [67][68][69]). We find a finite VBC order parameter (characterized by a finite radius in Fig. 2) and a U(1) symmetry (circular shape in Fig. 2) in the entire range $[\theta_c, -\pi/2]$ on the system sizes accessible to us. We expect that eventually, on larger sizes, the histograms would show peaks at specific angles characteristic of the type of crystal ordering (e.g. at 0, ±π/2, π for columnar order), but we are unable to reach this behavior. In the Sup. Mat. [57], we present an analysis of the persistence of this U(1) behavior for large L. We also expect the VBC to subsist for θ > −π/2, even though it is difficult to pinpoint where it vanishes, as sign-free QMC is no longer available there.
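To make the histogram construction concrete, the sketch below accumulates a 2D histogram of $(D_x, D_y)$ from configuration snapshots; it stands in a single scalar per site for the full Cartan vector and uses random fake configurations, so it only illustrates the bookkeeping, not the improved estimators used in the actual simulations.

```python
# Minimal sketch: accumulate the 2D histogram of D = (Dx, Dy) from
# configuration snapshots. A snapshot is reduced to an L x L array of
# scalar "Cartan values" per site (an illustrative simplification).
import numpy as np

def vbc_order_parameter(C):
    """C: L x L array of per-site values for one configuration."""
    L = C.shape[0]
    sx = (-1.0) ** np.arange(L)[:, None]   # staggered sign along x
    sy = (-1.0) ** np.arange(L)[None, :]   # staggered sign along y
    Dx = (sx * C * np.roll(C, -1, axis=0)).sum() / L**2  # bonds along x
    Dy = (sy * C * np.roll(C, -1, axis=1)).sum() / L**2  # bonds along y
    return Dx, Dy

rng = np.random.default_rng(0)
samples = [vbc_order_parameter(rng.choice([-1.0, 0.0, 1.0], size=(16, 16)))
           for _ in range(2000)]
hist, xedges, yedges = np.histogram2d(*zip(*samples), bins=50)
```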
Weak first-order transition - We now present evidence for a weak first-order transition between the nematic and VBC phases. Its weak nature makes it difficult to probe numerically, as several standard indications of a continuous phase transition are observed on small to intermediate length scales, as we now show. As the nematic phase breaks a continuous SO(6) symmetry, it is illuminating to carry out simulations in a basis where this symmetry is made explicit. We call this basis the nematic basis (denoted by N); it is related to the sign-free color basis by an on-site transformation, and both the Hamiltonian in this basis and the explicit SO(6) symmetry are detailed in Sup. Mat. [57]. We can then define a 6-dimensional nematic order parameter M (with components M_s = (1/N) Σ_i |N_s⟩⟨N_s| − 1/6), corresponding to "ferromagnetic" ordering in this basis. The VBC ordering is quantified by the amplitude of the VBC order parameter D² = D_x² + D_y². Given these order parameters, a traditional way of inquiring about the order of the phase transition is to consider their Binder cumulants. We find (see Sup. Mat. [57]) that while they clearly indicate the existence of long-range order away from the critical point, the Binder cumulants have a non-monotonic behavior near θ_c which prevents a conclusive determination of the nature of the phase transition. We further consider the nematic "color" stiffness ρ_c, defined using the spatial winding of loops in the QMC simulation, where i runs over all the loops in a particular space-time configuration and the spatial winding W_i^α of a particular loop i is an integer counting how many times it wraps around the periodic boundary conditions of the system in the direction α. This definition follows from a similar treatment of an SO(3) system [46]. We expect this stiffness to be finite in the nematic phase, to vanish in the VBC phase, and to scale as L^{−z} (with z the dynamical critical exponent) at a continuous phase transition. Fig. 3 reveals a crossing of curves for different system sizes when rescaling the stiffness by L, which would be a signature of a continuous phase transition with z = 1, close to θ_c ≈ −0.5969(1)π. This behavior is seen up to length scales of L = 36. Further evidence consistent with a continuous transition is provided by studies of the second derivative of the local energy in Sup. Mat. [57], up to L = 36, along with an estimate of the (effective) correlation-length critical exponent ν. Detailed histograms for L = 32 for the energy, nematic, and VBC order parameters are also presented in Sup. Mat. [57], showing no discernible signatures of coexistence and hence compatible with a continuous transition up to this length scale.
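For concreteness, a minimal sketch of the stiffness estimator (Python/NumPy assumed; since the precise normalization belongs to the equation omitted above, the conventional winding-number form ρ = ⟨W_x² + W_y²⟩/(2β) is used here as an assumption):

```python
import numpy as np

def color_stiffness(total_wx, total_wy, beta):
    """Nematic color stiffness from spatial loop windings.

    total_wx[k], total_wy[k]: sums over all loops i of the integer windings
    W_i^x, W_i^y in QMC configuration k. The conventional normalization
    rho = <W_x^2 + W_y^2> / (2*beta) is assumed here.
    """
    wx = np.asarray(total_wx, dtype=float)
    wy = np.asarray(total_wy, dtype=float)
    return np.mean(wx**2 + wy**2) / (2.0 * beta)

# crossing analysis: plot L * color_stiffness(...) against theta for several L
```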
However, for larger sizes, we find a clear coexistence of both phases at the transition. This is shown in Fig. 4 through a Monte Carlo time trace of the QMC data for a system with L = 40. It can be seen that the system switches abruptly between the two phases, consistent with the expectation for a first-order transition. We have also simulated system sizes up to L = 72 and find that the jumps between phases become increasingly unlikely with increasing size. Note that the largest value that D² = D_x² + D_y² can take is 1, for perfect VBC ordering, compared to the value of ≈ 0.007 taken at the transition. This indicates that the transition is only weakly first order, and explains why it cannot be identified on smaller sizes. Note that, as the nematic phase breaks a continuous symmetry, the values of M² show a spread in Fig. 4 not only at the transition but also within the nematic phase. In the Sup. Mat. [57], we also provide a comparison with the same transition occurring for the model Eq. 4 with 5 colors, corresponding to an SO(5) symmetry.
Conclusion and perspectives - In conclusion, using large-scale unbiased QMC simulations, we have shown the existence of a spin nematic phase bordered by a VBC phase (for θ > θ_c) and a ferromagnetic phase (θ < −3π/4) in a system of SU(4) fermions with two particles per site. While the ferromagnetic/nematic transition is strongly first order (level crossings can be observed in exact diagonalization of small clusters [52]), we showed that the transition between the nematic and VBC phases is weakly first order. The relevance of biquadratic terms in cold-atomic systems [35-39] suggests that this model and its corresponding quantum phase transition can be realized in ultracold atomic setups. Note that a spin nematic phase has been observed in spin-1 spinor condensates [70]. The field-theory analysis of Ref. [45], written for SO(3), is expected to apply here as well; the fact that our model is defined on a square lattice (where only fourfold instantons are allowed) and enjoys a higher SO(6) symmetry (suggesting a higher scaling dimension of instanton events) hints at an even more likely occurrence of a DQCP described by a similar field theory. We note that Ref. [45] predicts a U(1) symmetry in the VBC order parameter, which we do observe in our simulations. There are several reasons for a flow away from a putative DQCP. As mentioned in Ref. [45], the U(1) symmetry-breaking operator can be relevant, which would cause a deviation from the DQCP. In our case, we do not see any evidence for a broken U(1) at the length scales we can access. Another possibility would be that instabilities not present in the SO(3) theory of the nematic-to-VBC transition for spin-1 systems have to be considered for the extended SO(6) symmetry present in the Hamiltonian studied in this work, calling for such a field-theoretical analysis. Based on the above considerations, further fine-tuning of the weak first-order transition to a potential DQCP may be achieved by using another lattice (e.g. honeycomb), by including diagonal bonds (promoting plaquette order), or by adding four-spin terms (favoring columnar order). While we were able to pinpoint the first-order nature of the transition in our work, in this perspective it would be useful to consider improved methods to probe weak first-order phase transitions, such as the recent proposal of Ref. [71]. It is also interesting to contrast our results with those of recent studies [67-69] observing emerging symmetries at weak first-order transitions in other models: we have checked that we do not find an enhanced symmetry between the VBC and nematic order parameters at θ_c (at least on the accessible lattice sizes). Finally, we mention that the QMC algorithm in Sup. Mat. [57] (see also references [72-76] therein) makes it possible to efficiently simulate bilinear-biquadratic SO(n_c) models with arbitrary numbers of colors n_c, and on all lattices (including frustrated ones), with no sign problem in the range {θ_SF}. Given the wide variety of exotic phases of matter, including spin liquids, that were encountered in previous studies of SO(N) models with purely biquadratic interactions (θ = −π/2) [46,48,77,78], it thus paves the way for further fruitful explorations of exotic quantum physics in models with extended symmetries and competing energy scales.
We thank D. Poilblanc for useful discussions and collaboration on related work. This work benefited from the support of the project LINK ANR-18-CE30-0022-04 of the French National Research Agency (ANR). We acknowledge the use of HPC resources from CALMIP (grants 2020-P0677 and 2021-P0677) and GENCI (grant x2021050225). We use the ALPS library [79,80] for some of our QMC simulations.
Supplemental Material for "Weakly first-order quantum phase transition between Spin Nematic and Valence Bond Crystal Order in a square lattice SU(4) fermionic model"

The 6-dimensional representation of SU(4), corresponding to the two-box antisymmetric Young tableau, can be interpreted as the on-site Hilbert space. By analogy with SU(2), where the generators are S^x (real symmetric), S^y (imaginary antisymmetric) and S^z (diagonal), we use the alternative notation X_1, . . ., X_6 for S^(1), . . ., S^(6); Y_1, . . ., Y_6 for S^(7), . . ., S^(12); and Z_1, Z_2, Z_3 for S^(13), . . ., S^(15). The convention used in this paper for the matrix representation of these generators is given in Tables I and II. The diagonal and off-diagonal matrix elements of the two-site Hamiltonian H − (K/4)1 in the basis B are given in the corresponding table, where we use the "state" notation s_i for |i⟩.
In this basis B, as seen in the above table, the Hamiltonian suffers from a sign problem except when J = K ≤ 0, which corresponds to one of the SU(6) points. From the above table, one immediately notices that, for J = K, the Hamiltonian is just a six-color exchange model on the two interacting sites.
Interestingly, the range of parameters for which the model is sign-free can be extended to a finite range of {J, K} including the two SU(6)-symmetric points 6-6 (J = K < 0) and 6-6̄ (J = 0, K < 0).
The sign-free basis C (for "color" basis) is constructed by a simple redefinition of the six states of B, Eq. (3). Basically, this transformation merges the J − K and K − J off-diagonal amplitudes of the model in the original basis B, as can be seen from the matrix elements in the new basis C. A simple inspection of the right column of the corresponding table shows that the sign-free condition in C is now J < 0, K < 0, and K < J.
Let us remark that the basis change (3) is a uniform on-site transformation that does not require any hypothesis about the bipartite nature of the lattice.
Adding the blue parenthesized (summing to zero) terms in the above table, and introducing the notation c̄_i = c_{7−i}, leads to a more compact expression for the Hamiltonian in basis C, well suited to the quantum Monte Carlo algorithm presented later; this is the form presented in the main text, up to the irrelevant constant K/4. In the case of SU(2) spins, (anti)ferromagnetic order can be probed using two-point S^z correlations ⟨S^z_i S^z_j⟩. The SU(4) generalization involves the 3 generators of the Cartan subalgebra. Among the possible choices, we can consider the natural set {Z_1, Z_2, Z_3} or the one adopted in the main text, {C_1, C_2, C_3}. Of course, these two sets carry the same information and are simply related by linear relations.
QUANTUM MONTE CARLO LOOP ALGORITHM FOR THE BILINEAR-BIQUADRATIC SO(6) 6-COLOR HAMILTONIAN
This section details how to implement an efficient cluster quantum Monte Carlo algorithm for the SO(6) Hamiltonian (4). The algorithm presented below is a simple adaptation of the so-called non-binary loop algorithm proposed by Kawashima and Harada [58] for bilinear-biquadratic spin-1 models in the region θ ∈ [−3π/4, −π/2]. We present it within the Stochastic Series Expansion [72] framework, keeping an arbitrary number of colors n_c in the construction, meaning that it can be applied directly to the same SO(n_c) Hamiltonian (we specialize to n_c = 6 in the simulations presented in the main text).
As Stochastic Series Expansion calculates expectation values by sampling over operator strings generated upon expanding Tr[e^{−βH}], we seek a convenient representation for the operator string. We decompose the Hamiltonian given in Eq. (4) into bond operators, where sums over indices are implied. In addition to these operators, we add an identity operator I_i, indexed by site number i, which allows us to implement efficient updating methods. Using this notation, a given operator string maps to a configuration of vertices dictated by the rules discussed above. This can also be seen as a loop configuration by connecting the legs of vertices occurring sequentially in the operator string. This is a well-established procedure for quantum Monte Carlo, and examples of such loop configurations can be found in Ref. [58]. Starting from a random operator string, we can sample relevant operator strings using the two following steps of the algorithm. Diagonal update: The diagonal elements of the Hamiltonian can be inserted/removed in the diagonal update. When an identity operator is encountered, one proposes to insert a diagonal operator on a random bond with probability 2NKβ/(M − m), where N is the number of lattice sites, m is the current number of non-identity operators, and M is the fixed cutoff for the operator string length, which is set large enough to accommodate all fluctuations of m [72].
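A minimal sketch of these acceptance probabilities (Python assumed; this is an illustration of the update described above, not the authors' code):

```python
import random

def diagonal_update_slot(is_identity, m, M, N, K, beta):
    """One slot of the SSE diagonal update; returns the updated count m.

    Insertion uses p = 2*N*K*beta / (M - m); removal uses the reciprocal
    ratio (M - m + 1) / (2*N*K*beta), as in the text. The caller is assumed
    to have checked that the propagated colors on the chosen bond give a
    non-zero diagonal matrix element.
    """
    if is_identity:
        if random.random() < min(1.0, 2 * N * K * beta / (M - m)):
            return m + 1    # identity replaced by a diagonal operator
    else:
        if random.random() < min(1.0, (M - m + 1) / (2 * N * K * beta)):
            return m - 1    # diagonal operator replaced by an identity
    return m
```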
Only two situations (for an even number of colors n_c) for the colors of the currently propagated states lead to an insertion. For an odd number of colors, one must be careful to consider the case c = c′ = c̄ separately, as both types of operators have non-zero matrix elements in this case, and the probability of addition should be proportional to |K|.
When a diagonal operator is encountered, it is removed with probability (M − m + 1)/(2NKβ). Loop update: A loop is seeded by picking a leg of a vertex at random (carrying some color c_0) and propagating a loop of a randomly selected color c_l ≠ c_0. When the loop hits a vertex on a certain leg (e.g. a leg with state c_1, as shown in the diagram above), it first changes the color c_1 → c_l and then continues its path using different moves depending on the type of vertex encountered: 1. Cross vertices: when c_1 = c_2 (but c_2 ≠ c̄_1), the loop performs a diagonal move c_2 → c_l and continues propagating (with color c_l). 2. Horizontal vertices: when c_1 = c̄_2 (but c_2 ≠ c_1), the loop reverses its direction and color, c_l → c̄_l, switches c_2 → c̄_l, and continues propagating (with color c̄_l). 3. Mixed vertices: when c_1 = c_2 = c̄_2, then with probability p_diag = J/K the loop performs a diagonal move (move 1), and with probability 1 − p_diag it switches and reverses (move 2). The loop propagates until it returns to its initial starting point, and the resulting loop is accepted with probability one.
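A minimal sketch of the move selection at a vertex, mirroring the three rules above (Python assumed; colors are integers 1..n_c with conjugate c̄ = n_c + 1 − c, and the bookkeeping of legs and directions is left out):

```python
import random

def cbar(c, n_c=6):
    """Complementary (conjugate) color: cbar = n_c + 1 - c."""
    return n_c + 1 - c

def vertex_move(c1, c2, c_l, J, K, n_c=6):
    """Select the loop move at a vertex whose entry leg held c1 (now flipped
    to c_l) and whose relevant partner leg holds c2.

    Returns (kind, new_c2, new_c_l), with kind in {"diagonal", "switch"}:
    "diagonal" keeps direction and color, "switch" reverses both.
    """
    if c1 == c2 and c2 != cbar(c1, n_c):      # cross vertex -> diagonal move
        return "diagonal", c_l, c_l
    if c1 == cbar(c2, n_c) and c2 != c1:      # horizontal vertex -> switch
        return "switch", cbar(c_l, n_c), cbar(c_l, n_c)
    # mixed vertex: self-conjugate color c1 = c2 = cbar(c2) (odd n_c only)
    if random.random() < J / K:               # p_diag = J/K as in the text
        return "diagonal", c_l, c_l
    return "switch", cbar(c_l, n_c), cbar(c_l, n_c)
```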
At θ = −π/2 and for bipartite lattices, the model is SU(n_c)-symmetric and the algorithm is identical to the one derived for SU(N) models [30,81]. At θ = −3π/4, the model is also SU(n_c)-symmetric (with the fundamental representation on each lattice site). Quite importantly, the algorithm does not depend on the bipartite nature of the lattice and can thus be applied to any lattice. A special case of the algorithm at θ = −π/2 has been used for studies of SO(3) triangular lattice models, and of SO(n) models on kagome and triangular lattices [46,48,77]. Note also Ref. [74], which studies the spin-1 bilinear-biquadratic model on the triangular lattice using a similar 3-color loop algorithm in the region {θ_SF}.
MAPPING TO NEMATIC HAMILTONIAN AND SO(6) SYMMETRY
To make the nematic ordering generated by Hamiltonian (4) more explicit, we first reproduce the transformation of the color states to the nematic basis from the main paper; one consistent choice is |N_c⟩ = (|c⟩ + |c̄⟩)/√2 and |N_c̄⟩ = i(|c⟩ − |c̄⟩)/√2. Using these relations and noting that the second term in Eq. (4) can be written as Σ_c |c c̄⟩ Σ_s ⟨s s̄|, a simple substitution shows that |c c̄⟩ + |c̄ c⟩ = |N_c N_c⟩ + |N_c̄ N_c̄⟩, leading to Σ_c |N_c N_c⟩ Σ_s ⟨N_s N_s|. To transform the first term in Eq. (4), we first note that the 36 terms in the complete sum over c, c′ can be separated into sets of 4, each given by |c c̄⟩⟨c′ c̄′| + |c c̄⟩⟨c̄′ c′| + |c̄ c⟩⟨c′ c̄′| + |c̄ c⟩⟨c̄′ c′| (8). Performing the transformation on the first two terms and only on the first site combines them into expressions involving |N_c⟩ and |N_c̄⟩ on that site. Following this with the same transformation for the last two terms, a subsequent transformation of the second site, and a careful counting of the remaining terms leads to Eq. (8) being expressed in the nematic basis with the same structure as in the color basis; as one can see, this term retains its form, and so does the complete Hamiltonian in this basis. To study the symmetries of this Hamiltonian, we first consider Σ_c |N_c N_c⟩ under an SU(6) transformation U on one sublattice and U* on the complementary sublattice; the sum reduces to Σ_b |N_b N_b⟩ as U is unitary, and its form is thus preserved. For terms such as Σ_c |N_c N_c⟩⟨N_c N_c|, we transform using U on both sublattices, leading to a preservation of the form by similar arguments. The above statements imply that for a Hamiltonian with both terms invariant, we would require U* = U. This condition is satisfied by elements of the orthogonal group SO(6), which comprises the real matrices generating proper rotations in six dimensions. We note that an almost identical transformation is found in Ref. [75] for an SU(4) antiferromagnet and in Ref. [46] for a spin-1 biquadratic model on the triangular lattice.
ENERGY, ORDER PARAMETERS AND THEIR BINDER CUMULANTS NEAR THE PHASE TRANSITION
In this section we present a detailed description of the numerical data near the quantum phase transition located at θ_c ≈ −0.5969(1)π, for both the energy and the Binder cumulants of the order parameters.
Energy histograms - A first-order phase transition, if strong enough, can be detected by the existence of two peaks in the histogram of the energy (recorded during the Monte Carlo simulations), corresponding to the energies of the two coexisting phases. In the top panel of Fig. 5, we present energy histograms for system size L = 32 for different values of θ close to and across the quantum phase transition, where we observe no sign of such a double-peak feature.
Nematic order parameter distribution - For the SO(6) version of the Hamiltonian, each site can take one of 6 colors. To study the nematic ordering, we use the 6-dimensional nematic order parameter defined in the main text. In the disordered phase, M_c is expected to have a Gaussian distribution with mean zero, independent of all M_{c′≠c}. This implies that the Binder cumulant defined as U_{M_c} = ⟨(M_c)⁴⟩/⟨(M_c)²⟩² evaluates to three in the disordered phase. In the ordered phase, U_{M_c} evaluates to a finite value which is not unity, due to the SO(6) symmetry. This can be observed in the histograms of the nematic order parameter shown in Fig. 5 (middle panel) as one crosses the quantum phase transition. The distribution changes from Gaussian in the VBC phase, where the nematic order parameter is disordered (right side of the panels), to a skewed distribution whose shape is dictated by the underlying SO(6) symmetry in the nematic phase (left side). Once again we find a lack of double-peak distributions, showing consistency with a continuous phase transition on length scale L = 32.
To understand the shape of this distribution, consider first a sample product state drawn from the Monte Carlo simulation in the nematic basis N. Note that in this basis the nematic phase corresponds to a simple SO(6) ferromagnet. Let us denote by a_c the fraction of sites hosting color N_c. As we expect nematic ordering, without loss of generality a_0 can be assumed to be larger than all other a_c, with all other a_c equal due to the remnant symmetry among the non-dominant colors. Now consider the operator M⁰_i = |N_0⟩⟨N_0| acting at site i. Using the shorthand |c⟩ = |N_c⟩ only for this section, we see that ⟨c|M⁰_i|c′⟩ = 1 for c = c′ = 0 and 0 otherwise. As the product state of the system is representative of the ordering, we must include all states reached by SO(6) rotations starting from this state. This can be engineered in a straightforward manner by applying the rotation to M⁰_i using an SO(6) matrix O. Due to the constraints on M⁰_i, this reduces to the matrix A_{kl} = O_{0k}O_{0l}. As we are working with a product state, applying this at site i in state c, we get A_i = (O_{0c})². As we have assumed that the fraction of sites in state c is a_c, Σ_i A_i reduces to Σ_c a_c (O_{0c})². Using the conditions that all a_c are equal except a_0 and that Σ_c a_c = 1, we can write a_0 = 1/6 + r and a_{c≠0} = 1/6 − r/5. This implies that Σ_c a_c(O_{0c})² can be broken into (1/6 − r/5) Σ_c (O_{0c})² + (r + r/5)(O_{00})². We can reduce the first term by using the identity OOᵀ = 1, which implies Σ_c (O_{0c})² = 1. This leaves a dependence on the SO(6) matrix only through (O_{00})², which must be averaged over the group; O_{00} is distributed as one component of a random unit vector in six dimensions, so that (O_{00})² follows a Beta(1/2, 5/2) distribution with the required moments ⟨(O_{00})²⟩ = 1/6 and ⟨(O_{00})⁴⟩ = 1/16. Combining these results, we conclude that the value of the Binder cumulant in a nematic ordered state is U_{M_0} = 114/25 = 4.56, in agreement with the Monte Carlo simulations presented below in the region of parameter space where we expect nematic ordering.
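This group average is easy to verify numerically; a minimal sketch (Python/NumPy assumed, an illustration rather than the authors' code) samples the first component of a random unit vector, which has the same distribution as O_00:

```python
import numpy as np

def nematic_binder(n, samples=1_000_000, seed=0):
    """Binder cumulant of one nematic component deep in the SO(n) ordered
    phase, via the moments of x = (O_00)^2 - 1/n, where O_00 is one
    component of a uniformly random unit vector in R^n."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(samples, n))
    u0 = v[:, 0] / np.linalg.norm(v, axis=1)  # first component of a unit vector
    x = u0**2 - 1.0 / n
    return np.mean(x**4) / np.mean(x**2) ** 2

print(nematic_binder(6))   # ~4.56  = 114/25 for SO(6)
print(nematic_binder(5))   # ~3.818 = 42/11 for SO(5), used later in the text
```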
VBC order parameter distribution - To detect VBC ordering, we use D² = D_x² + D_y² (with D_x = Σ_i (−1)^{i_x} C_{i_x,i_y} · C_{i_x+1,i_y} and D_y = Σ_i (−1)^{i_y} C_{i_x,i_y} · C_{i_x,i_y+1}, as in the main text) and similarly define the Binder cumulant U_D = ⟨(D²)²⟩/⟨D²⟩². In the disordered phase, (D_x, D_y) form a two-dimensional Gaussian distribution, leading to U_D = 2. In the ordered phase, U_D = 1, as fluctuations of D² are small compared to its mean value. Note that D² is sensitive only to the development of non-zero VBC ordering and does not differentiate between various types of VBC order, such as columnar and plaquette.
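The two limiting values can be checked with a quick numerical sketch (Python/NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(1)

# Disordered phase: (Dx, Dy) is a 2D Gaussian -> U_D = 2
dx, dy = rng.normal(size=(2, 1_000_000))
d2 = dx**2 + dy**2
print(np.mean(d2**2) / np.mean(d2) ** 2)   # ~2.0

# Ordered phase with emergent U(1): nearly fixed radius, random angle -> U_D = 1
r = 1.0 + 0.01 * rng.normal(size=1_000_000)
d2 = r**2                                  # |D|^2 is angle-independent
print(np.mean(d2**2) / np.mean(d2) ** 2)   # ~1.0
```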
The histograms for the VBC order parameter shown in the bottom panel of Fig. 5 all show a circular shape, but with a finite radius that decreases as one moves towards the nematic phase (the finite values in the left panels, located in the nematic phase, are associated with the finite size L = 32). Binder cumulants - We finally present in Fig. 6 the values of the Binder cumulants as a function of θ close to the phase transition, for different system sizes. We observe a non-trivial, non-monotonous behavior for both the nematic U_{M_c} (top panel) and the VBC U_D (bottom panel) Binder cumulants.
For the nematic Binder cumulant, data on small systems range between the disordered value 3 (reached for large enough θ) and the expected ordered value 4.56 (reached for θ < θ_c). On the other hand, starting from L ≈ 18, the Binder cumulant curve overshoots the ordered value as one approaches the transition point θ_c from above, with curves showing a steeper overshoot as L is increased. For a first-order transition, a somewhat similar behavior is predicted [76] on the basis of a two-peak distribution of the order parameter (which we do not observe, see above), resulting in a value of the Binder cumulant at the maximum scaling with the volume L². We have checked that the maximum of U_{M_c} does not scale as the volume L², at least on the lattice sizes accessible to us. Curves for different system sizes cross at different values of θ, which is usually indicative of a first-order transition (note however the very narrow range of θ displayed in Fig. 6). The non-monotonous behavior does not allow us to conclude on the order of the phase transition (in particular, a data collapse is not satisfying), but we note that the sharp overshoot feature converges towards our estimate θ_c ≈ −0.5969π obtained from the stiffness crossing (see main text).
A similar, albeit slightly different, non-monotonous behavior is observed for the VBC Binder cumulant, with a somewhat smoother overshoot above the disordered value. Here again the maximum does not scale with the volume, and could actually be converging to a finite value given the data on the largest systems that we could simulate (L = 36, 40). The maximum anomaly also converges towards our estimate of θ_c.
Overall we conclude that the Binder cumulants of both order parameters do not display the behaviors expected either at a continuous phase transition (no clear unique crossing point) or at a (strong) first-order phase transition (an anomaly scaling as the volume of the system).
NATURE OF THE U(1) SYMMETRY IN THE VBC PHASE
Here we show evidence for the nature of the VBC phase by studying a large L = 96 lattice. In order to determine the nature of the phase, we display in the top panel of Fig. 7 the nematic Binder cumulant in the range [−0.54π, −0.5π]; it clearly approaches the value of 3 expected in the disordered phase. On the other hand, the VBC Binder cumulant (middle panel) is close to 1 in the same range, as expected for an ordered VBC state.
This preliminary check being performed, we now seek the specific symmetry-breaking pattern of the VBC. We find that even at this large system size L = 96, there is no obvious discrete symmetry breaking, as we report in the bottom panel of Fig. 7 for θ = −0.52π. There, the sample histogram of the VBC order parameter at a relatively low temperature of β = 12 clearly displays a U(1) symmetry. Note that we are unable to simulate lower temperatures for L = 96 due to ergodicity constraints and the finite statistics of our simulations, as the system samples only a portion of, and not the full, circle. As seen for the Binder cumulant of the VBC order parameter in Fig. 6, this finite-statistics issue does not affect the estimation of the magnitude fluctuations. Recalling that a component of M is defined as M_s = (1/N)(Σ_i |N_s⟩⟨N_s|) − 1/6, we see that (M_s)² yields a non-zero value for nematic ordering.

COMPARISON WITH AN SO(5) MODEL

[Fig. 9 caption (residue): Binder cumulants and VBC order parameters for the SO(5) and SO(6) models, respectively; a crossing point is visible with increasing system size, and dashed constant lines give a rough estimate of the thermodynamic value of the discontinuity in the order parameter.]

The same model, Eq. (4) with n_c = 5 colors, realizes an SO(5) symmetry, relevant for e.g. spin-3/2 fermionic cold atom systems [21,22].
In the phase-space region defined by θ/π ∈ (−0.75, −0.5), we find a nematic and a VBC phase, separated by a direct transition, similar to the SO(6) case studied in the main text. The behavior of both Binder cumulants is shown as a function of θ/π in Figs. 9(a) and (b).
To identify the nematic phase, we calculate the Binder cumulant of the nematic order parameter, defined similarly to the SO(6) case. Repeating the argument above for the case of an SO(5) symmetry, we find p(x) ∝ (1 − x²) for the distribution of the first component, and the Binder cumulant in the ordered phase evaluates to 42/11.
We find (Fig. 9(a)) that the nematic Binder cumulant tends to the predicted theoretical value 42/11 ≈ 3.818 in the parameter range θ/π ∈ (−0.75, −0.5443(2)), beyond which we find a VBC phase, indicated by the approach of the VBC Binder cumulant to unity (Fig. 9(b)). We also observe the development of a non-monotonic behavior with increasing size, similar to the SO(6) case, indicating a possible first-order transition.
While it is difficult to differentiate between weak and very weak first-order phase transitions, given the large length scales involved and the large number of components in these models, we now present two numerical observations which lead us to conclude that the first-order nature of the transition is weaker for SO(5) than for SO(6).
The first is the ergodicity achieved by our QMC algorithm for sizes close to L = 48 for SO(5). As shown in the main text, the algorithm suffers from strong metastability at size L = 40 for SO(6), making it impossible to obtain reliable data for larger sizes. This feature is absent for SO(5), at least up to sizes of L = 72. This shows that the transition is not of a strong first-order nature, where we would expect the algorithm to oscillate between two qualitatively different phases.
The second observation involves the behavior of the VBC order parameter close to the transition as it approaches zero. Both SO(5) (Fig. 9(c)) and SO(6) (Fig. 9(d)) show crossing points in the VBC order parameter, which are not expected at a conventional continuous transition. This allows us to estimate the size of the discontinuity in the VBC order parameter at the transition (assuming that it is first order), and we show a rough estimate of the thermodynamic discontinuity in both plots using dashed constant lines. A comparison of Figs. 9(c) and (d) shows that the discontinuity for SO(6) is roughly a factor of 2 greater than that for SO(5), also suggesting that the SO(5) model realises a weaker first-order transition.
Early postoperative treatment of mastectomy scars using a fractional carbon dioxide laser: a randomized, controlled, split-scar, blinded study
Background Mastectomy leaves unsightly scarring, which can be distressing to patients. Laser therapy for scar prevention has been consistently emphasized in recent studies showing that several types of lasers, including ablative fractional lasers, are effective for reducing scar formation. Nonetheless, there are few studies evaluating the therapeutic efficacy of ablative CO2 fractional lasers (ACFLs). Methods This study had a randomized, comparative, prospective, split-scar design with blinded evaluation of mastectomy scars. Fifteen patients with mastectomy scars were treated using an ACFL. Half of each scar was randomized to group "A," while the other half was allocated to group "B," and laser treatment was conducted randomly. Scars were assessed using digital photographs and Vancouver scar scale (VSS) scores; histological assessments were also performed. Results The mean VSS scores were 2.20±1.28 for the treatment side and 2.96±1.40 for the control side, a significant difference (P=0.002). The mean visual analog scale (VAS) scores were 4.13±1.36 for the treatment side and 4.67±1.53 for the control side, also a significant difference (P=0.02). Conclusions This study demonstrated that early scar treatment using an ACFL significantly improved the clinical results of treatment compared to the untreated scar, and this difference was associated with patient satisfaction.
INTRODUCTION
Breast cancer is the most widespread type of cancer among women, affecting 2.1 million women each year and causing the most cancer-related deaths among women [1]. It has been estimated that 627,000 women died from breast cancer in 2018, accounting for about 15% of all cancer deaths in women [2]. The incidence of breast cancer is steadily increasing, with 22,550 new cases and 2,353 deaths reported in 2015, based on statistics from the Korea Central Cancer Registry [3]. Since the 5-year survival rate of women with breast cancer is relatively high, at 89% [1], breast reconstruction has become a crucially important part of the care plan for these patients, and the demand for post-mastectomy breast reconstruction is simultaneously increasing [4]. However, mastectomy leaves unsightly scarring, and cosmetically unappealing scars cause distress in patients [5]. In addition, there is growing concern about scarring, and patients are increasingly focusing on cosmetic outcomes. Thus, the proper treatment of breast reconstruction scars is important, and many procedures have been used to reduce scar formation.
Laser treatment has been consistently emphasized in recent studies for scar prevention, showing that several types of lasers, including fractional ablation lasers, are effective for reducing scar formation [6][7][8][9]. Ablative CO2 fractional lasers (ACFLs) heat and vaporize superficial skin layers. Then, the healing process induces new collagen formation and collagen remodeling, which are responsible for scar improvement [9]. Nevertheless, few studies have evaluated the therapeutic effect of ACFLs. Moreover, recent research has begun to focus on early scar interventions. Therefore, the purpose of this study was to evaluate the efficacy of ACFL treatment for breast reconstruction scarring in the early postoperative period.
Study design and patients
This study had a randomized, comparative, prospective, split-scar design with blinded evaluation of mastectomy scars. Between April 2019 and November 2019, 15 patients who underwent total mastectomy and immediate, implant-based breast reconstruction were treated using an ACFL (10,600 nm carbon dioxide; Lutronic Corp., Goyang, Korea). Mastectomy was performed using the same method in all patients. The incision was made from the lateral aspect to the nipple in a linear shape, and closure was based on subcutaneous buried Vicryl sutures and nylon sutures for the skin. Treatment was initiated after suture removal. Half of each scar was randomized to "A," while the other half was allocated to "B." Laser treatment was conducted randomly (Fig. 1). The treatment parameters ranged from 22 to 38 mJ at a density of 300 spots/cm² in the static operating mode, mainly starting at the time of suture removal. Only one pass was made using a scan area of 4 × 4 mm in the static mode, and no other treatment (e.g., tension-relieving devices or silicone gels) was administered. After laser treatment, the patients were advised to apply a hydrocolloid dressing (Duoderm; ConvaTec, Oklahoma City, OK, USA) for 1 week. All patients provided written informed consent before participating in the trial.
Scar assessment
With the same background, exposure, and light source, and using the same digital camera (750D; Canon, Tokyo, Japan), photographs of scars were taken both before and 6 months after laser treatment. Three blinded physicians independently graded the treatment outcomes using the Vancouver scar scale (VSS), which rates scar vascularity, pigmentation, and height from 0 to 3 and pliability from 0 to 5; a score of 0 indicates similarity to normal skin, while the maximum score represents the worst possible scar. Patient satisfaction with overall scar appearance was evaluated using a visual analog scale (VAS) ranging from 0 to 10, where a score of 0 indicates similarity to normal skin and 10 represents the worst possible scar. To investigate histological changes after ACFL treatment, patients who underwent tissue expander insertion were asked to undergo biopsies of the scar area. Three patients agreed, yielding a total of six biopsies at the time of implant exchange surgery: three from ACFL-treated scars and three from untreated control scars. The biopsied tissues were formalin-fixed and paraffin-embedded. For routine histopathological evaluation, 5-μm-thick sections were cut and stained with hematoxylin and eosin. Masson's trichrome stain was used to visualize collagen fibers.
Statistical analyses
Changes in VSS and VAS scores between the treatment and control halves of scars were compared using the paired t-test. A P-value < 0.05 was considered to indicate statistical significance. All analyses were performed with SPSS for Windows, version 20.0 (IBM Corp., Armonk, NY, USA).
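For illustration, a minimal sketch of this paired comparison (Python/SciPy assumed in place of SPSS; the scores below are hypothetical placeholders, not the study data):

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-patient VSS scores for the treated and control halves;
# the test is paired because both halves come from the same scar.
vss_treated = np.array([2, 1, 3, 2, 4, 1, 2, 3, 2, 1, 3, 2, 2, 4, 1])
vss_control = np.array([3, 2, 4, 3, 4, 2, 3, 4, 2, 2, 4, 3, 3, 5, 2])

t_stat, p_value = ttest_rel(vss_treated, vss_control)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")  # significant if p < 0.05
```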
RESULTS
Fifteen patients with 15 scars completed the treatment protocol and follow-up. Table 1 shows the demographic and clinical characteristics of the study group. The average age was 43.4 years. The average length of the scar was 8.3 cm, resulting in an average treatment length of 4.2 cm. ACFL treatment improved clinical outcomes (Figs. 2, 3). The mean VSS scores were 2.20 ± 1.28 for the treatment side and 2.96 ± 1.40 for the control side. There was a significant difference in the VSS score between the treatment side and the control side (P = 0.002). The mean VAS scores were 4.13 ± 1.36 for the treatment side and 4.67 ± 1.53 for the control side. There was a significant difference in VAS scores between the treatment side and the control side (P = 0.02) ( Table 2, Fig. 4). Post-therapy crusting and transient erythema were reported, but all cases resolved in 1 week. No other adverse events, including posttherapy blister formation or infection, were observed.
The biopsy showed well-formed dermal scar lesions in the untreated and treated cases. ACFL treatment revealed increased reticular dermal collagen deposition with normal architecture, more organized collagen fibers, and a thickened epidermis with granular layer hyperplasia and normal stratum corneum (Fig. 5). Masson's trichrome staining revealed increased dermal collagen in the treatment group, and the collagen fibers were more regularly arranged (Fig. 6). These histologic findings are consistent with the clinical outcomes.
DISCUSSION
This was a randomized, comparative, prospective, split-scar study with blinding, in which we evaluated the effects of early ACFL treatment for mastectomy scars using VSS scores, VAS scores, and histological findings. We found that scar texture was improved by ACFL treatment, and patients' VAS evaluations of scar appearance also showed an improvement in the treated scars compared to the untreated scars. Histopathological improvements were also observed on the treated side.
Postoperative scarring with physical sequelae (pain, itching) and the resulting psychological stress can affect not only patients' quality of life, but also their overall satisfaction with the surgical outcomes. These scars remind patients of their illness and may be associated with functional and psychosocial morbidity [10]. Although scar remodeling and clinical improvement require at least a few years after surgery, many patients hope that the scar will be improved as soon as possible.
Many expert opinions exist regarding which laser is ideal for select indications, but there is no consensus regarding the laser type and protocol for the treatment of surgical scars. Tierney et al. [11] showed that fractional photothermolysis was superior to a pulsed dye laser for the treatment of surgical scars. Park et al. [12] reported that ablative fractional laser treatment produced profound skin changes and collagen remodeling in rats compared to nonablative fractional laser treatment. Kim et al. [13] showed that ablative fractional laser treatment of fresh thyroidectomy scars was more effective than nonablative fractional laser treatment. We therefore hypothesized that an ablative fractional laser would be more effective for surgical scars, and chose ACFL treatment rather than a nonablative fractional laser or a pulsed dye laser.
Ablative fractional lasers, which were developed to compensate for the shortcomings of existing ablative resurfacing lasers, are gaining popularity in scar treatment. The fractionation of these laser beams allows healthy skin to be maintained, accelerating the healing process [9]. Fractional lasers create microthermal treatment zones (MTZs): repeated rows of heat damage that also penetrate the upper dermis. Each ablative zone is surrounded by healthy tissue. Spared, viable keratinocytes in the healthy tissue can migrate to the MTZs, where they promote the healing processes of re-epithelization and collagen production [14]. Fractional lasers offer the same benefits as ablative lasers, with dramatically reduced downtime and complications, and straightforward resolution of common side effects such as erythema and edema [9]. We used a pulse energy of 22-38 mJ with a total density of 300 spots/cm², and no patients developed any complications. We chose a relatively low energy level because the mastectomy skin flap was thin, and we considered the presence of the implant under the skin.
Modulation of the wound healing process may hold the key to minimizing scar formation. Wound healing can be divided into three overlapping phases: inflammation, proliferation, and remodeling. Traditionally, mature scars were the target of laser treatment. While laser treatment of mature scars has been shown to lead to some remodeling of scar tissue, early laser intervention for immature scars, and even during wound healing, has been proposed as a preventative approach to improve scar appearance. No consensus yet exists regarding the precise definition of an early intervention, but a recent systematic review defined it as laser treatment within 3 months after wound formation and found some improvement of scar appearance in 40%-75% of the included studies [7]. However, the proper window for laser treatment remains controversial. The current study evaluated the potential of ACFL treatment as an early intervention to improve the appearance of post-surgical scars. We treated patients with the laser within 2 to 3 weeks of surgery, at the time of suture removal, because we believed that re-epithelization would be complete at this point.
Previous reports have suggested that ACFL treatment has an effect on surgical scarring [8,15,16]. However, no studies have evaluated the therapeutic efficacy and safety of ACFL treatment for mastectomy scars. To our knowledge, this is the first study to demonstrate the clinical outcomes and safety of ACFL treatment for mastectomy scars after implant-based breast reconstruction in the early postoperative period. Furthermore, through a within-scar comparison, we were able to prevent potential errors that may arise from differences in scar location and suture method. However, this study also has some limitations. First, the number of patients was small. Second, while scar formation takes at least a few years, the follow-up period in our study was shorter than this. Third, although the histological results indicate positive effects of ACFL treatment, it should be kept in mind that the histological evaluations were performed on only some of the scars, not all of them. Therefore, future studies should consider longer follow-up periods, larger sample sizes, and biopsies of all scars. This study demonstrated that early scar treatment using an ACFL significantly improved the clinical results of treatment compared to the untreated scars, as assessed by VSS scores, VAS scores for patient satisfaction, and histological findings, and found that ACFL treatment had an impact on patient satisfaction. Therefore, early treatment of surgical scars with an ACFL is recommended to achieve better scar cosmesis.
Conflict of interest
Hyun Woo Shin is an editorial board member of the journal but was not involved in the peer reviewer selection, evaluation, or decision process of this article. No other potential conflicts of interest relevant to this article were reported.
Ethical approval
The study was approved by the Institutional Review Board of Kangbuk Samsung Hospital (IRB No. 2019-05-047) and performed in accordance with the principles of the Declaration of Helsinki. Written informed consent was obtained.
NT3 treatment alters spinal cord injury-induced changes in the gray matter volume of rhesus monkey cortex
Spinal cord injury (SCI) may cause structural alterations in the brain due to pathophysiological processes, but the effects of SCI treatment on the brain have rarely been reported. Here, voxel-based morphometry is employed to investigate the effects of SCI and of neurotrophin-3 (NT3)-coupled chitosan-induced regeneration on brain and spinal cord structures in rhesus monkeys. Possible associations between brain and spinal cord structural alterations are explored. The pain sensitivity and stepping ability of the animals were collected to evaluate sensorimotor functional alterations. Compared with SCI alone, the unique effects of NT3 treatment on brain structure appear in extensive regions involved in motor control and neuropathic pain, such as the right visual cortex, superior parietal lobule, left superior frontal gyrus (SFG), middle frontal gyrus, inferior frontal gyrus, insula, secondary somatosensory cortex, anterior cingulate cortex, and bilateral caudate nucleus. In particular, the structure of the insula is significantly correlated with pain sensitivity. Regenerative treatment also shows a protective effect on spinal cord structure. Associations between brain and spinal cord structural alterations are observed in the right primary somatosensory cortex, SFG, and other regions. These results help further elucidate the secondary effects of SCI on the brain and provide a basis for evaluating the effects of NT3 treatment on brain structure.
Traumatic spinal cord injury (SCI) refers to trauma-induced damage to the spinal cord structure and function, which may lead to impairment of sensory and motor functions below the injury level [1]. After primary SCI (the mechanical trauma at the "epicenter" of the damage), resultant cascades of cellular and molecular events known as secondary SCI [2], such as neuronal death [3], demyelination, and nerve fiber degeneration [4], may occur at and beyond the injury level [5]. These usually cause further effects on neural structures beyond the injury level, such as the brain, resulting in sensorimotor disorders, neuropathic pain (NP), and other complications [6]. The accurate observation of such structural and functional alterations facilitates the assessment of the injury/recovery status of clinical patients. MRI has been widely used for non-invasive inspection of the central nervous system, and histological validations have demonstrated its reliability [7-11]. Voxel-based morphometry (VBM) [12] is currently one of the most popular methods for the quantitative investigation of brain meso-structure. It transforms high-resolution 3D MRI images into a unified standard space for voxel-wise quantitative analysis, enabling the accurate, non-invasive detection of brain structural alterations. Many researchers have used it to explore brain structural alterations post-SCI [13-15]. In 2004, Crawley et al. [16] explored gray matter volume (GMV) alterations in the primary motor cortex (M1) of cervical cord injured patients by using VBM. The GMV of M1 in patients was reduced by only approximately 3.8% compared with that of the controls, without statistical significance. They therefore suggested that functional remodeling, rather than anatomical alteration, was more likely to occur within M1 post-SCI. Freund et al. [17] first used VBM to detect longitudinal neurodegenerative processes above the injury level after acute SCI. They conducted a 1-year follow-up to explore gray and white matter atrophy in the corticospinal tract (CST) and sensorimotor cortex of patients and controls. At the level of the internal capsule and the right cerebral peduncle, the volume loss of CST white matter and left M1 gray matter was faster in the patients than in the controls. This longitudinal study explored the spatiotemporal pattern of the neurodegenerative process post-SCI and identified widespread upstream atrophy and meso-structural alterations within corticospinal axons and sensorimotor cortical regions of the patients within the first few months after the injury. Besides the intensive exploration of classical sensorimotor pathways [18], researchers focusing on VBM post-SCI have recently been exploring common post-SCI complications, such as NP [19,20], psychological and cognitive impairment [13,15], and their related brain regions. Meanwhile, several studies have explored the possible effects of factors such as the extent, level, and duration of SCI on cortical alterations in patients, to address the discrepancy among outcomes reported in different publications [14,21]; these studies further demonstrated the complexity of structural and functional changes within the brain post-SCI.
Although the VBM study post-SCI has been extensive, some problems still exist in the current research. First, most of the relevant studies were cross-sectional [22][23][24] , and longitudinal studies 17,25 were relatively few. The brain alterations post-SCI were a progressive process, and the cross-sectional studies targeting a single time point provided limited guidance for the exploration of the changing process. Second, many studies only considered the effects of injury-induced spontaneous plasticity on brain structure. Some of the publications involving rehabilitation treatment only focused on the improvement of motor function in patients 26,27 . Few studies focused on the effects of tissue regeneration in spinal cord on brain cortex. In addition, few investigators have explored the alterations in the brain and spinal cord structure simultaneously post-SCI 17,25,28 . Correspondingly, the potential association between the structural changes of brain and spinal cord is less intensively covered by current research.
The bioactive materials (neurotrophin-3 (NT3)-coupled chitosan) previously developed by our group have been confirmed to elicit robust activation of endogenous neural stem cells (NSCs) in the injured spinal cord of non-human primates. The bioactive materials attracted NSCs to migrate into the lesion area through the slow release of NT3 and to differentiate into neurons [29]. Enhanced angiogenesis and reduced inflammatory responses also provided an excellent microenvironment for the neurons [30]. These biological effects promoted the formation of functional neural networks, which interconnected severed ascending and descending axons, resulting in the recovery of sensorimotor function [31]. Therefore, we used this material to build a regenerative treatment model on the basis of a rhesus monkey SCI model in the current study. The meso-structural alterations in the brain caused by SCI and by treatment with the implanted materials were compared through VBM. This study aimed to observe the differences between the effects of spontaneous plasticity (caused by injury) and tissue regeneration (induced by NT3 treatment) on brain structure after SCI, and to explore the possible association between structural alterations in the brain and spinal cord.
Methods
Animal model preparation. The experimental animals used in this study were eight adult female rhesus monkeys (mean weight: 5 ± 1 kg; age: 5-6 years). Female monkeys were chosen because they have better coordination and less aggression than males and are easy to care for after injury. Four of the animals were randomly designated as the lesion control (LC) group and underwent SCI. The other four were implanted with repair materials immediately post-SCI and were designated as the NT3 group. Model preparation was performed in a sterile operating room by professional surgeons. Anesthesia was induced by intramuscular injection of ketamine hydrochloride solution (10 mg/kg) before the start of surgery and then maintained by intramuscular injection of xylazine hydrochloride (5 mg/kg). A spinal cord hemi-transection and removal model was used. First, the T7-9 segments of the rhesus monkeys (approximately corresponding to the T10-12 segments of the thoracic cord) were located as the injury site. Then, the spinal cord tissue (1 cm long and 2-3 mm wide) was excised 0.5 mm to the right of the posterior central spinal vein to establish the spinal cord hemi-transection and removal model. After topical hemostasis, repair materials of the corresponding size were implanted into the injury area of the NT3 animals, whereas no intervention was performed on the LC animals. Antibiotics (penicillin, 240 mg/day) were administered for 5 days postoperatively to prevent infection. All rhesus monkeys were individually housed in monkey cages at constant temperature and humidity. We ensured that they received adequate food and fresh fruit. Detailed procedures were presented in the previous publication [31].
MRI scanning. All datasets were collected with the Siemens 3.0 T magnetic resonance system (Magnetom Trio Tim, Siemens, Erlangen, Germany). During scanning, a custom-made four-channel primate head birdcage coil was initially used to acquire brain structural images. Then, a human spinal standard array transmitter and receiver coil was used to obtain spinal cord structural images. Considering the stability of the physiological state of the animals, the scan time points were set as follows: healthy period (baseline); and 1, 2, 3, 6, and 12 months post-SCI.
To obtain brain images, animals were intramuscularly injected with ketamine hydrochloride solution (10 mg/kg) prior to the scanning to induce anesthesia. An intramuscular injection of atropine sulfate (0.05 mg/kg) was administered to reduce salivary secretion. Then, venipuncture and continuous administration of a mixed saline solution of propofol (0.25 mg/kg/min) and ketamine (0.03 mg/kg/min) were used to maintain anesthesia [31]. Anesthetized rhesus monkeys were placed inside the scanning cavity in the "sphinx" posture. The head of the animal was placed and fixed in the birdcage coil, and airway patency was guaranteed. Respiration and heart rate were monitored throughout the scanning period to ensure that the respiratory rate remained above 20 breaths/min and the heart rate above 70 beats/min [32]. High-resolution sagittal T1-weighted images were acquired using the 3D magnetization-prepared rapid acquisition gradient echo (MPRAGE) sequence with the following imaging parameters: repetition time (TR) = 1850 ms; echo time (TE) = 4.85 ms; flip angle = 8°; inversion time (TI) = 800 ms; matrix = 256 × 256; field of view (FOV) = 120 × 120 mm; slices = 160; and resolution = 0.47 × 0.47 × 0.5 mm³. Parts of the raw brain images are shown in Fig. S1 (Supporting Information).
To obtain the spinal cord images, anesthetized rhesus monkeys were placed inside the scanning cavity in the supine posture. A proton-density weighted sequence was used to acquire axial spinal cord structural images with the following parameters: TR = 3050 ms; TE = 11 ms; flip angle = 15°; matrix = 320 × 320; FOV = 196 × 196 mm; and resolution = 0.6 × 0.6 × 2.0 mm³. The scanning center was located at the surgical position of the spinal cord, and 27 consecutive axial slices covering the SCI region were collected. A saturation band was placed over the thoracoabdominal region to suppress the susceptibility artifacts caused by the air-tissue interface. Brain and spinal cord scanning took approximately 20 min in total.
Function evaluation. The alterations of sensorimotor function were evaluated by collecting the withdrawal thermal thresholds (WTT) and step height of LC and NT3 animals at baseline, 1 month and 12 months post-SCI.
WTT. The WTT of LC and NT3 animals were collected by stimulating the left (contralateral to the injury) hindlimbs with the DS2-21612-105 semiconductor laser (BWT Beijing Ltd, Beijing, CN). The infrared laser wavelength was 810 nm, and the distance from the fiber port of the ranging laser to the skin was 1 cm, with a spot diameter of about 5 mm. The current value was initially set to 6.5 A and gradually increased with a gradient of 0.75 A, with the maximum current value limited to 11 A. To avoid scalding, the stimulation time for each gradient was not allowed to exceed 30 s. When a clear withdrawal reflex appeared within 30 s, the machine stopped delivering the stimulation and the current value was recorded. Otherwise, the machine terminated the trial and restarted it from the next gradient after a 5-min inter-trial interval.

Step height. The stepping of the right (ipsilateral to the injury) hindlimbs was collected using the Vicon system (Vicon 8, Oxford Metrics Limited Company, Yarnton, UK). A reflective marker was attached to the second metatarsophalangeal joint and its spatial position was recorded in real time by multiple cameras (recording frequency: 100 Hz). The animals were allowed to walk on a treadmill bipedally (with the upper body restrained; speed: 0.5 m/s), and successive stepping (> 5 steps) data were collected for subsequent analysis.
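A minimal sketch of the WTT staircase described above (Python assumed; stimulate_and_watch is a hypothetical callback wrapping the laser control and withdrawal-reflex observation, not part of any real device API):

```python
import time
import numpy as np

def measure_wtt(stimulate_and_watch, start=6.5, step=0.75,
                max_current=11.0, max_duration=30.0, rest=300.0):
    """Ascending-staircase estimate of the withdrawal thermal threshold (WTT).

    stimulate_and_watch(current, max_duration) is a hypothetical callback:
    it delivers the laser stimulus and returns True as soon as a withdrawal
    reflex is observed, False if none occurs within max_duration seconds.
    """
    for current in np.arange(start, max_current + 1e-9, step):
        if stimulate_and_watch(current, max_duration):
            return current        # threshold current (A) at first withdrawal
        time.sleep(rest)          # 5-min inter-trial interval before next gradient
    return None                   # no withdrawal up to the 11 A safety limit
```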
Data processing. Statistical parametric mapping (SPM) version 12 (University College London, London, UK) was used to preprocess the MRI images of the rhesus monkey brain. The VBM preprocessing pipeline for inter-group comparison of brain structures at each time point included the following steps. First, all T1-weighted structural images were registered to the INIA19 template [33] using the rigid-body model so that they shared the same standard space. The registered images were subjected to unified segmentation [34] using the tissue probability maps provided by the template to obtain separate gray matter segments. Then, Diffeomorphic Anatomical Registration Through Exponentiated Lie algebra (DARTEL) [35] was used to average and warp the gray matter images six times to generate a study-specific template (SST). The SST was transformed into the Montreal Neurological Institute space by affine registration. The gray matter images were normalized to the SST using the flow fields generated by DARTEL and were modulated by the Jacobian determinant to preserve the actual tissue volume. Finally, the normalized images were smoothed using Gaussian kernels of 2 mm full width at half maximum (FWHM). The VBM preprocessing pipeline used to explore the spatiotemporal pattern of longitudinal brain structural alterations post-SCI was similar to the procedure described above, with some differences. First, following the initial registration with the template, the anatomical images of each animal at the six time points were longitudinally registered to the generated mid-point average image. Considering the multiple time points in this research, a velocity field divergence (dv) map, rather than a Jacobian determinant, was generated for each time point to represent the rate of volumetric alteration at that time point relative to the mid-point average image; positive values indicated expansion, whereas negative values indicated contraction. Then, the mid-point average images were segmented, and the SST was created. The dv maps were then multiplied by the gray matter segment of the mid-point average images to obtain the GMV alteration images, which indicate the longitudinal alteration tendencies of GMV over time and have proved to be more sensitive in small-sample studies [36]. Afterward, the GMV alteration maps were normalized, modulated, and smoothed (2 mm FWHM) as described above. We explored brain regions with different longitudinal GMV alteration tendencies in the LC and NT3 animals over 1 year and extracted the overall GMV of these brain regions.
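As a rough illustration of the modulation and smoothing steps (Python with nibabel/NumPy/SciPy assumed in place of SPM12's implementation; the file names are hypothetical):

```python
import numpy as np
import nibabel as nib
from scipy.ndimage import gaussian_filter

# Load a warped gray matter segment and the corresponding Jacobian map
gm = nib.load("wc1_subject01.nii")          # hypothetical file names
jac = nib.load("jacobian_subject01.nii")
gm_data = gm.get_fdata()

# Modulation: multiply by the Jacobian determinant so that warping
# preserves the total amount of tissue (volume rather than concentration)
modulated = gm_data * jac.get_fdata()

# Smoothing with a 2 mm FWHM Gaussian; sigma = FWHM / (2*sqrt(2*ln 2)),
# converted from mm to voxels using the image resolution
fwhm_mm = 2.0
voxel_mm = np.array(gm.header.get_zooms()[:3])
sigma_vox = (fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))) / voxel_mm
smoothed = gaussian_filter(modulated, sigma=sigma_vox)

nib.save(nib.Nifti1Image(smoothed, gm.affine), "smwc1_subject01.nii")
```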
A spatially adaptive non-local means filter was used to denoise the spinal cord images while preserving their detail features to the maximum degree37. To measure the cross-sectional spinal cord area (SCA), regions of interest (ROI) were selected at the epicenter of the injury area and 2 cm rostral and 2 cm caudal to it. Considering the irregular structure of the spinal cord in the injury area, automatic extraction of the spinal cord was relatively difficult. Therefore, the SCA was delineated and calculated by manual outlining in Photoshop CC 2018 (Adobe Systems Inc., California, USA). All manual outlining was completed by a medical imaging professional who was not involved in this experiment. After the SCA measurement, the cross-sectional left-right width (LRW) and anterior-posterior width (APW) of the spinal cord were further extracted for subsequent analysis.
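Once a slice has been outlined, the three cross-sectional measures reduce to simple pixel arithmetic. The sketch below assumes a binary mask exported from the manual outline together with the in-plane pixel spacing; the study itself performed these measurements in Photoshop, so the function and variable names here are illustrative assumptions.

```python
import numpy as np

def cord_metrics(mask, pixel_mm):
    """mask: 2D boolean outline of one axial slice (rows: anterior-posterior,
    cols: left-right); pixel_mm: (row_spacing, col_spacing) in mm."""
    rows, cols = np.nonzero(mask)
    sca_mm2 = mask.sum() * pixel_mm[0] * pixel_mm[1]        # cross-sectional area
    apw_mm = (rows.max() - rows.min() + 1) * pixel_mm[0]    # anterior-posterior width
    lrw_mm = (cols.max() - cols.min() + 1) * pixel_mm[1]    # left-right width
    return sca_mm2, lrw_mm, apw_mm
```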
Gait datasets were processed and analyzed using custom software based on Matlab (MathWorks, Natick, MA, USA). Gait cycles were first divided automatically38. The step height of each gait cycle was then calculated and used for the subsequent evaluation of motor functional recovery in both groups.
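A minimal sketch of the step-height computation is given below: given the vertical trajectory of the toe marker (sampled at 100 Hz) and the frame boundaries of each gait cycle, the step height is taken here as the within-cycle range of the marker height. The cycle-division algorithm follows the cited method and is not reproduced; this Python rendering is an assumption about the arithmetic, since the original custom software was Matlab-based.

```python
import numpy as np

def step_heights(z_mm, cycle_bounds):
    """z_mm: 1D vertical marker trajectory (mm); cycle_bounds: [(start, end), ...]
    frame indices of successive gait cycles. Returns one step height per cycle."""
    heights = []
    for start, end in cycle_bounds:
        cycle = z_mm[start:end]
        heights.append(cycle.max() - cycle.min())  # swing peak minus stance minimum
    return np.array(heights)
```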
Statistics. SPM12 and Data Processing & Analysis for Brain Imaging (DPABI)39 were used for the statistical analysis of the brain quantitative parametric maps. The two-sample t-test implemented in DPABI was used to compare the GMV and its alteration between the LC and NT3 groups at each time point. Total intracranial volume was set as a covariate to exclude the interference of heterogeneity in brain size among individuals. Gaussian random field (GRF) theory was used for multiple comparison correction, with the voxel level set to p < 0.005 and the cluster level set to p < 0.05.
To analyze the longitudinal GMV alteration tendencies in the LC and NT3 animals, we adopted the two-step analysis procedure commonly used in fMRI and longitudinal image analysis25. In the first step, we evaluated the images of each animal at all time points using the regression analysis implemented in SPM12 to establish the quadratic trajectory model of GMV against time since injury (t): y(t) = β₀ + β₁t + β₂t². This process generated parametric maps of β₁ and β₂ for each animal, where β₁ represented the linear effect of volume change with time, i.e., the expansion or contraction of volume, and β₂ represented the quadratic effect, i.e., the acceleration or deceleration of expansion or contraction. In the second step, the parametric maps of β₁ and β₂ were entered into the one-sample t-test in DPABI to detect the brain regions with significant volume alterations over time in the LC and NT3 animals. In addition, a two-sample t-test was used to compare the difference between the two groups in terms of the GMV alteration tendency over time. The settings of the covariates and the multiple comparison correction were the same as those described in the previous paragraph.
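The per-animal trajectory fit and the second-level test can be sketched in a few lines. This is an illustrative rendering of the two-step logic, not the SPM12/DPABI code.

```python
import numpy as np
from scipy import stats

def fit_quadratic(t, y):
    """Fit y(t) = b0 + b1*t + b2*t**2 for one animal at one voxel;
    returns (b1, b2), the linear and quadratic effects of time."""
    b2, b1, _b0 = np.polyfit(t, y, deg=2)   # highest-degree coefficient first
    return b1, b2

def group_effect(betas):
    """Second level: one-sample t-test across animals (H0: mean effect = 0)."""
    return stats.ttest_1samp(np.asarray(betas), popmean=0.0)
```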
To analyze the association between structural alterations in the brain and the spinal cord, the correlation analysis in DPABI was conducted to detect brain regions in which significant correlations existed between the GMV changes relative to baseline and the SCA, LRW, and APW alterations relative to baseline. Data from all time points were included in the analysis. GRF theory was used for multiple comparison correction; the voxel level was set to p < 0.001, and the cluster level was set to p < 0.05. SPSS 20 (SPSS Inc., Chicago, IL) software was used for the quantitative analysis of the extracted overall GMV of brain regions, the spinal cord structural parameters, the WTT, and the step height. Regression analysis was used to test the linear or quadratic alterations of GMV, SCA, LRW, and APW with time. The linear effect represented the progressive change in brain or spinal cord structure, and the quadratic effect represented the change in the alteration rate. The Chow breakpoint test was used to compare the differences in the longitudinal spinal cord structural and WTT alteration tendencies over time between the LC and NT3 groups; this step was conducted through an in-house function in SPSS. To avoid individual heterogeneity, the spinal cord structural parameters and WTT values were divided by the corresponding individual baseline values to obtain comparable normalized results. A one-sample Kolmogorov-Smirnov (K-S) test was first used to test the normality of the data distribution. Then, the variability of the brain and spinal cord structural alteration rates between the two groups, and the step height at 1 month and 12 months in each group, were compared (two-sample t-test if normal; two-sample K-S Z test if non-normal). The WTT values at different time points were also compared (paired t-test if normal; Wilcoxon signed-rank test if non-normal; both with Bonferroni correction). To explore whether NT3 treatment had reversed, to some degree, the spinal cord atrophy observed in the LC group, the parameters at 12 months in the LC and NT3 groups were compared with baseline (one-way analysis of variance (ANOVA) with Bonferroni correction). Spearman correlation analysis was used to verify the correlation between the GMV changes relative to baseline and the spinal cord structural alterations relative to baseline. The association between the GMV and the WTT was also analyzed by Spearman correlation analysis. The significance threshold was set to p < 0.05. Results are given as mean ± s.e.m.
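Because the Chow breakpoint test was run through an in-house SPSS routine, the sketch below shows only the standard form of the statistic: compare the residual sum of squares of a pooled fit against separate per-group fits of the same trajectory model. It is a generic rendering under that assumption, not the study's routine.

```python
import numpy as np
from scipy import stats

def _rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

def chow_test(X1, y1, X2, y2):
    """Classic Chow statistic: does one regression fit both groups as well
    as two separate regressions? X1 and X2 share the same k columns."""
    k = X1.shape[1]
    rss_pooled = _rss(np.vstack([X1, X2]), np.concatenate([y1, y2]))
    rss_split = _rss(X1, y1) + _rss(X2, y2)
    df2 = len(y1) + len(y2) - 2 * k
    F = ((rss_pooled - rss_split) / k) / (rss_split / df2)
    return F, stats.f.sf(F, k, df2)
```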
Ethics approval and consent to participate. All experimental procedures were approved by the Biological and Medical Ethics Committee of Beihang University (BM20180046). All experiments were performed in accordance with relevant named guidelines and regulations. The authors complied with the ARRIVE guidelines.
GMV differences between LC and NT3 treatment animals.
There was no significant difference in GMV between the NT3 and LC groups at baseline (Fig. S2, Supporting Information). After SCI, significant differences were found between the GMV of animals in the LC and NT3 groups across a wide range of brain regions, including the cerebral cortex and basal ganglia. The GMV of the LC group in the right caudate nucleus (Cd) was significantly greater than that of the NT3 group at 2 months after injury. This tendency also appeared in the left Cd at 3 months after injury. The differences in Cd were no longer significant at 6 months after injury. Meanwhile, regions where the GMV of the LC group was smaller than that of the NT3 animals appeared for the first time, in the left middle frontal gyrus (MFG) and inferior frontal gyrus (IFG). At 12 months after injury, the GMV of the LC group in the right visual cortex (VC) was significantly greater than that of the NT3 group, whereas the GMV of the LC group in the left insula (Ins) and secondary somatosensory cortex (S2) was significantly lower than that of the NT3 animals (Fig. 1). Table 1 presents the details of the significant clusters.
Longitudinal alterations in brain structure. Within 1 year after injury, a significant linear effect of regional GMV alteration over time was found in the LC and NT3 animals, i.e., the rate of regional GMV alteration in each group was stable; no significant quadratic changes were found. The GMV of the left VC decreased significantly over time in the LC group (Fig. 2a), whereas that of the left superior parietal lobule (SPL) increased significantly over time in the NT3 group (Fig. 2b). The GMV alterations of the LC group in the superior frontal gyrus (SFG), IFG, anterior cingulate gyrus (ACC), and right SPL were significantly lower than those of the NT3 animals (Fig. 2c). Specifically, the GMV of these clusters decreased in the LC group, while it increased in the NT3 group. The results of the longitudinal alterations in GMV in the significantly different brain regions are shown in Fig. 2d. The comparison of the variability of the GMV alteration rate in these brain regions showed that, in the left ACC, the alteration rate of the LC group varied much more than that of the NT3 group (p = 0.6 × 10⁻⁵). No significant differences in variability were found in the left SFG, IFG, or right SPL between the two groups (Fig. 2e). Table 2 presents the details of the significant clusters.
Figure 1. GMV differences between LC and NT3 animals overlaid on the T1 template image. Only clusters with significant differences are displayed (two-sample t-test with GRF correction; voxel level p < 0.005, cluster level p < 0.05). The brain regions where the significant clusters are located are presented above the images; m, month; a positive value in the Z-bar indicates LC > NT3; L indicates the left side.
Figure 2 (caption, in part). (c) Brain regions in which the GMV alteration showed significant differences between the two groups; only clusters with significant differences are displayed (one-sample t-test for intra-group modeling, two-sample t-test for inter-group comparison, with GRF correction; voxel level p < 0.005, cluster level p < 0.05). A positive value in the Z-bar indicates LC > NT3; L indicates the left side. (d) Longitudinal alterations in GMV in the significantly different brain regions. (e) Inter-group comparisons of the variability of the GMV alteration rate in these regions (two-sample K-S Z test).
Longitudinal alterations in spinal cord structure. Significant quadratic effects of spinal cord structural alteration over time were detected (Fig. S3b, Supporting Information). This indicated that the rate of spinal cord structural alteration was not constant, but showed significant acceleration or deceleration. A significant difference was found in the longitudinal spinal cord structural alteration tendencies between the two groups, manifested in the SCA (p = 0.0223) at the epicenter of the injury area and the LRW (p = 0.0004) caudal to the injury area, while no significant difference was found in the APW (Fig. 3c). Inter-group comparison of the variability of the spinal cord structural alteration rate revealed that, in the LRW (p = 0.0047) rostral to the injury area and the APW (p = 0.0348) caudal to it, the alteration rate of the LC group varied much more than that of the NT3 group (Fig. 3d). In the SCA (p = 0.0006) and APW (p = 0.0069) at the epicenter of the injury area, significant differences were found between the LC group at 12 months and baseline, while the NT3 group showed no difference from baseline.
Significance was also found in the epicenter SCA (p = 0.0421) between LC and NT3 animals at 12 months (Fig. 3e).
Correlation between structural alterations in brain and spinal cord.
In the LC and NT3 animals, the distribution of brain regions in which a significant correlation existed between the GMV alterations and the spinal cord structural alterations was relatively wide. For the LC group, these regions included the following: right S1 and SFG (positive correlation with the alteration in rostral SCA); right S1, SFG, MFG, and IFG (positive correlation with the alteration in rostral APW); left lateral globus pallidus (LGP), right Cd, and bilateral putamen (Pu) (positive correlation with the alteration in caudal APW); and right VC (negative correlation with the alteration in caudal APW) (Fig. 4a). For the NT3 animals, these regions included the following: right VC (positive correlation with the alteration in caudal SCA); right primary motor cortex (M1) and S1 (positive correlation with the alteration in rostral LRW); right M1 and SFG (positive correlation with the alteration in caudal SCA); right SFG and VC (positive correlation with the alteration in caudal LRW); and bilateral Cd and thalamus (Th) (positive correlation with the alteration in caudal APW) (Fig. 4b). Table 3 presents the details of the significant clusters. To explore whether the association could be generalized to the overall brain regions in which these clusters were located, we extracted the GMV alterations of the overall brain regions and calculated their correlations with the alterations of the spinal cord structural parameters. Some of the abovementioned significant correlations were retained (Fig. 4c). For the LC group, the GMV alterations in the right S1 and SFG were positively correlated with the alteration in rostral SCA (r = 0.5788, p = 0.0038; r = 0.7724, p < 0.0001); the GMV alterations in the right S1 and SFG were positively correlated with the alteration in rostral APW (r = 0.3841, p = 0.0472; r = 0.6141, p = 0.0020); and the GMV alteration in the left LGP was positively correlated with the alteration in caudal APW (r = 0.4459, p = 0.0244). For the NT3 group, the GMV alteration in the right VC was positively correlated with the alteration in rostral SCA (r = 0.7282, p = 0.0002); the GMV alteration in the right S1 was positively correlated with the alteration in rostral LRW (r = 0.3950, p = 0.0424); the GMV alterations in the right M1 and SFG were positively correlated with the alteration in caudal SCA (r = 0.5297, p = 0.0082; r = 0.6007, p = 0.0025); and the GMV alteration in the bilateral Cd was positively correlated with the alteration in caudal APW (r = 0.5703, p = 0.0054).
Verification of sensorimotor functional alterations after SCI. After SCI, the WTT and step height changed in both the LC and NT3 groups, and correlations existed between these changes and the GMV alterations in some brain regions.
In the LC group, the mean WTT in the healthy period and at 1 month and 12 months post-SCI was 9.6875 ± 0.4719 A, 8.3750 ± 0.2165 A, and 8.0000 ± 0.6847 A, respectively. The thresholds at 1 and 12 months after SCI were significantly lower than that in the healthy period (1 month: p = 0.0354; 12 months: p = 0.0374). The mean threshold in the NT3 group at baseline was 8.9375 ± 0.3590 A, with no difference from that in the LC animals. At 1 and 12 months post-SCI, the thresholds of the NT3 animals were 8.1875 ± 0.1875 A and 8.7500 ± 0.5303 A, respectively, and no significant difference was detected among the three periods (Fig. 5a). After SCI, the longitudinal alteration tendency of the WTT showed a significant difference between the LC and NT3 animals (p = 0.0196): the WTT of the LC group gradually decreased, while NT3 treatment alleviated this tendency (Fig. 5b). Within both groups, a significant positive correlation existed between the overall GMV of Ins and the WTT post-SCI (LC: r = 0.5855, p = 0.0455; NT3: r = 0.6137, p = 0.0338) (Fig. 5c). For the LC group, the mean step height was 3.0815 ± 0.4673 mm at 1 month and 8.1625 ± 2.1739 mm at 12 months after SCI, with no significant difference (p = 0.0802). For the NT3 group, the mean step height was 2.6973 ± 0.4668 mm at 1 month and 15.1245 ± 1.8539 mm at 12 months, indicating a significant increase (p = 0.1 × 10⁻⁵) (Fig. S4).
Figure 4 (caption, in part). Only clusters with significant correlation are displayed (correlation analysis with GRF correction; voxel level p < 0.001, cluster level p < 0.05). The brain regions in which the significant clusters are located are presented above the images or indicated by the arrow; the correlated spinal cord structural parameters are presented below the images; a positive value in the Z-bar indicates LC > NT3; L indicates the left side. (c) Results of the correlation analysis between the overall GMV alterations in the above brain regions and the spinal cord structural alterations; only brain regions with significant correlation are presented (Spearman correlation analysis). LC, lesion control; NT3, neurotrophin-3; SCA, spinal cord area; LRW, left-right width; APW, anterior-posterior width; S1, primary somatosensory cortex.
Discussion
This study described progressive structural alterations in the brain and spinal cord post-SCI and during NT3 treatment in the spinal cord hemi-transection and removal model of rhesus monkeys. NT3-induced regeneration had unique effects on the structure of the brain and spinal cord compared with SCI alone. Significant correlations were found between structural alterations in the brain and spinal cord. Given that the clinical imaging of a patient's injured spinal cord is often constrained by implanted fixtures, the assessment of brain structure may provide additional useful information for monitoring a patient's spinal cord injury/repair status. It should be noted that, after SCI, the structure and function of the central nervous system show a certain degree of spontaneous plasticity rather than remaining invariant40-42. Previous publications have shown that synaptic remodeling, axonal sprouting, and neurogenesis occur in the spinal cord to some extent after injury43,44. Cortical structures are also remodeled, for example through the elimination of old dendritic spines and the formation of new ones45. In this study, spontaneous plasticity was the main reason for the structural alterations of the cerebral cortex in the lesion group. The effects of tissue regeneration were revealed by the changes in the NT3 group, such as the more stable structural alteration rate in the spinal cord and brain compared with that in the lesion animals, which may demonstrate the unique changing process resulting from the implantation of NT3/chitosan.
Although the VC and spinal cord are not directly connected, the VC still plays an important role in the performance of complex motor tasks, as well as in guiding and shaping movement online. Consequently, there have been numerous studies on the functional changes of the VC after SCI. For example, Chen et al.46 reported decreased functional connectivity (FC) within the medial vision network (mVN) of the brain in patients with incomplete cervical cord injury. Hawasli et al.47 found a decrease in FC between the VC and M1, as well as between the VC and the sensory parietal cortex, post-SCI. These findings were consistent with the results of the present study to some extent. The GMV of the left VC in the LC animals gradually decreased over time, which might reflect structural atrophy caused by the long-term decline of functional connectivity in the VC post-SCI.
Differences were found between the LC and NT3 animals in terms of brain structure at each time point after injury. At 12 months after injury, the GMV of the LC group in the right VC was significantly greater than that of the NT3 animals. This phenomenon might be due to the potential role of the VC in pain processing. Preissler et al.48 investigated the GMV alterations in the brains of patients with chronic phantom limb pain after amputation and found that the GMV in the dorsal and ventral VCs of the patients was significantly larger than that in healthy controls. This phenomenon might reflect a visual compensation mechanism for the lack of sensorimotor feedback, as SCI patients might need to rely more on visual information to complete movements. The same kind of inverted mechanism can be observed in blind subjects, i.e., the loss of visual input enhances the development of other sensory abilities. In this experiment, the right hemi-transection of the spinal cord affected the input of sensory information to the right brain. Thus, the differences in the GMV of the VC between the two groups of animals may also be attributed to visual compensation. In addition, significant structural differences were found in the Ins and S2 between the two groups of animals. The Ins and S2 are both main components of the brain network for acute pain49, and patients with NP post-SCI usually show structural atrophy in the Ins50-52 and S250,53. Considering the NT3-induced alleviation of increased sensitivity to hot pain in this study and the significant association between the GMV of Ins and the WTT in both groups, the brain structural alterations in these areas may be related to pain processing after SCI, indicating that the animals in the LC group may have suffered from more severe NP than those in the NT3 group; this possibility can be further explored in future studies.
Structural differences also existed in the Cd, MFG, and IFG between the two groups of animals. The Cd plays an important role in the maintenance of trunk and limb posture, as well as in the accuracy and speed of directional movement54. At the same time, it is part of the basal ganglia, which affect movement through thalamo-cortical projections55. Rao et al.56 investigated the alterations of regional homogeneity (ReHo) in rhesus monkeys post-SCI and found that, compared with the healthy period, the ReHo in the right Cd of rhesus monkeys increased significantly at 12 weeks after SCI. Therefore, the structural differences in the Cd might be due to compensatory cortical activation following motor impairment in the right lower extremity of the animals. The MFG is involved in executive function and emotional regulation57,58, whereas the IFG is involved in resolving the conflict between motor intention and sensory feedback59. Many studies have reported atrophy of the GMV of the MFG and IFG in SCI patients compared with controls14,21,46,60. Our previous publication demonstrated that NT3/chitosan treatment can promote the recovery of motor function in animals31, and these structural alterations in motion-related brain regions may contribute to this process.
In addition, within 1 year after injury, significant differences occurred in the spatiotemporal patterns of longitudinal brain structural alterations between the animals in the LC and NT3 groups. The GMV of the left SPL in the NT3 animals showed a significant linear increase with time. A functional overlap exists between the SPL and the areas involved in motor execution, motor observation, and motor imagination, such as the dorsal premotor area, supplementary motor area (SMA), and M161. Therefore, the alterations in the GMV of the left SPL in this experiment may be due to the following reasons. After the right hemi-transection of the spinal cord, more intense neuronal activity occurred in the left SPL to maintain the motor function of the right lower extremity62. Similar results were reported by Nakanishi et al.63, who found that patients with complete SCI showed greater grip strength control over their entire upper extremity compared with healthy controls; meanwhile, the GMV of the bilateral SPL in patients was larger, and the functional connectivity with M1 was stronger. In addition, the SPL primarily receives fibers from the neighboring S1. The right hemi-transection of the spinal cord may result in a decrease in the afferent input to the right S1, thereby affecting the neuronal activity of the right SPL64. This phenomenon may have caused the significantly greater GMV alteration of the NT3 group in the right SPL compared with that of the LC group, indicating that the ascending sensory conduction pathway in the NT3 animals might have recovered relatively well. The GMV alterations of the left SFG, IFG, and ACC in the NT3 animals were significantly higher than those of the LC animals. The SFG is involved in various cognitive and motor control tasks65,66, whereas the IFG is involved in resolving the conflict between motor intention and sensory feedback59. Structural atrophy in these regions after SCI has been widely reported14,21,46. The ACC is involved in the encoding of emotional information on pain and in descending pain modulation49, and its structural atrophy is common in NP patients post-SCI20,50. Asemi et al.67 indicated that the ACC plays a role in motion control through directional or non-directional interaction with the SMA. The differences in the longitudinal brain structural alteration tendencies in these regions between the two groups indicate that NT3-induced regeneration might have a great impact on motion control and NP post-SCI. The structural alteration rate of the ACC in the NT3 animals was more stable than in the LC group. Considering the wide structural plasticity in the brain during rehabilitative treatment after SCI26,27 and the protective effect of the microenvironment created by NT3 treatment on the spinal cord structure30, this might indicate that NT3 provides some indirect protective effect on brain structure.
Differences in the spatiotemporal patterns of longitudinal spinal cord structural alterations were found between the animals in the LC and NT3 groups within 1 year after injury. SCA atrophy at the epicenter of the injury area in the NT3 animals was significantly lower than that in the LC group, which was consistent with the findings of our previous publication31. The variability of the alteration rate in the rostral LRW and caudal APW in the NT3 animals was also significantly lower than that in the LC animals, which indicated that the spinal cord structural changes in the NT3 group were relatively stable, thereby reflecting the protective effect of the microenvironment created by NT3 treatment on the spinal cord structure30. Notably, compared with the progressive atrophy in the animals of the LC group after injury, the caudal LRW of the NT3 group first increased significantly and then gradually returned to pre-injury levels. This finding might be due to the fact that the descending motor pathways of rhesus monkeys are located on the left and right sides of the spinal cord. After the right hemi-transection of the spinal cord, the motor conduction pathways in the animals of the LC group were damaged below the injury level, and the caudal LRW was also reduced to a certain extent. However, under the protection of the implanted bioactive materials, the inflammatory responses were inhibited in the NT3 animals; moreover, neuronal death was reduced and new neurogenesis was enhanced30, which may have led to the increase in the left-right width of the spinal cord in the early stage after injury. Then, with the development of new nerves, the ineffective structural connections gradually disappeared, and only a portion of completely myelinated axons capable of effective neurotransmission was preserved31. This might be the reason for the gradual reduction of the left-right width of the spinal cord in the later stage after injury.
A significant correlation existed between the GMV alterations and the spinal cord structural alterations in the LC and NT3 animals after injury, which may be attributed to the close structural and functional association between the brain and the spinal cord. Brain regions significantly correlated with the spinal cord structural alterations in both groups included the right S1, SFG, Cd, and VC. In this study, no significant alterations in the GMV of S1 were detected after injury, which was consistent with the findings of previous publications18,68,69. However, in the LC group, we detected significant correlations between the alterations in the right S1 and the rostral SCA and APW, whereas in the NT3 group, a significant correlation was found between the alterations in the right S1 and the rostral LRW. This finding may indicate the feasibility of evaluating the structural status of the spinal cord through S1. One of the reasons for the significant correlation between the GMV alterations in the right SFG and the spinal cord structural alterations may be the fact that the SFG holds a complete somatotopic representation of body movements through direct connections with M1 and the spinal cord70. Therefore, structural alterations in the spinal cord may affect GMV changes in the SFG. The Cd and Pu are collectively referred to as the neostriatum, and the LGP as the paleostriatum (old striatum). As important conduction centers of the extrapyramidal system, they cooperate with the pyramidal system to manage and control somatic skeletal muscle movement. Therefore, the significant correlation between the Cd and the spinal cord structural alterations may be due to their structural association through the descending motor pathways56.
The specific brain regions in the NT3 animals that were significantly correlated with the spinal cord structural alterations included the right M1 and Th. One of the basic functions of M1 is to control voluntary movement71. The right hemi-transection of the spinal cord destroys the descending corticospinal tract, which in turn affects neuronal activity in the contralateral M1. We expected to detect significant alterations in GMV in the left M1, but we did not find any significant difference in GMV in the bilateral M1, either intra- or inter-group. According to the review of Nardone et al.60, the results of studies on gray matter atrophy in M1 post-SCI are relatively divergent. Wrigley et al.22 investigated patients with complete thoracic spinal cord injury and found that a significant decrease in GMV could be detected in the M1 of patients with complete SCI. They suggested that the difficulty in detecting significant GMV alterations in M1 in previous VBM studies might be due to mixed injury severity among patients, i.e., both complete and incomplete. In the current experiment, we only hemi-transected the spinal cord; thus, the injury severity was markedly lower than in the case of complete SCI. This might be one of the reasons why we did not detect a significant alteration. At the same time, we detected a significant correlation between the GMV alterations in the right M1/S1 and M1/SFG and the alterations in rostral LRW and caudal SCA in the NT3 group; however, the peak voxels of these clusters were located at the right S1 and SFG, respectively. As pointed out in the previous discussion, S1 and the SFG play an important role in the processing of sensorimotor tasks18,65. M1 is a key node in the brain regional network responsible for voluntary movement; moreover, it receives and integrates sensory information from different parts of the body to produce appropriate responses72,73. Therefore, the clusters that structurally span M1, S1, and the SFG may reflect the complex mixed effects of sensorimotor processing post-SCI.
The Th is the most important site for converting and integrating sensory afferent information. The input from somatosensory fibers, such as the spinothalamic tract, is received at the Th; the input is then converted and projected to S118. Although a clear association between the Th and spinal cord structural alterations was detected in the NT3 animals, no significant changes in the GMV of the Th were found after injury. This finding was consistent with the results of previous publications that did not report significant alterations in Th structure post-SCI69. Chen et al.18 suggested that, apart from the classical somatosensory pathway, another pathway may participate in the process of sensory-related brain changes after SCI, namely, the cortico-ponto-cerebellar pathway76. Sensory function is reorganized after SCI. Although sensory input from the spinothalamic tract to the Th is limited, the Th may still play an important role in sensory transmission through the cortico-ponto-cerebellar pathway18. Therefore, the GMV of the Th may not change significantly. Further investigations of the GMV alterations in the overall brain regions showed that, for both groups, a significant correlation was preserved in the right S1 and SFG. Therefore, the GMV changes in S1 and the SFG may have a certain predictive value for the spinal cord structural alterations.
This study has some limitations. First, the research was based on a specific injury model (the spinal cord hemi-transection and removal model of rhesus monkeys), and the sample size was relatively small, which differs from the more complex and diverse injury scenarios in the clinic. Therefore, experimental verification is needed to determine whether the conclusions drawn in this study can be translated to clinical practice. Second, a voxel size of 0.5 mm³ was employed in the study, which is common in similar research; however, a smaller voxel size would yield more accurate results. In addition, only gray matter structural alterations were studied; white matter could be included in subsequent studies to reflect changes in brain structure more comprehensively. Third, the implantation of NT3/chitosan made it difficult to extract the spinal cord automatically at the epicenter of the injury area; however, the manual outlining process ensured the accuracy of the spinal cord measurements. Finally, pathophysiological verification may further explain the structural alterations in both the brain and spinal cord. The combination of MRI and pathophysiological results will help to elucidate the potential biological significance of the extensive MRI characterization of the microenvironment created by NT3/chitosan-induced regenerative treatment.
Conclusion
In conclusion, we showed that within 1 year post-SCI, significant differences existed between the effects of NT3-induced regeneration and of injury alone on the brains of animals. This was reflected in the differences in gray matter structure at each time point and in the differences in the gray matter structural alteration rate within 1 year after injury. Moreover, differences were found in brain regions related to the integration of motor information and to neuropathic pain. In particular, the decreased GMV of the insula was significantly associated with increased pain sensitivity. Spinal cord structural changes were also influenced by injury and NT3 treatment, and these alterations were significantly associated with the GMV changes in S1, the SFG, and other regions. This study may further elucidate the process of structural alterations in the brain after SCI and during its regeneration. The results provide a basis for revealing the unique effects of NT3/chitosan-induced regeneration on brain structure after SCI.
Data availability
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Cost-Reducing R & D Investment , Occupational Choice , and Trade ∗
In this paper, I construct a two-country general equilibrium model in which oligopolistic firms export goods and undertake cost-reducing R&D investment. Each country imposes tariffs. When the cost of education is sufficiently high, an increase in the tariff rate decreases the level of R&D investment. However, when the cost of education is sufficiently small, an increase in the tariff rate increases the level of R&D investment.
Introduction
Trade liberalization has been occurring in recent decades. Wacziarg and Welch (2008) showed that the number of countries with open trade policies increased from 22% of all countries in 1960 to 46% in 2000.¹ However, over the last two decades, R&D investment has increased sharply and the wage gap between skilled and unskilled workers has widened sharply. Using data for the US, according to Braun (2008), the ratio of industrial R&D expenditures to GDP increased from about 1% in 1979 to 1.43% in 1990 and 1.7% in 2004. Moreover, the wage gap between skilled and unskilled workers has increased in many countries recently. Acemoglu (2002) pointed out that the percentage of US workers with a college education increased sharply from 6% in 1939 to 28% in 1996. He also pointed out that in the US, the college premium increased from about 0.4 in 1980 to about 0.6 in 1995.
On the basis of the above data, this paper has three objectives. First, it investigates the relationship between the level of cost-reducing R&D investment and trade liberalization. When trade liberalization occurs, do firms increase the level of R&D investment? Few papers investigate the relationship between R&D investment and trade liberalization empirically. Funk (2003) concluded that US manufacturing firms that sell their products to the US market decrease their R&D investment when trade liberalization occurs, whereas US manufacturing firms with foreign sales increase their R&D investment. Scherer and Huh (1992) showed that the average US high-tech firm reduces its R&D investment in the short run when trade liberalization occurs.
The second and third objectives of this paper are to consider whether the number of skilled workers and the wage gap between skilled and unskilled workers increase when trade liberalization occurs. Many researchers have investigated the relationship between the wage gap and trade liberalization. Wood (1994), Leamer (1996), and Kurokawa (2010) argued that there is a positive relationship between the wage gap and trade liberalization. However, when trade liberalization occurs, does the wage gap between skilled and unskilled workers widen, and does the number of skilled workers increase?
In this paper, I construct a two-country general equilibrium model in which oligopolistic firms export goods and undertake cost-reducing R&D investment. The governments of the countries impose tariffs on imported goods. The ability of individuals is heterogeneous. Individuals choose either to become skilled workers by paying the cost of education or to remain unskilled workers, which involves no cost. In Braun (2008) and Morita (2009), there are two types of workers, skilled and unskilled, but their numbers are exogenously given. In this paper, the number of skilled workers is determined endogenously through individuals' choices.
In this paper, I obtain two results regarding the relationship between the tariff rate and the level of R&D investment. One is that a decrease in the tariff rate reduces the level of R&D investment when the cost of education is sufficiently high; however, a decrease in the tariff rate raises the level of R&D investment when the cost of education is sufficiently low. This paper also shows that a decrease in the tariff rate increases the wage gap between skilled and unskilled workers when the cost of education is sufficiently high; when the cost of education is sufficiently low, the effect of decreasing the tariff rate on the wage gap is ambiguous. Furthermore, this paper shows that a decrease in the tariff rate increases the number of skilled workers when the cost of education is sufficiently high; when the cost of education is sufficiently low, the relationship between trade liberalization and the number of skilled workers is ambiguous.
Many papers have investigated the relationship between trade liberalization and cost-reducing R&D investment. Braun (2008) and Haaland and Kind (2008) constructed simple models of international oligopoly in which consumers are homogeneous agents; they do not consider the labor market, for simplification. These papers conclude that trade liberalization increases R&D investment. In contrast, Morita (2009) constructed a general equilibrium model by incorporating the labor market into the model of Braun (2008) and Haaland and Kind (2008) and showed that trade liberalization decreases R&D investment. In this paper, the cost of education determines the effects of trade liberalization on the level of R&D investment; this paper therefore reconciles the results of these papers.
The remainder of this paper is organized into four sections. The next section presents the basic structure of the model. Section 3 obtains the equilibrium condition of this model. I conclude in Section 4.
The model
There are two countries, Home and Foreign, indexed by l ∈ {H, F}; these countries are symmetric. The population size in each country is equal to L. There are two types of workers: skilled and unskilled. Individuals choose to become either a skilled or an unskilled worker; this determines the numbers of skilled and unskilled workers. There are two types of goods, X and Y. Good X is chosen to be the numeraire. Good X and good Y can be produced in both countries. The firm producing good Y in the Home country is named Firm H, and the firm producing good Y in the Foreign country is named Firm F. I assume that Firm H and Firm F compete strategically by using their product quantities, that is, they engage in Cournot competition. The governments of both countries levy tariffs on their imports of good Y, and the tariff rate is denoted by τ.
Individual
The utility function of individual i in each country is given by

u_{i,l} = x_{i,l} + a q_{i,l} − (b/2) q_{i,l}², (1)

where x_{i,l} is the consumption of good X in country l by individual i, q_{i,l} is the consumption of good Y, and a and b are positive parameters. The budget constraint of consumer i in country l is as follows:

E_{i,l} = x_{i,l} + p_l q_{i,l}, (2)

where p_l is the price of good Y and E_{i,l} is the expenditure in country l by consumer i. From the first-order condition of the individual, I obtain the following inverse demand function:

p_l = a − b q̄_l, (3)

where q̄_l denotes the average consumption level of good Y in country l. Therefore, the inverse demand function in country l is as follows:

p_l = a − (b/L) Q_l, (4)

where Q_l denotes the aggregate consumption level of good Y in country l.
Occupational choice
I assume that individuals can choose their occupation: skilled or unskilled worker. i_l denotes the ability of individual i_l in country l, with i_l ∈ [0, 1]; ability is distributed uniformly over the unit interval. When individual i_l ∈ [0, γ] becomes a skilled worker, he or she pays no education cost. When individual i_l ∈ [γ, 1] decides to become a skilled worker, he or she has to pay D(i_l − γ) units of good X. However, when individual i_l wants to become an unskilled worker, he or she pays nothing. Each individual has one unit of labor and supplies it inelastically. When individual i_l > γ becomes a skilled worker, his or her income becomes w_l − D(i_l − γ), where w_l denotes the wage rate of a skilled worker in country l. However, when individual i_l > γ becomes an unskilled worker, his or her income is the wage rate of an unskilled worker in country l, w_{U,l}. I assume that an individual î_l is indifferent between becoming a skilled worker and an unskilled worker.
Then, from the indifference condition w_l − D(î_l − γ) = w_{U,l}, the threshold of ability î_l becomes:

î_l = γ + (w_l − w_{U,l})/D. (5)

Thus, î_l L individuals become skilled workers and (1 − î_l)L individuals become unskilled workers. When the cost of education D is zero, the economy is similar to those of Braun (2008) and Haaland and Kind (2008). On the contrary, when the cost of education D is infinite, the proportion of skilled workers is γ; then, this economy is the same as that of Morita (2009).
Good X sector
Production of one unit of good X requires one unit of unskilled labor in both countries. I assume that perfect competition prevails in the good X market and that good X can be traded freely. Thus, the wage rate for unskilled workers in both countries equals unity, that is, w_{U,H} = w_{U,F} = 1.
Good Y sector
Each firm produces good Y and conducts cost-reducing R&D investment to decrease its marginal cost of production. Production of good Y requires both skilled and unskilled workers: production of one unit of good Y requires θ units of skilled labor and α(k_l) ∈ [0, ᾱ] units of unskilled labor in country l, where k_l denotes the number of skilled workers allocated to cost-reducing R&D investment in country l. I assume that ∂α(k_l)/∂k_l < 0 and ∂²α(k_l)/∂k_l² ≥ 0. The profit of Firm H is then given by

π_H = [p_H − θw_H − α(k_H)] y_{HH} + [p_F − θw_H − α(k_H) − τ] y_{HF} − w_H k_H, (6)

where y_{t,s} denotes the output of firm t that is sold in country s. Hence, the good Y market clearing conditions in the two countries are as follows:

y_{HH} + y_{FH} = Q_H, (7)
y_{HF} + y_{FF} = Q_F. (8)

The left-hand sides of these equations represent the supply of good Y, and the right-hand sides represent the demand for good Y. Substituting the inverse demand function (4), together with (7) and (8), into the profit function (6), I rewrite the profit function of Firm H in terms of outputs and R&D alone (equation (9)). Firms maximize their profits by simultaneously choosing the quantity of good Y in the two markets and the level of cost-reducing R&D investment; profit maximization leads to first-order conditions (10)-(12) for the output levels and R&D investment of Firm H. I assume that the firms take the wage rates of skilled and unskilled workers as constant. In the same way, the first-order conditions for the output levels of Firm F are (13) and (14). Because I assume that the Home and Foreign countries are symmetric and that both firms have the same unit cost function, Firms H and F produce the same output level. Thus, the level of R&D investment, the wage rate for skilled workers, and the proportion of skilled workers are the same in both countries: k_H = k_F ≡ k, w_H = w_F ≡ w, and î_H = î_F ≡ î. From (10), (11), (13), and (14), the output levels of Firm H and Firm F are given by (15) and (16). I assume that the parameter a is sufficiently large that the output levels of the firms are positive. Because the purpose of this paper is to investigate the effects of tariffs, I focus on the case in which positive amounts of good Y are traded between the countries.
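Since the displayed first-order conditions did not survive extraction, the following LaTeX sketch reconstructs what (10)-(12) should look like under the linear inverse demand (4) and the profit function (6) above. It is a reconstruction from the surrounding definitions, not a verbatim restoration of the source's equations.

```latex
\begin{align}
\frac{\partial \pi_H}{\partial y_{HH}} &= a - \frac{b}{L}\left(2y_{HH} + y_{FH}\right)
  - \theta w_H - \alpha(k_H) = 0, \tag{10}\\
\frac{\partial \pi_H}{\partial y_{HF}} &= a - \frac{b}{L}\left(2y_{HF} + y_{FF}\right)
  - \theta w_H - \alpha(k_H) - \tau = 0, \tag{11}\\
\frac{\partial \pi_H}{\partial k_H} &= -\alpha'(k_H)\left(y_{HH} + y_{HF}\right)
  - w_H = 0. \tag{12}
\end{align}
```

The first two conditions are the standard Cournot markup conditions for the domestic and export markets (the export market bearing the tariff τ), and the third equates the marginal cost saving from R&D to the skilled wage.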
Labor market equilibrium conditions
The demand for skilled workers is derived from R&D investment and the production of good Y. The demand for unskilled workers comes from the production of good X and good Y.
Because the supply of skilled workers is îL and that of unskilled workers is (1 − î)L, the labor market equilibrium conditions in country H are given by:

îL = k_H + θ(y_{HH} + y_{HF}), (17)
(1 − î)L = x_P^H + α(k_H)(y_{HH} + y_{HF}), (18)

where x_P^H denotes the labor demand of the good X sector in country H.
Equilibrium
From (12), (15), and (16), I can obtain the wage rate for skilled workers, (19). From (19), I obtain the output level of Firm H, (20) and (21). From (5) and (19), I can obtain the threshold of ability î. Inserting (19), (20), and (21) into the skilled-worker equilibrium condition (17), I can obtain the excess labor demand function H(k, τ). When H(k, τ) = 0, I can obtain the optimal level of R&D investment. Hereafter, I assume that α(k) = ᾱe^{−k} and γ = θ/(ᾱL) for simplicity. At k = 0, there is positive excess labor demand when the cost of education is relatively high, that is, D > ᾱL/θ, and τ < τ_1 for a critical tariff rate τ_1. However, at k = 0, there is negative excess labor demand when the cost of education is relatively low, that is, D < ᾱL/θ, and τ < τ_1. In addition, the excess labor demand function has a negative slope at k = 0 when τ > τ_2 for a second critical value τ_2. The stability condition of this equilibrium is τ < τ̄ for a third critical value τ̄. Comparing τ_1 with τ_2 and τ̄, I can obtain τ_1 < τ̄ < τ_2 when 3b > 2ᾱ²L holds.² Hereafter, I focus on τ < τ̄. Then, I can obtain the following proposition (see the Appendix for the proof).
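The closed-form expressions for H(k, τ) and the critical tariff rates did not survive extraction, but the computational content of the equilibrium condition is easy to state: find the root of H(·, τ) and read the comparative statics off the root's dependence on τ. The sketch below is generic; `excess_labor_demand` is a placeholder the reader would replace with the model's actual H.

```python
from scipy.optimize import brentq

def equilibrium_k(excess_labor_demand, tau, k_lo=1e-6, k_hi=50.0):
    """Solve H(k, tau) = 0 for k on [k_lo, k_hi]; H must change sign there."""
    return brentq(lambda k: excess_labor_demand(k, tau), k_lo, k_hi)

def dk_dtau(excess_labor_demand, tau, h=1e-4):
    """Central-difference slope of k*(tau); its sign is the comparative
    static established in the proposition below."""
    return (equilibrium_k(excess_labor_demand, tau + h)
            - equilibrium_k(excess_labor_demand, tau - h)) / (2 * h)
```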
PROPOSITION 1. When D > ᾱL/θ and τ_1 < τ < τ̄, or when D < ᾱL/θ and τ < τ_1, there exists a unique and positive equilibrium level of R&D investment.

The excess labor demand function H(k, τ) is depicted in Figures 1(a) and 1(b) for D > ᾱL/θ. When D > ᾱL/θ and τ < τ_1 (Figure 1(a)), the intercept of H(k, τ) is positive; however, when D > ᾱL/θ and τ_1 < τ < τ̄ (Figure 1(b)), the intercept of H(k, τ) is negative. When D > ᾱL/θ and τ < τ̄ < τ_2 (Figures 1(a) and 1(b)), the slope of H(k, τ) at k = 0 is positive. Therefore, when D > ᾱL/θ and τ_1 < τ < τ̄, there exists a unique and positive level of R&D investment (Figure 1(b)). When the cost of education D is sufficiently small, the excess labor demand function H(k, τ) is depicted in Figures 2(a) and 2(b). When D < ᾱL/θ and τ < τ_1 (Figure 2(a)), the intercept of H(k, τ) is negative; however, when D < ᾱL/θ and τ_1 < τ < τ̄ (Figure 2(b)), the intercept of H(k, τ) is positive. When D < ᾱL/θ and τ < τ̄ < τ_2 (Figures 2(a) and 2(b)), the slope of H(k, τ) at k = 0 is positive. Therefore, when D < ᾱL/θ and τ < τ_1, there exists a unique and positive level of R&D investment (Figure 2(a)). The relationship between the tariff rate and the level of R&D investment is given by the following proposition (see the Appendix for the proof).

PROPOSITION 2. When D > ᾱL/θ, a decrease in the tariff rate decreases the level of R&D investment; when D < ᾱL/θ, a decrease in the tariff rate increases the level of R&D investment.
Figure 3 depicts the case in which the cost of education is sufficiently high, that is, D > ᾱL/θ. Then, a decrease in the tariff rate rotates the excess labor demand function H(k, τ) around k = ln(ᾱL/(θD)) < 0 in a counterclockwise direction. Therefore, a decrease in the tariff rate decreases the level of R&D investment. However, Figure 4 depicts the case in which the cost of education is sufficiently low. Then, a decrease in the tariff rate rotates the excess labor demand function H(k, τ) around k = ln(ᾱL/(θD)) > 0 in a counterclockwise direction. Therefore, a decrease in the tariff rate increases the level of R&D investment.
The effects of the tariff rate can be divided into three effects: the trade effect, the wage effect, and the occupational choice effect. First, through the trade effect, a decrease in the tariff rate increases cost-reducing R&D investment: when the tariff rate decreases, both firms increase their output levels and the marginal benefit of R&D investment rises; therefore, both firms increase their level of R&D investment, and the demand for skilled workers increases. Second, through the wage effect, a decrease in the tariff rate reduces cost-reducing R&D investment: when the tariff rate decreases, both firms increase their output, so the demand for skilled workers increases and the wage rate of skilled workers rises; the higher wage rate of skilled workers raises the cost of R&D and decreases the level of R&D investment. Finally, through the occupational choice effect, a decrease in the tariff rate increases cost-reducing R&D investment: when the tariff rate decreases, both firms increase their output levels, so the demand for skilled workers increases and the wage rate of skilled workers rises; consequently, the income of skilled workers increases and the number of skilled workers increases. Then, the number of skilled workers hired in R&D activities increases, and the demand for skilled labor increases.
Intuitively, when the cost of education is sufficiently high, individuals are less likely to become skilled workers. Then, skilled workers are scarce and the wage effect is sufficiently strong: the wage effect overcomes the sum of the trade effect and the occupational choice effect. Therefore, when the cost of education is sufficiently high, a decrease in the tariff rate decreases the level of R&D investment. When the cost of education is sufficiently low, individuals easily become skilled workers and the supply of skilled workers is abundant. Then, the wage effect is sufficiently weak: the wage effect is smaller than the sum of the trade effect and the occupational choice effect. Hence, when the cost of education is sufficiently low, a decrease in the tariff rate increases the level of R&D investment.
The next proposition shows the relationship between the tariff rate and the output level of good Y (see Appendix for the proof).
PROPOSITION 3. When D < ᾱL
θ , a decrease in the tariff rate increases the output level.
When the tariff rate decreases, there is a direct effect and an indirect effect. The direct effect is that when the tariff rate decreases, the cost of exporting decreases and the firms increase their exports and output levels. The indirect effect is that when the tariff rate decreases, the level of R&D investment changes: an increase in the level of R&D investment raises productivity and the output level, whereas a decrease in the level of R&D investment reduces productivity and the output level. Therefore, when the cost of education is sufficiently low, a decrease in the tariff rate increases the level of R&D investment, as shown in Proposition 2, and increases the output level. However, when the cost of education is sufficiently high, a decrease in the tariff rate decreases the level of R&D investment; then, the direct effect opposes the indirect effect, and the relationship between the tariff rate and the output level is ambiguous.
When the tariff rate decreases, does the wage rate of skilled workers increase, and does the number of skilled workers increase? As for the relationship between the tariff rate and the number of skilled workers, I can obtain the following proposition.

PROPOSITION 4. When D > ᾱL/θ, a decrease in the tariff rate increases the wage gap between skilled and unskilled workers and increases the number of skilled workers.
Proof. The effect of the tariff rate on the wage rate of skilled workers can be divided into two parts: a direct effect and an indirect effect. Differentiating (19) with respect to τ, I obtain the following equation:

dw/dτ = (∂w/∂τ)|_k + (∂w/∂k)(∂k/∂τ). (26)

The first term of (26) has a negative value. From the stability condition, ∂w/∂k < 0. When D > ᾱL/θ holds, ∂k/∂τ > 0, as shown in Proposition 2; then, the second term of (26) has a negative value and dw/dτ < 0. Therefore, when the cost of education is sufficiently high, a decrease in the tariff rate increases the wage rate of skilled workers. When D < ᾱL/θ holds, ∂k/∂τ < 0, as shown in Proposition 2; then, the second term of (26) has a positive value and the sign of dw/dτ is ambiguous. Therefore, when the cost of education is sufficiently low, the relationship between the tariff rate and the wage rate of skilled workers is ambiguous. I explain the above proposition intuitively. The first term of (26) represents the direct effect, and the second term represents the indirect effect. The direct effect is that when the tariff rate decreases, given the level of R&D investment, the cost of exporting decreases and both firms increase their volume of exports and their output levels; then, the demand for skilled workers increases. Hence, the direct effect has a positive effect on the wage rate of skilled workers. The indirect effect is that the level of R&D investment affects the price of good Y. When the cost of education is sufficiently high, a decrease in the tariff rate decreases the level of R&D investment; then, the cost of producing good Y increases and the relative price of good Y increases, so the demand for skilled workers increases relatively. Thus, when the cost of education is sufficiently high, the indirect effect also has a positive effect. Therefore, when the cost of education is sufficiently high, a decrease in the tariff rate increases the wage rate of skilled workers. However, when the cost of education is sufficiently small, a decrease in the tariff rate increases the level of R&D investment and decreases the cost of producing good Y; then, the relative price of good Y decreases, the demand for unskilled workers increases, and the wage rate of skilled workers decreases relatively. Therefore, when the cost of education is sufficiently small, the indirect effect has a negative effect, and the relationship between the tariff rate and the wage rate of skilled workers is ambiguous.
Conclusion
In this paper, I constructed a two-country general equilibrium model in which the ability of individuals is heterogeneous and oligopolistic firms produce goods and undertake cost-reducing R&D investment. There are two main results. The first result concerns the relationship between trade liberalization and the wage gap: trade liberalization increases the wage gap between skilled and unskilled workers when the cost of education is sufficiently high; when the cost of education is sufficiently low, the relationship between trade liberalization and the wage rate of skilled workers is ambiguous. The second result concerns the relationship between trade liberalization and the level of R&D investment. This paper investigated the effects of trade liberalization on R&D investment through three effects: the trade effect, the wage effect, and the occupational choice effect. First, the trade effect is that a decrease in the tariff rate increases R&D investment; this effect is the focus of Braun (2008) and Haaland and Kind (2008), who concluded that a decrease in the tariff rate increases R&D investment. Second, the wage effect is that a decrease in the tariff rate decreases R&D investment: when the tariff rate decreases, both firms increase their output levels, so the demand for skilled workers and their wage rate rise, which raises the cost of R&D. Morita (2009) considered these two effects, the trade effect and the wage effect, in a general equilibrium model that incorporates the labor market into the model of Braun (2008) and Haaland and Kind (2008), and concluded that the wage effect dominates the trade effect for any tariff rate.
The results of this paper separate into two cases. First, when the cost of education is sufficiently low, the trade effect plus the occupational choice effect dominates the wage effect, and a decrease in the tariff rate increases cost-reducing R&D investment; this case is similar to Braun (2008) and Haaland and Kind (2008). Second, when the cost of education is sufficiently high, the wage effect dominates the trade effect plus the occupational choice effect, and a decrease in the tariff rate decreases cost-reducing R&D investment; this case is similar to Morita (2009). Therefore, the cost of education determines the effects of trade liberalization on the level of R&D investment.
Compared with Morita (2009), this paper provides a long-term analysis, whereas Morita (2009) provides a short-term analysis. In the short term, it is difficult for workers to acquire skills; therefore, the ratio of skilled to unskilled workers is constant. In the long term, however, the ratio of skilled to unskilled workers is endogenous. Together, this paper and Morita (2009) suggest that trade liberalization decreases the level of R&D investment in the short term and increases the level of R&D investment in the long term.
A.1 Proof of Proposition 1
This proof proceeds in four steps. In the first step, I analyze the value of H(k, τ) at k = 0. The second step investigates the value of H(k, τ) as k approaches infinity. The third step examines the gradient of H(k, τ) at k = 0. Finally, the fourth step shows that the equilibrium is stable when τ ≤ τ̄. In addition, I show that τ_1 < τ̄ < τ_2.
First step: Investigating the value of H(0, τ), I obtain the following lemma.
Proof. Recall that I assumed α(k) = ᾱe^(−k) and γ = θ/(ᾱL). The intercept of H(k, τ) is given by H(0, τ). Therefore, g(k, τ) is an increasing function of k. Then, when k is sufficiently small, the sign of (A.3) is negative; however, when k is sufficiently large, the sign of (A.3) is positive. Thus, when τ > τ_2, the value of ∂H(k, τ)/∂k at k = 0 is negative.
Fourth step: In this step, I show that the equilibrium is stable when τ ≤ τ_3 and that τ_1 < τ_3 < τ_2. To analyze the stability condition, I differentiate the wage rate with respect to k; the resulting configurations are illustrated in Figures 2(a) and 2(b). When D < ᾱL/θ and τ < τ_1 in Figure 2(a), the intercept of H(k, τ) has a negative value. However, when D < ᾱL/θ and τ_1 < τ < τ_3 in Figure 2(b), the intercept of H(k, τ) has a positive value. When D < ᾱL/θ and τ_3 < τ < τ_2 in Figures 2(a) and 2(b), the slope of H(k, τ) at k = 0 has a positive value. Therefore, when D < ᾱL/θ and τ < τ_1, there exists a unique and positive level of R&D investment, as in Figure 2(a).
Expression of c-erbB-2 protein in papillary thyroid carcinomas.
c-erbB-2 protein expression was investigated immunohistochemically in frozen thyroid tissue specimens from 42 patients using a polyclonal sheep antibody. c-erbB-2 immunoreactivity was detected in 12 out of 17 papillary carcinomas, while no c-erbB-2 protein immunostaining was seen in cases of follicular adenoma (five cases), follicular carcinoma (five cases) or medullary carcinoma (one case), nor in cases of non-neoplastic tissue, including normal thyroid tissue from tumour-bearing glands. RNA was extracted from 51 thyroid tissue samples from 34 of the above patients, and c-erbB-2 mRNA was analysed by slot-blot hybridisation. c-erbB-2 mRNA was detectable in all samples, but papillary carcinomas and lymph node metastases showed significantly higher levels of c-erbB-2 mRNA compared to non-neoplastic tissue. The present demonstration of positive c-erbB-2 immunostaining in papillary thyroid carcinomas is contradictory to previous findings on formalin-fixed, paraffin-embedded material, and emphasises the importance of tissue quality for c-erbB-2 protein detection.
During recent years much has been done to elucidate the role of growth factors and oncogenes in the growth and function of normal thyroid follicular cells and in the development and maintenance of thyroid tumours.
The c-erbB-2 oncogene encodes a 185 kilodalton transmembrane glycoprotein with tyrosine kinase activity (Akiyama et al., 1986). This protein is closely related to, but yet distinct from, the EGF receptor, encoded by the c-erbB proto-oncogene (Schechter et al., 1985). Recently, a ligand has been proposed for the putative c-erbB-2 growth factor receptor (Lupu et al., 1990). The c-erbB-2 oncogene has been found to be amplified and/or overexpressed at the mRNA or protein level in a number of human adenocarcinomas, those of the breast being the most extensively studied (Slamon et al., 1987; 1989; van de Vijver et al., 1987; Venter et al., 1987). c-erbB-2 protein overexpression is currently being evaluated as a potential risk factor in breast cancer patients (Lovekin et al., 1991; O'Reilly et al., 1991; Winstanley et al., 1991).
An analysis of c-erbB-2 and c-erbB mRNA expression in thyroid tumours by RNA slot blot hybridisation demonstrated two- to three-fold higher levels of c-erbB-2 and c-erbB RNA in three out of five papillary carcinomas and three papillary lymph node metastases, as compared to non-tumour tissue (Aasland et al., 1988). The higher levels of expression of c-erbB and c-erbB-2 mRNA in the papillary carcinomas were much lower than the levels associated with gene amplification. Southern blot analysis showed no amplification or rearrangements of the c-erbB-2 gene (Aasland et al., 1988). The results were pursued in a comprehensive analysis of the c-erbB and c-erbB-2 proto-oncogenes in human thyroid neoplasia, using Southern blot hybridisation to detect gene amplification or rearrangement, and immunocytochemistry to detect overexpression of the c-erbB-2 oncoprotein (Lemoine et al., 1990a). The Southern blot study showed no abnormality of either structure or gene copy number of the c-erbB or c-erbB-2 proto-oncogenes in 38 thyroid tumour samples, including 17 papillary carcinomas. Immunohistochemical staining of paraffin sections of 106 tumour specimens (24 papillary carcinomas) from pathological archives showed no cases of overexpression of the c-erbB-2 proto-oncogene (Lemoine et al., 1990a).
We have now extended these investigations by studying c-erbB-2 protein expression immunohistochemically in a series of thyroid tissue samples using fresh, frozen tissue, considering that a modest expression might be lost due to tissue processing when immunostaining is performed on formalin-fixed, paraffin-embedded material. In a number of the same tumours, c-erbB-2 mRNA expression has been analysed by slot blot hybridisation.
Materials and methods
Tissue samples
Fresh thyroid tissue was obtained from 42 patients subjected to either partial or total thyroidectomy between January 1990 and May 1991 at Haukeland University Hospital, Bergen, Norway. Immediately after excision, samples were cut from the surgical specimen(s), and each sample divided in two parts. One part was frozen directly in liquid nitrogen and stored at -80°C for use in nucleic acid analysis.
The other part, intended for frozen sections, was immersed in Histocon transport medium (Histolab, Gothenburg, Sweden), and the following freezing procedure was completed within one hour. The pieces of tissue were embedded in Tissue Tek (Miles Scientific, Naperville, IL) and frozen on cryostat bolts in isopentane precooled to liquid nitrogen temperature. The samples were stored at -80°C.
The biopsies were classified according to conventional histopathological criteria, as defined by the WHO (Hedinger, 1988), and the lesions included in this study are listed in Table I. Two or more samples were obtained from each patient, the series comprising 115 samples from 42 patients in all. The histology of all specimens was examined by one of the authors (LAA).

Immunohistochemistry
Sections were cut 6 μm thick in a freezing microtome, fixed in cold acetone for 10 min, and air dried. After a short rinse in PBS (0.01 M phosphate-buffered 0.15 M saline, pH 7.3), sections were treated with 1% hydrogen peroxide (H2O2) in methanol for 30 min to block endogenous peroxidase activity, then rinsed in PBS again. After incubation for 30 min at room temperature with normal rabbit serum (Dakopatts, Copenhagen, Denmark) diluted 1:10 in PBS, sections were incubated overnight (18-22 h) at 4°C with a polyclonal sheep antibody to human c-erbB-2 oncoprotein (OA-11-854, batch no. 02846, Cambridge Research Biochemicals, Cambridge, UK). The antibody was used at a dilution of 1:500. After rinsing in PBS, sequential incubations with secondary antibody (biotinylated rabbit anti-sheep IgG from Vector Laboratories, Burlingame, CA) at dilution 1:100 and AB-complex (10 μg ml-1 avidin and 2.5 μg ml-1 biotin-labelled peroxidase from the Vectastain ABC kit, Vector Laboratories) followed, 30 min each, at room temperature. The sections were immersed in DAB colouring solution (0.03% 3,3'-diaminobenzidine tetrahydrochloride [Sigma, St. Louis, Missouri] and 0.02% H2O2 in PBS) for 5 min, counterstained with haematoxylin, dehydrated and mounted. All dilutions of antibodies, normal serum and AB-complex were made with PBS containing 5% BSA (bovine serum albumin) as the diluent.
A negative control was included for each specimen, exchanging the primary antibody with PBS in duplicate sections (Figure 1f). Positive control specimens were used routinely to check the procedure. The immortalised human thyroid epithelial cell line SGHTL-34 (Whitley et al., 1987; Aasland et al., 1990) was used as a positive control. In our laboratory, this cell line has been shown to express c-erbB-2 (G.O. Ness and JRL, personal communication). Incubation of the primary antibody with the corresponding c-erbB-2 peptide (OP-11-3540, Cambridge Research Biochemicals) before applying it to sections was done to ensure antibody specificity (Figure 1e). All controls gave satisfactory results.
To confirm the results, a second, monoclonal antibody to c-erbB-2 protein (OP15, lot no. 7900305, Oncogene Science, Manhasset, NY) was used. The staining procedure was performed as described above, except that the sections were incubated with the primary antibody at dilution 1:40 for 1 h, and the secondary antibody was biotinylated rabbit anti-mouse IgG (Dakopatts) diluted 1:200.

Figure 1. Immunohistochemical localisation of c-erbB-2 protein in frozen sections of papillary thyroid carcinomas, by the avidin-biotin-peroxidase method employing a polyclonal sheep antibody, as described in Materials and methods. a, Primary tumour with membrane reactivity; b, primary tumour with cytoplasmic reactivity; c, primary tumour without reactivity; d, lymph node metastasis with membrane and cytoplasmic reactivity; e and f, sections from the same tumour as in d, showing e, absence of staining following preincubation of antibody with peptide immunogen, and f, control in which primary antibody was omitted. Reduced by one third from x560.
RNA extraction
RNA was extracted from 51 biopsies taken from 34 of the above patients, providing the following cases: six samples of histologically normal thyroid tissue (one sample taken from a patient with a follicular adenoma, the others taken from patients with papillary carcinomas), five diffuse hyperplasias, 12 colloid goitres, five follicular adenomas, four follicular carcinomas (three primary tumours and one metastasis), 11 papillary carcinomas (primary tumours) and eight lymph node metastases from papillary carcinomas.
Tissue samples were evacuated from liquid nitrogen, instantly minced and lysed in 4 M guanidinium isothiocyanate, as described by Aasland et al. (1988). Ultracentrifugation through a cesium chloride cushion was carried out at 27,000 r.p.m. for 18 h. Pelleted RNA was further processed as described (Aasland et al., 1988).
Slot blot analysis
RNA was denatured at 56°C for 15 min in 20% formaldehyde and 30% 20x SSC (standard saline citrate) and applied to nylon membranes (NY 13N by Schleicher & Schuell, Dassel, Germany) in a vacuum slot blot apparatus (Schleicher & Schuell). For each case, a set of 2, 6 and 12 μg total RNA was applied, if a sufficient amount of RNA was available. RNA concentrations were determined spectrophotometrically. As an internal control, some samples were included on all slot blot membranes. Prehybridisation and hybridisation were carried out in 50% formamide at 42°C in a hybridisation oven (Hybaid Ltd., Teddington, UK) as described (Sambrook et al., 1989). DNA fragments were prepared from plasmids and 32P-labelled ([α-32P]dCTP from Amersham, Aylesbury, UK) to high specific activity using the oligo-labelling technique (Feinberg & Vogelstein, 1983). The probe was a purified fragment of the cloned human c-erbB-2 gene, a partial c-erbB-2 cDNA 1.6 kbp EcoRI fragment of pCER204. Blots were washed to high stringency (65°C, 0.2x SSC with 0.1% NaPPi and 0.1% SDS [sodium dodecyl sulphate]) and exposed on X-ray films (XAR 5 by Kodak, Rochester, NY) in the presence of intensifying screens at -80°C.
After stripping of the slot blot membranes in 0.1% SDS at 90°C for 7 min, hybridisation with a 28S rRNA probe was performed according to the same procedure. The 28S rRNA probe was the 1.4 kbp BamHI fragment of pA (Dr I.L. Gonzales, personal communication). Analysis of autoradiograms was performed by densitometric scanning using an Enhanced Laser Densitometer (LKB Products, Bromma, Sweden). The relative levels of c-erbB-2 mRNA expression were estimated from the scanning results as the amount of radioactive probe hybridised to each RNA sample relative to the amount of 28S rRNA in the same sample.
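For illustration, the normalisation just described, dividing the probe signal by the 28S rRNA signal of the same sample, can be written in a few lines of Python; the sample names and readings below are hypothetical and not data from the study.

```python
def relative_expression(probe_signal, rrna_signal):
    """Normalise a c-erbB-2 densitometric signal by the 28S rRNA
    signal of the same sample, so that loading differences between
    slots cancel out."""
    if rrna_signal <= 0:
        raise ValueError("28S rRNA signal must be positive")
    return probe_signal / rrna_signal

# Hypothetical densitometer readings (arbitrary absorbance units)
samples = {
    "papillary_carcinoma": (1.8, 0.9),
    "normal_thyroid": (0.4, 1.0),
}
for name, (probe, rrna) in samples.items():
    print(name, round(relative_expression(probe, rrna), 2))
```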
Immunohistochemistry
Employing the polyclonal sheep antibody OA-11-854, no c-erbB-2 protein immunostaining was seen in cases of colloid goitre, diffuse hyperplasia, follicular adenoma or follicular carcinoma, in the only medullary carcinoma included in the series, nor in normal thyroid follicular epithelium, including microscopically normal tissue from tumour-bearing thyroid glands.
In the papillary carcinoma group, c-erbB-2 immunostaining was present in tumour samples from 12 of the 17 patients (Figure 1). Details on these patients are given in Table II.
The positively stained samples included 11 out of 14 primary tumours. From three patients, tissue from the primary tumour was not available, but samples from lymph node metastases were obtained. In two of these cases, no immunoreactivity was found, while the third patient had immunopositive epithelium in all three nodes available. Lymph node metastases were provided from five of the 14 primary tumours. Lymph nodes with positive c-erbB-2 immunostaining also had immunopositive primary tumours (four cases). In contrast, neither the primary tumour nor the metastasis showed c-erbB-2 immunoreactivity in one case.
Immunoreactivity was confined to tumour cells. The staining was specific and reproducible, although staining intensity was uniformly rather weak. In most cases, specific membrane staining as well as a weak cytoplasmic positivity of tumour cells were seen. Two cases showed a predominantly cytoplasmic reaction, while two cases demonstrated almost exclusively membrane staining. Two or more positive samples from the same patient (from different parts of the tumour or from metastases) showed the same staining pattern. The staining was homogeneously distributed in the sections. Apart from tumour cells, staining was seen in the colloid of thyroid follicles, independent of tissue diagnosis. Colloid staining was in most cases relatively strong.
Corresponding results were obtained using the monoclonal antibody OP15, with the exception that a weak cytoplasmic reaction was seen in one papillary carcinoma (patient no. 125) which did not show immunoreactivity when examined with the polyclonal antibody OA-11-854. With the monoclonal antibody, only cytoplasmic staining was present, while membrane staining as well as cytoplasmic reactivity were detectable with the polyclonal antibody.
RNA analysis
Relative amounts of c-erbB-2 mRNA from the RNA slot blot hybridisation analysis are presented in Figure 2. An example of autoradiograms of slot blot membranes is given in Figure 3. The hybridisation experiments showed that papillary carcinomas and lymph node metastases expressed higher levels of c-erbB-2 mRNA relative to non-neoplastic tissue (normal thyroid tissue, diffuse hyperplasia and colloid goitre) (t-test: P < 0.001).
In two samples showing increased c-erbB-2 mRNA expression, the corresponding protein was not detected with the immunohistochemical assay (patients no. 91 and 114). The other cases showing increased c-erbB-2 mRNA expression also showed c-erbB-2 protein product expression immunohistochemically.
Three patients (no. 102, 117 and 122) showed c-erbB-2 protein immunoreactivity even though their c-erbB-2 mRNA levels were not different from those of the reference group (Figure 2).
In the papillary carcinoma group, two lymph node metastases (patients no. 136 and 139) demonstrated the lowest levels of c-erbB-2 mRNA. These were the only two papillary metastatic deposits included in the mRNA analysis in which no c-erbB-2 protein was detected.
Discussion
In this study we provide evidence that c-erbB-2 protein expression is a feature of papillary thyroid carcinomas, in contrast to non-neoplastic thyroid tissue. We have investigated c-erbB-2 protein expression immunohistochemically in a series of thyroid tissue samples, using fresh, frozen tissue obtained directly during surgery, and a polyclonal antibody to p185c-erbB-2. Lemoine and coworkers were not able to detect any c-erbB-2 overexpression in 24 papillary carcinomas using formalin-fixed, paraffin-embedded material from pathological archives (Lemoine et al., 1990a). Natali et al. (1990), however, demonstrated c-erbB-2 expression in two out of nine thyroid carcinomas (no further classification given) using frozen tissue. For breast cancer tissue, where c-erbB-2 expression has been most widely studied, the discrepancy between results from formalin-fixed and frozen material has been emphasised by Slamon et al. (1989), who reported that in virtually every case there was some reduction in immunohistochemical staining with polyclonal antiserum when comparing fixed to frozen tissue. In tumours expressing very high levels, the protein was visible by immunostaining in tissue prepared by either method. The problem was more significant in samples expressing moderate levels of protein, since many of them completely lost their immunohistochemical reactivity during formalin fixation and paraffin embedding. Slamon concludes that the problem of loss of antigenic immunoreactivity during fixation can be overcome by using frozen tissue samples (Slamon et al., 1989).
The difference between fixed and frozen material for c-erbB-2 oncoprotein detection has also been demonstrated for bladder cancer. Recently, a novel monoclonal antibody to c-erbB-2 protein, NCL-CB11, has been introduced and reported to be highly effective for immunohistochemistry using paraffin-embedded material. Even with this antibody, however, the authors cannot exclude that some immunopositive cases might be lost due to fixation.
Although several studies, using different antibodies, have demonstrated a significant correlation between c-erbB-2 gene amplification and immunohistochemical staining of c-erbB-2 protein (Venter et al., 1987; Slamon et al., 1989), evidence of overexpression has also been detected in breast tumours in which the gene copy number was determined to be single (Slamon et al., 1989). In thyroid tumours, no c-erbB-2 gene amplification has been found (Aasland et al., 1988; Lemoine et al., 1990a), and it is therefore crucial to have optimal tissue quality and to exclude protein-deteriorating procedures when looking for c-erbB-2 protein expression. In the present study, the staining intensity was uniformly rather weak, although specific and reproducible. The majority of immunopositive cases showed specific membrane staining as well as a weaker and more diffuse cytoplasmic reaction. Two cases demonstrated almost exclusively cytoplasmic staining, of stronger degree, and two cases exhibited membrane staining only. Membrane staining has been regarded as specific for c-erbB-2 protein expression in breast carcinomas, and this staining pattern is associated with gene amplification and of prognostic significance in these tumours (Lovekin et al., 1991).
In other human tumours, however, different staining patterns have been observed. Diffuse cytoplasmic immunoreactivity was predominant in c-erbB-2 protein-positive cases of pancreatic cancer (Hall et al., 1990). In transitional cell carcinomas of the urinary bladder, cytoplasmic reactivity predominated, even in tumours with high levels of gene amplification (Coombs et al., 1991). The significance of cytoplasmic staining has not yet been established. de Potter et al. (1989) showed that the cytoplasmic reacting protein was a protein of molecular weight 155 kD, different from the known p185c-erbB-2. In bladder tumours with a high c-erbB-2 gene copy number, mRNA expression and cytoplasmic staining, high levels of the 155 kD protein were detected (Coombs et al., 1991). The close correlation of c-erbB-2 gene amplification and cytoplasmic immunoreactivity in transitional cell tumours argues that the cytoplasmic product does represent a form of the c-erbB-2 protein, possibly reflecting some alteration in processing or stability of the oncoprotein or its mRNA. In the present study, cytoplasmic as well as membrane staining was abolished when the primary antibody was preincubated with the immunising c-erbB-2 peptide.
Our series includes five follicular carcinomas. None of these stained positively in the immunohistochemical assay, but the number of cases is too small to draw any conclusions on c-erbB-2 expression in follicular thyroid carcinomas. The only medullary carcinoma included was also negative. Roncalli and coworkers found that none out of 28 medullary thyroid carcinomas displayed c-erbB-2 immunoreactivity using the monoclonal antibody N3, but the authors comment that fixation regimes might have adversely affected tumour immunoreactivity (Roncalli et al., 1991).
c-erbB-2 mRNA was detected in all the samples analysed, and the levels of c-erbB-2 mRNA in papillary carcinomas and lymph node metastases were higher than the levels observed in non-neoplastic tissue, comprising the groups normal thyroid tissue, colloid goitre and diffuse hyperplasia. This observation is consistent with the previous findings by Aasland et al. (1988). The higher levels of c-erbB-2 mRNA in the papillary carcinoma group were much lower than the levels associated with gene amplification, in agreement with what would be expected from the findings of Lemoine et al. (1990a). It should, however, be kept in mind that our tumour samples contain variable amounts of non-neoplastic tissue. The contribution of c-erbB-2 mRNA and rRNA from non-neoplastic tissue to the total RNA isolated must therefore vary from specimen to specimen. Since rRNA is most likely increased in proliferating tumour cells compared with non-neoplastic cells, the increase in c-erbB-2 mRNA expression found in the papillary carcinoma samples may be underestimated. Consequently, the true increase in c-erbB-2 mRNA per tumour cell may be higher than we report. The significant increase in c-erbB-2 mRNA expression in the papillary carcinoma group adds support to the protein expression data from the immunohistochemical analysis. The evidence of c-erbB-2 overexpression in the papillary carcinomas is, however, clearer from the immunohistochemical results than from the RNA slot blot hybridisation experiments. This is in agreement with Coombs and coworkers, who reported that 40% of tumours without c-erbB-2 amplification or overexpression detectable by Northern or Western analysis showed positive c-erbB-2 immunostaining (Coombs et al., 1991). The conclusion from their work on bladder cancer is that immunocytochemistry may be the most sensitive assay for the detection of c-erbB-2 expression. Slamon et al. (1989) also found that immunohistochemical analysis of c-erbB-2 protein in frozen tissue sections correlated best with all other analytic data for both breast and ovarian cancer.
The function of the c-erbB-2 protein in cell growth and development remains unknown. The protein has tyrosine kinase activity and is postulated to be a transmembrane receptor, for which a ligand has not yet been fully established. The homology and close relationship to the EGF receptor suggest that the c-erbB-2 protein may convey potent growth-stimulatory signals. In human tumours, overexpression, and not mutation, of the c-erbB-2 gene seems to contribute to tumour development (Slamon et al., 1989; Lemoine et al., 1990b). In thyroid tumours also, no activating point mutations of the transmembrane-encoding region of the c-erbB-2 gene have been revealed (Lemoine et al., 1990a).
Investigations of the role of growth factors and oncogenes in the development of thyroid tumours have revealed that activation of ras oncogenes (Suarez et al., 1990) and autocrine production of IGF-I (Williams et al., 1988) occur in the early stages of thyroid follicular cell tumourigenesis, and TGF-β expression is associated with the malignant stages (Jasani et al., 1990). The nuclear oncogenes c-myc and c-fos have been found to be expressed at varying levels in both non-tumour and tumour tissue, but neither rearrangements nor amplifications of these oncogenes have been observed in several studies (Aasland et al., 1988; Terrier et al., 1988; Wyllie et al., 1989). The PTC and trk tyrosine kinase oncogenes are activated in a number of papillary carcinomas (Fusco et al., 1987; Bongarzone et al., 1989), the PTC oncogene being a rearranged form of the ret oncogene (Grieco et al., 1990).
The present work identifies c-erbB-2 protein expression as a feature of papillary thyroid carcinomas, extending the list of human adenocarcinomas expressing this protein. The increased expression of c-erbB-2 protein in papillary thyroid carcinomas is not due to gene amplification, and no other genetic aberration explaining this increased expression has been identified. However, the large proportion of papillary thyroid carcinomas expressing the c-erbB-2 protein indicates a biologically significant mechanism involving this receptor system in papillary carcinomas. Further investigations will be needed to establish the biological significance and prognostic implications of c-erbB-2 protein expression in these thyroid tumours.
LPS's Criterion for Incompressible Nematic Liquid Crystal Flows
In this paper we derive the LPS criterion for the breakdown of classical solutions to the incompressible nematic liquid crystal flow, a simplified version of the Ericksen-Leslie system modeling the hydrodynamic evolution of nematic liquid crystals in $\mathbb R^3$. We show that if $0<T<+\infty$ is the maximal time interval for the unique smooth solution $u\in C^\infty([0,T),\mathbb R^3)$, then $|u|+|\nabla d|\notin L^q([0,T],L^p(\mathbb R^3))$, where $p$ and $q$ satisfy the Ladyzhenskaya-Prodi-Serrin condition: $\frac{3}{p}+\frac{2}{q}=1$ and $p\in(3,+\infty]$.
The above system is a simplified version of the Ericksen-Leslie model, which reduces to the Oseen-Frank model in the static case, for the hydrodynamics of nematic liquid crystals developed during the period of 1958 through 1968 [2,3,10]. It is a macroscopic continuum description of the time evolution of the material under the influence of both the flow field u(x, t) and the director field d(x, t), a macroscopic description of the microscopic orientation configurations of rod-like liquid crystals. Roughly speaking, system (1.1) is a coupling between the non-homogeneous Navier-Stokes equation and the transported flow of harmonic maps. Due to the physical importance and mathematical challenges, the study of nematic liquid crystals has attracted many physicists and mathematicians. The mathematical analysis of liquid crystal flows was initiated by Lin [11] and Lin and Liu [12,13]. For any bounded smooth domain in R^2, Lin, Lin and Wang [14] have proved the global existence of Leray-Hopf type weak solutions to system (1.1) which are smooth everywhere except on finitely many time slices (see [5] for the whole space). The uniqueness of weak solutions in two dimensions was studied in [15,20]. Recently, Hong and Xin [6] studied the global existence for the general Ericksen-Leslie system in dimension two. However, the global existence of weak solutions to the incompressible nematic liquid crystal flow equation (1.1) in three dimensions with large initial data is still an outstanding open question.
In this paper, we are interested in an optimal characterization of the maximal interval T that is scaling invariant. We will consider the short-time classical solution to (1.1) and address the Ladyzhenskaya-Prodi-Serrin criterion that characterizes the first finite singular time. The local well-posedness of the Cauchy problem for system (1.1) is rather standard (see [5,7,14]). More precisely, if the initial velocity u_0 ∈ H^s(R^3, R^3) with ∇ · u_0 = 0 and d_0 − a ∈ H^{s+1}(R^3, S^2), then system (1.1) has a unique classical solution (u, d) satisfying (1.4) for any 0 < T < T_0. At present, there is no global-in-time existence theory for classical solutions to system (1.1). Thus, if we let T* > 0 be the maximal value such that (1.4) holds with T_0 = T*, we would like to characterize such a T*. Motivated by the famous work [1], Huang and Wang [7] obtained a BKM-type blow-up criterion (see also [16]). However, the techniques involved in this paper are quite different from those in [7], and we believe the result may have its own interest. When d is a constant vector field, system (1.1) becomes the incompressible Navier-Stokes equation. Recall that scaling-invariant spaces have played an important role in the regularity theory for the Navier-Stokes equation. Leray [9] first established the existence of a global weak solution for the Navier-Stokes equation, now called a Leray-Hopf weak solution, which satisfies an energy inequality. Although the regularity issue for Leray-Hopf weak solutions remains open, it is well known that both uniqueness and smoothness hold for the class of weak solutions with u ∈ L^q([0, T], L^p(R^3)) for some p ∈ (3, +∞] and q ∈ [2, +∞) satisfying the Ladyzhenskaya-Prodi-Serrin condition (1.5); this was established through the works of Prodi [18], Serrin [19], and Ladyzhenskaya [8] in the 1960s. For the endpoint case p = 3, q = +∞, only very recently did Escauriaza et al. [4] finally prove smoothness for weak solutions u ∈ L^∞([0, T], L^3(R^3)). Motivated by these results for the Navier-Stokes equation, we use scaling considerations for system (1.1) to identify which spaces may be critical. We observe that system (1.1) is invariant under the transformation

u_λ(x, t) = λ u(λx, λ²t), P_λ(x, t) = λ² P(λx, λ²t), d_λ(x, t) = d(λx, λ²t), λ > 0.

Our main result is formulated as the following theorem: if 0 < T* < +∞ is the maximal existence time of the unique classical solution (u, d), then |u| + |∇d| ∉ L^q([0, T*], L^p(R^3)), where (p, q) satisfies the Ladyzhenskaya-Prodi-Serrin condition (1.5) and p ∈ (3, +∞].
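To see why (1.5) is the natural condition, one can check how the mixed norm behaves under the scaling above; the following computation is a standard sketch and not taken verbatim from the paper.

```latex
% Scaling of the mixed norm under u_\lambda(x,t) = \lambda u(\lambda x, \lambda^2 t)
\begin{align*}
\|u_\lambda\|_{L^q([0,T/\lambda^2];L^p(\mathbb{R}^3))}
 &= \Big( \int_0^{T/\lambda^2} \Big( \int_{\mathbb{R}^3}
      \lambda^p |u(\lambda x,\lambda^2 t)|^p \,dx \Big)^{q/p} dt \Big)^{1/q} \\
 &= \lambda^{\,1-\frac{3}{p}-\frac{2}{q}}\,
    \|u\|_{L^q([0,T];L^p(\mathbb{R}^3))},
\end{align*}
% which is scale invariant precisely when 3/p + 2/q = 1.
```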
Notations. We denote by L^p and W^{m,p} the usual Lebesgue and Sobolev spaces on R^3 and set H^m = W^{m,2}, with norms ||·||_{L^p}, ||·||_{W^{m,p}} and ||·||_{H^m}, respectively. For the sake of conciseness, we do not distinguish the functional spaces for scalar-valued and vector-valued functions. We assume C is a positive generic constant throughout this paper that may vary from place to place, and the integration domain R^3 will always be omitted without ambiguity. Finally, <·, ·> denotes the inner product in L^2(R^3).
Remark 1.1. It is standard that condition (1.3) is preserved by the flow. In fact, first notice that the divergence-free property of the velocity field u is propagated from the initial assumption ∇ · u_0 = 0; this can be easily and formally observed by applying ∇· to the momentum equation. Moreover, applying the maximum principle to the equation for |d|², one also easily sees that |d| = 1 under the initial assumption |d_0| = 1.
Proof of Theorem 1.1
We prove our theorem in this section. Without loss of generality, we assume ν = 1. The first observation, which reduces many complicated computations, is that we only need to carry out the lowest-order and highest-order energy estimates for the solutions. This is motivated by the interpolation inequality (2.1), which can be easily proved by combining Young's inequality and the Gagliardo-Nirenberg inequality.
Now we are in a position to prove our Theorem 1.1.
Proof of Theorem 1.1. First of all, we note that if p = +∞, Theorem 1.1 has been proved in [16], so let us concentrate on p ∈ (3, +∞). For classical solutions to (1.1)-(1.2), one has the basic energy law (2.2). Let us concentrate on the case s = 3. For each multi-index α with |α| ≤ 3, applying ∂_x^α to (1.1a) and ∂_x^{α+1} to (1.1b), multiplying them by ∂_x^α u and ∂_x^{α+1} d respectively, and then integrating over R^3, we obtain the identity (2.4), where the I_{|α|,i} denote the corresponding terms, which will be estimated as follows. For |α| = 1 in (2.4), integrating by parts and using the divergence-free condition ∇ · u = 0 and (2.2), we arrive at (2.5). Combining Cauchy's inequality, Sobolev's inequality and the fact that |∇d|² = −d · △d (since |d| = 1) yields the remaining bounds up to (2.8). Inserting the estimates (2.5)-(2.8) into (2.4) for |α| = 1, we arrive at (2.9).

Next we derive an estimate for ||u||_{L^p}^p and ||∇d||_{L^p}^p. First, we multiply (1.1a) by |u|^{p−2}u and integrate over R^3 to obtain (2.10). Observe that

△d · ∇d = ∇ · (∇d ⊙ ∇d − (1/2)|∇d|² I), (2.11)

where ∇d ⊙ ∇d denotes the 3 × 3 matrix whose (i, j)-th entry is ∂_i d · ∂_j d for 1 ≤ i, j ≤ 3. Taking div of (1.1a), we arrive at an elliptic equation for the pressure; an application of the L^p-estimate for elliptic systems to this equation shows that there exists P(t) such that the pressure term in (2.10) is controlled. Similarly, applying (2.11) together with Sobolev's inequality and Cauchy's inequality, we bound the director term. Putting the above two inequalities into (2.10), we obtain (2.12). Next, differentiating (1.1b) with respect to x gives (2.13). We multiply (2.13) by |∇d|^{p−2}∂_x d and integrate over R^3 to obtain (2.14). Combining (2.12) and (2.14) and applying Gronwall's inequality, we obtain (2.17). We add (2.17) to (2.9) and, by Gronwall's inequality once more, conclude (2.19) for all t ∈ [0, T].

Next, consider |α| = 3. For I_{3,1}, we need a Moser-type commutator inequality (see [17, p. 43]), from which (2.20) follows. For I_{3,2}, we apply (2.11) and (2.19) to obtain (2.21). Similarly to the proof of (2.21), I_{3,3} and I_{3,4} can be bounded as well.
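For completeness, the divergence identity (2.11) used in the proof can be verified componentwise by a one-line computation; the sketch below is standard and holds for any smooth map d (summation over the repeated index j).

```latex
% Componentwise verification of (2.11)
\begin{align*}
\big[\nabla\cdot(\nabla d \odot \nabla d - \tfrac12 |\nabla d|^2 I)\big]_i
 &= \partial_j(\partial_i d \cdot \partial_j d)
   - \tfrac12 \partial_i(\partial_j d \cdot \partial_j d) \\
 &= \partial_j\partial_i d \cdot \partial_j d
   + \partial_i d \cdot \triangle d
   - \partial_i\partial_j d \cdot \partial_j d
 = (\triangle d \cdot \nabla d)_i .
\end{align*}
```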
Nonparametric estimation of multivariate extreme-value copulas
Extreme-value copulas arise in the asymptotic theory for componentwise maxima of independent random samples. An extreme-value copula is determined by its Pickands dependence function, which is a function on the unit simplex subject to certain shape constraints that arise from an integral transform of an underlying measure called the spectral measure. Multivariate extensions are provided of certain rank-based nonparametric estimators of the Pickands dependence function. The shape constraint that the estimator should itself be a Pickands dependence function is enforced by replacing an initial estimator by its best least-squares approximation in the set of Pickands dependence functions having a discrete spectral measure supported on a sufficiently fine grid. Weak convergence of the standardized estimators is demonstrated and the finite-sample performance of the estimators is investigated by means of a simulation experiment.
Introduction
Extreme-value copulas arise in the asymptotic theory for componentwise maxima of independent random samples. They provide the dependence structures for the class of multivariate extreme-value or max-stable distributions. More generally, they constitute a flexible class of models for describing positive association; see Gudendorf and Segers (2010) for a survey.
In this paper we will focus on the nonparametric estimation of extreme-value copulas in general dimensions. In particular, we aim at multivariate extensions of the rank-based estimators in Genest and Segers (2009) and of the projection methodology in Fils-Villetard et al. (2008).
Let X_i = (X_{i,1}, . . . , X_{i,p}), i ∈ {1, . . . , n}, be an independent random sample from a p-variate, continuous distribution function F with margins F_1, . . . , F_p and copula C, that is,

F(x) = C(F_1(x_1), . . . , F_p(x_p)),

where F(x) = P(X ≤ x) (componentwise inequalities), F_j(x_j) = P(X_j ≤ x_j), and C is the joint distribution function of (F_1(X_1), . . . , F_p(X_p)). We are interested in nonparametric estimation of C in the model where the margins F_1, . . . , F_p are completely unknown (but continuous) and C is known to be an extreme-value copula.
Here and further on, we frequently identify ∆_{p−1} with {(w_1, . . . , w_{p−1}) ∈ [0, 1]^{p−1} : w_1 + · · · + w_{p−1} ≤ 1}. The extreme-value copula C can be expressed in terms of A via

C(u) = exp{ (∑_{j=1}^p log u_j) A(w) }, with w_j = log u_j / ∑_{k=1}^p log u_k;

see Pickands (1981) and Zhang et al. (2008). The function A is convex and satisfies max(w_1, . . . , w_p) ≤ A(w) ≤ 1 for all w ∈ ∆_{p−1}. Nonparametric estimators for A were initially developed in Pickands (1981), with modifications in Deheuvels (1991) and Hall and Tajvidi (2000), and in Capéraà et al. (1997). These estimators will be referred to as the Pickands and CFG estimators, respectively; see Section 3 for definitions. In the previously cited papers, the marginal distributions were assumed to be known. The more realistic case of unknown margins has been treated in the bivariate case in Jiménez et al. (2001) for a submodel and in Genest and Segers (2009) for the general model. Multivariate extensions have been proposed in Zhang et al. (2008) and Gudendorf and Segers (2011) for the case of known margins. In Section 3, we will provide a proof of the convergence of these estimators in the case of unknown margins estimated by the empirical distribution functions, thus generalizing the main results of Genest and Segers (2009) to arbitrary dimensions. As in Kojadinovic and Yan (2010) and Genest et al. (2011), the estimators could also be used as a starting point for goodness-of-fit tests, but for brevity we do not pursue this here. Finally, a new type of nonparametric estimator has been proposed in Bücher et al. (2011) for the bivariate case.
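As an illustration of the representation above, the following minimal Python sketch evaluates an extreme-value copula from a Pickands dependence function, using the symmetric logistic (Gumbel) family as an example; the function names are ours, and the full-coordinate parametrization of A is an equivalent convention rather than the paper's notation.

```python
import numpy as np

def logistic_A(w, alpha=0.5):
    """Pickands dependence function of the symmetric logistic (Gumbel)
    model: A(w) = (sum_j w_j^(1/alpha))^alpha, alpha in (0, 1]."""
    w = np.asarray(w, dtype=float)
    return np.sum(w ** (1.0 / alpha)) ** alpha

def ev_copula(u, A=logistic_A):
    """Evaluate an extreme-value copula from its Pickands dependence
    function via C(u) = exp((sum_j log u_j) * A(w)), where
    w_j = log u_j / sum_k log u_k lies on the unit simplex."""
    u = np.asarray(u, dtype=float)
    s = np.sum(np.log(u))        # negative for u in (0, 1)^p
    w = np.log(u) / s            # nonnegative weights summing to one
    return np.exp(s * A(w))

# Example: trivariate Gumbel copula at u = (0.5, 0.6, 0.7)
print(ev_copula([0.5, 0.6, 0.7]))
```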
In the proofs of the asymptotic normality of the Pickands and CFG estimators, a certain expansion of the empirical copula process due to Stute (1984) and Tsukahara (2005) plays a crucial role. The second-order derivatives of extreme-value copulas typically exhibit explosive behaviour near the corners of the hypercube, violating the assumptions in the two papers just cited. In Segers (2011), it was shown that the same expansion continues to hold under much weaker conditions on the partial derivatives. In Section 2, these issues are considered for multivariate extreme-value copulas.
As the estimators for A considered here fail to be Pickands dependence functions themselves, it is natural to ask how to enforce the shape constraints on such functions in the estimation procedure. In dimension p = 2, it is sufficient to ensure that A is convex and takes values in the range max(w) ≤ A(w) ≤ 1, for instance by truncation and convexification (Deheuvels, 1991). In dimension p ≥ 3, however, this procedure is no longer sufficient (Beirlant et al., 2004, page 257) and one needs to rely on the spectral representation in (1.5). In Section 4 we will apply a projection methodology (Fils-Villetard et al., 2008) to obtain valid estimates: an initial estimate is replaced by its best least-squares approximation in the set of Pickands dependence functions corresponding to discrete spectral measures supported on a fine grid.
The results of a simulation experiment aimed at investigating the finite-sample performance of the original and projected Pickands and CFG estimators are reported in Section 5. All proofs are relegated to the Appendix.
Throughout the article, we will use the following notation. For a space W, let ℓ^∞(W) and C(W) denote the spaces of real-valued bounded and real-valued continuous functions, respectively, where both spaces are endowed with the uniform norm ||·||_∞ : f ↦ sup_{x∈W} |f(x)|. Furthermore, the indicator function of the event E is denoted by 1(E). The arrow '⇝' will stand for weak convergence. For any p-variate real-valued function f, the first- and second-order partial derivatives (when they exist) will be denoted by ḟ_j(x) = ∂f(x)/∂x_j and f̈_{ij}(x) = ∂²f(x)/∂x_i∂x_j.
Empirical copula processes
Let X_1, X_2, . . . be an iid sequence of random vectors from a p-variate distribution F with continuous margins F_1, . . . , F_p. If the margins F_1, . . . , F_p are known, we can define the empirical cumulative distribution function C_n of the (unobservable) random sample (F_1(X_{i,1}), . . . , F_p(X_{i,p})), i ∈ {1, . . . , n}, with the associated empirical process

α_n = n^{1/2} (C_n − C). (2.2)

For ease of notation, we will write α_{n,j}(u_j) = α_n(1, . . . , 1, u_j, 1, . . . , 1) for j ∈ {1, . . . , p}. (2.3)

In practice, the marginal distributions will need to be estimated. If we are not willing to make any assumptions (except for continuity), we can estimate them by the empirical distribution functions

F̂_{n,j}(x) = (1/(n+1)) ∑_{i=1}^n 1(X_{i,j} ≤ x), (2.4)

where we divided by n + 1 in order to avoid later problems at the borders. By so doing, we can construct n vectors Û_i = (Û_{i,1}, . . . , Û_{i,p}) via

Û_{i,j} = F̂_{n,j}(X_{i,j}) (2.5)

for i ∈ {1, . . . , n} and j ∈ {1, . . . , p}. The empirical copula will be denoted by

Ĉ_n(u) = (1/n) ∑_{i=1}^n 1(Û_{i,1} ≤ u_1, . . . , Û_{i,p} ≤ u_p), (2.6)

with the associated empirical copula process

C_n = n^{1/2} (Ĉ_n − C). (2.7)

In Stute (1984) and Tsukahara (2005) it was established that if all second-order derivatives of C exist and are continuous on [0, 1]^p, the processes α_n in (2.2) and C_n in (2.7) are related via

C_n(u) = α_n(u) − ∑_{j=1}^p Ċ_j(u) α_{n,j}(u_j) + R_n(u), (2.8)

where, uniformly in u,

|R_n(u)| = O( n^{−1/4} (log n)^{1/2} (log log n)^{1/4} ) almost surely. (2.9)

Let ℓ^∞([0, 1]^p) denote the space of bounded real-valued functions on [0, 1]^p, equipped with the topology of uniform convergence. Weak convergence of maps taking values in ℓ^∞([0, 1]^p) will be understood as in van der Vaart and Wellner (1996, Definition 1.3.3) and will be denoted by '⇝'. By classical empirical process theory, we have α_n ⇝ α as n → ∞, the limiting process being a mean-zero tight Gaussian process with continuous trajectories and covariance function

Cov(α(u), α(v)) = C(u ∧ v) − C(u) C(v),

and α_j(u_j) = α(1, . . . , 1, u_j, 1, . . . , 1). Like many other copulas, extreme-value copulas in general do not have uniformly bounded second-order partial derivatives. For instance, in the bivariate case, every copula having a positive coefficient of upper tail dependence will have first-order partial derivatives that fail to have a continuous extension to the upper corner (1, 1); see Segers (2011, Example 1.1). As a consequence, the only bivariate extreme-value copula whose density is uniformly bounded is the independence copula. However, as shown in the same paper, for copulas satisfying Assumption 1 below, the expansion (2.8)-(2.9) of the empirical copula process remains valid. Assumption 1. (C1) For every j ∈ {1, . . . , p}, the first-order partial derivative Ċ_j exists and is continuous on the set V_{p,j}. (C2) For every i, j ∈ {1, . . . , p} (i and j not necessarily distinct), C̈_{ij} exists and is continuous on V_{p,i} ∩ V_{p,j}, and the supremum in (C2), a uniform bound on a suitably weighted version of |C̈_{ij}| over V_{p,i} ∩ V_{p,j}, is finite. In fact, for weak convergence C_n ⇝ C in ℓ^∞([0, 1]^p) to hold, condition (C1) is already sufficient. In the context of multivariate extreme-value copulas, it will be of interest to have a readily verifiable condition on the stable tail dependence function ℓ for Assumption 1 to hold.
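To make the rank-based construction of the pseudo-observations (2.5) and the empirical copula (2.6) concrete, here is a minimal NumPy sketch; the function names are ours, and ties are assumed absent (continuous margins).

```python
import numpy as np

def pseudo_observations(x):
    """Rank-based pseudo-observations: U_hat[i, j] is the rank of
    x[i, j] within column j, divided by n + 1 (so values stay strictly
    inside (0, 1))."""
    x = np.asarray(x, dtype=float)
    n = x.shape[0]
    ranks = np.argsort(np.argsort(x, axis=0), axis=0) + 1  # 1..n per column
    return ranks / (n + 1.0)

def empirical_copula(u_hat, u):
    """Empirical copula at u: fraction of pseudo-observations that are
    componentwise <= u."""
    return np.mean(np.all(u_hat <= np.asarray(u), axis=1))

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 3))
u_hat = pseudo_observations(x)
print(empirical_copula(u_hat, [0.5, 0.5, 0.5]))
```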
Assumption 2. (L1) For every j ∈ {1, . . . , p}, the first-order partial derivative l̇_j exists and is continuous on the set W_{p,j}. (L2) For every i, j ∈ {1, . . . , p} (i and j not necessarily distinct), l̈_{ij} exists and is continuous on W_{p,i} ∩ W_{p,j}, and

sup_{x ∈ W_{p,i} ∩ W_{p,j}} max(x_i, x_j) |l̈_{ij}(x)| < ∞.
For completeness, we mention that in the above references the empirical copula is not defined as in (2.6) but rather through the componentwise ranks; denote this variant by Ĉ_n^D. Straightforward calculus shows that, in the absence of ties, the two versions differ by at most a constant multiple of 1/n, uniformly on [0, 1]^p. As a consequence, Stute's expansion (2.8) is valid for Ĉ_n if and only if it is valid for Ĉ_n^D.
Nonparametric estimation of the dependence function
Among the most popular nonparametric estimators for A are the Pickands estimator P_n (Pickands, 1981) and the estimator CFG_n proposed by Capéraà et al. (1997), referred to as the CFG estimator from now on. Writing

ξ̂_i(w) = min_{1≤j≤p} ( −log Û_{i,j} / w_j )

for w ∈ ∆_{p−1} (with the convention x/0 = +∞), with Û_{i,j} as in (2.5), these estimators are defined as

1 / P_n(w) = (1/n) ∑_{i=1}^n ξ̂_i(w),
log CFG_n(w) = −γ − (1/n) ∑_{i=1}^n log ξ̂_i(w),

with γ = 0.5772 . . . the Euler-Mascheroni constant. Explanations of the construction of these estimators are provided, for instance, in the original references given before, in Genest and Segers (2009) and in the survey paper Gudendorf and Segers (2010). The multivariate extension of the CFG estimator was presented in Zhang et al. (2008), albeit under a different but equivalent form.
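A direct implementation of the two estimators from the pseudo-observations might look as follows; this is a sketch under our own naming, reusing pseudo_observations from the previous snippet.

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def xi(u_hat, w, eps=1e-12):
    """xi_i(w) = min_j ( -log U_hat[i, j] / w_j ); coordinates with
    w_j = 0 are excluded from the minimum (their ratio is +infinity)."""
    w = np.asarray(w, dtype=float)
    ratios = np.where(w > eps, -np.log(u_hat) / np.maximum(w, eps), np.inf)
    return ratios.min(axis=1)

def pickands(u_hat, w):
    """Pickands estimator: 1 / P_n(w) = mean of xi_i(w)."""
    return 1.0 / xi(u_hat, w).mean()

def cfg(u_hat, w):
    """CFG estimator: log CFG_n(w) = -gamma - mean of log xi_i(w)."""
    return np.exp(-EULER_GAMMA - np.log(xi(u_hat, w)).mean())

# u_hat as computed in the previous snippet; w on the unit simplex
w = np.array([0.3, 0.3, 0.4])
# print(pickands(u_hat, w), cfg(u_hat, w))
```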
In order to improve the small-sample properties of the above estimators, the endpoint constraints A(e_j) = 1 for j ∈ {1, . . . , p}, with e_j the jth standard unit vector in R^p, can be imposed as follows. Given continuous functions λ_1, . . . , λ_p : ∆_{p−1} → R with λ_j(e_k) = 1(j = k), the corrected versions are

1 / P_n^c(w) = 1/P_n(w) − ∑_{j=1}^p λ_j(w) ( 1/P_n(e_j) − 1 ), (3.1)
log CFG_n^c(w) = log CFG_n(w) − ∑_{j=1}^p λ_j(w) log CFG_n(e_j). (3.2)

In case of known margins, variance-minimizing weight functions λ_j can be determined adaptively by ordinary least squares (Segers, 2007; Gudendorf and Segers, 2011). However, if the marginal distributions are unknown, these endpoint corrections are asymptotically irrelevant as n → ∞ (Genest and Segers, 2009). Nevertheless, in finite samples, the simple choice λ_j(w) = w_j can make quite a difference. Similarly, for unknown margins, the multivariate extension of the estimator in Hall and Tajvidi (2000) simplifies to the estimator HT_n. The next lemma establishes a functional relationship between P_n and CFG_n on the one hand and the empirical copula Ĉ_n on the other hand. Recall the empirical copula process C_n in (2.7).
The proof is no different from the one in dimension two and can be found in Genest and Segers (2009, Lemma 3.1). Under Assumption 1, the standardized processes B_n^P = n^{1/2}(P_n − A) and B_n^CFG = n^{1/2}(CFG_n − A) converge weakly as n → ∞ in the space C(∆_{p−1}) equipped with the topology of uniform convergence.
The main idea of the proof consists in substituting Stute's expansion for C_n in (3.3) and (3.4) and concluding by a refined version of the continuous mapping theorem. As the proof follows the same lines as the one in Genest and Segers (2009), we only point out the main adjustments.
Projection estimator
The estimators of the Pickands dependence function considered so far are in general not valid Pickands dependence functions themselves. In this section, we adapt the methodology of Fils-Villetard et al. (2008) to project a pilot estimator Â_n onto the set A of Pickands dependence functions of p-variate extreme-value copulas. To this end, we view A as a closed and convex subset of the real Hilbert space L_2(∆_{p−1}), with ∆_{p−1} equipped with the (p − 1)-dimensional Lebesgue measure when viewed as a subset of R^{p−1}. The inner product and the norm on L_2(∆_{p−1}) are denoted by <f, g> = ∫ f g and ||f||_2 = <f, f>^{1/2}, respectively.
The orthogonal projection of an initial estimator Â_n of A, for example the Pickands or the CFG estimator, onto A is then defined as

Â^pr = arg min_{B ∈ A} ||Â_n − B||_2.

Projections being contractions, it follows that ||Â^pr − A||_2 ≤ ||Â_n − A||_2 for all A ∈ A: the L_2-risk of the projection estimator is bounded by that of the initial estimator.
For practical computations, we must restrict attention to finite-dimensional subclasses A_m ⊂ A, yielding the approximate projection estimator

Â^pr_m = arg min_{B ∈ A_m} ||Â_n − B||_2. (4.1)

For each positive integer m, the class A_m is defined as the set of Pickands dependence functions characterized by discrete spectral measures H with fixed and finite support depending on m. Specifically, let V_{p,m} be the (finite) set of points v = (v_1, . . . , v_p) ∈ ∆_{p−1} such that k_j = m v_j is an integer for every j ∈ {1, . . . , p}, so that in fact v = (k_1/m, . . . , k_p/m) where k_j ∈ {0, . . . , m} and k_1 + · · · + k_p = m. The cardinality of V_{p,m} is of the order O(m^{p−1}) as m → ∞. Let H_p be the set of spectral measures on ∆_{p−1} and let H_{p,m} be the set of (discrete) spectral measures H ∈ H_p supported on V_{p,m}, that is, measures of the form H = ∑_{v ∈ V_{p,m}} h_v δ_v with weights h_v ≥ 0 satisfying ∑_{v ∈ V_{p,m}} h_v v_j = 1 for all j ∈ {1, . . . , p}. (4.2)
Lemma 2. For every H ∈ H_p and every positive integer m, there exists H_m ∈ H_{p,m} such that the Pickands dependence functions A and A_m of H and H_m, respectively, satisfy ||A_m − A||_∞ = O(m^{−1}), uniformly in H ∈ H_p. (4.4)
The bound in (4.4) implies that sup_{A ∈ A} inf_{Ã ∈ A_m} ||Ã − A||_2 = O(m^{−1}) as m → ∞. This rate is perhaps not sharp: in case p = 2, Lemma 2 in Fils-Villetard et al. (2008) states the rate O(m^{−3/2}). It remains an open problem whether the latter rate can also be achieved in general dimension p.
In practice, the task is to compute the vector ĥ = (h_v)_{v ∈ V_{p,m}} such that the function

A_ĥ(w) = ∑_{v ∈ V_{p,m}} h_v max_{1≤j≤p} (v_j w_j)

solves (4.1). The vector ĥ is given as the solution to the least-squares problem

min_h ||Â_n − A_h||_2^2, (4.5)

with h subject to the constraints (4.2). The optimisation problem in (4.5) is a quadratic program with linear constraints, which in matrix notation reads

ĥ = arg min_h { (1/2) h'Dh − d'h } subject to Ch = c and h ≥ 0.

The matrix D and the vector d regroup all the scalar products of the form <f_v, f_{v'}> and <Â_n, f_v>, respectively, for v, v' ∈ V_{p,m}, where f_v(w) = max_j(v_j w_j). The p equality constraints ∑_{v ∈ V_{p,m}} h_v v_j = 1 are encoded by means of the matrix C and the vector c.
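A minimal numerical sketch of this quadratic program is given below, using scipy.optimize.minimize (SLSQP) in place of a dedicated QP solver such as the R-package quadprog used by the authors; the grid sizes and the constant pilot values are placeholders, not the settings used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def simplex_grid(p, m):
    """All integer tuples (k_1, ..., k_p) with k_j >= 0 summing to m;
    dividing by m gives the support set V_{p,m}."""
    if p == 1:
        return [(m,)]
    return [(k,) + rest for k in range(m + 1)
            for rest in simplex_grid(p - 1, m - k)]

p, m = 3, 8
V = np.array(simplex_grid(p, m), dtype=float) / m      # support points v
W = np.array(simplex_grid(p, 40), dtype=float) / 40    # quadrature grid w

F = np.max(V[:, None, :] * W[None, :, :], axis=2)      # f_v(w) = max_j v_j w_j
A_pilot = np.full(W.shape[0], 0.9)                     # placeholder pilot values

D = F @ F.T / W.shape[0]                               # <f_v, f_v'> via Riemann sums
d = F @ A_pilot / W.shape[0]                           # <A_n, f_v> via Riemann sums

cons = [{"type": "eq", "fun": lambda h, j=j: V[:, j] @ h - 1.0} for j in range(p)]
res = minimize(lambda h: 0.5 * h @ D @ h - d @ h,
               x0=np.full(len(V), p / len(V)),          # feasible starting point
               bounds=[(0, None)] * len(V), constraints=cons, method="SLSQP")
h_hat = res.x                                          # projected spectral weights
```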
For implementation, we used the R-package quadprog (Turlach and Weingessel, 2010) for solving quadratic programs under linear constraints. Although there exist multiple packages for numerical multivariate integration, we preferred to compute all the integrals appearing in D and d using Riemann sums on the same fine grid. By so doing we reduce the risk of numerical problems and ensure that D is positive definite.
The derivation of the asymptotics of the projection estimator follows the same lines as in Fils-Villetard et al. (2008). Assume that ε_n^{−1}(Â_n − A) ⇝ ζ in L_2(∆_{p−1}), where ζ is a random process in L_2(∆_{p−1}) and 0 < ε_n → 0; for the Pickands and CFG estimators, we have ε_n = n^{−1/2}, and weak convergence holds with respect to the uniform topology, which implies convergence with respect to the L_2-topology. By the same arguments as in Fils-Villetard et al. (2008), the standardized projection estimator ε_n^{−1}(Â^pr_{m_n} − A) then converges weakly in L_2(∆_{p−1}) as well, provided m_n → ∞ fast enough (4.7). Interestingly, (4.7) implies that the choice of m is not to be seen as a bias-variance trade-off problem but rather as a discretization problem. As soon as m = m_n converges to infinity faster than 1/ε_n, the finite-dimensional projection estimator Â^pr_m has the same limit behaviour as the 'ideal' projection estimator Â^pr. In practice, we will choose m sufficiently large so that any further increase of m does not make any significant difference, subject of course to constraints on computing time and numerical stability.
Finally, note that the convergence in (4.7) is with respect to the L_2-topology only, even though the weak convergence of ε_n^{−1}(Â_n − A) originally took place in the stronger ℓ^∞-topology. The asymptotic distribution of the projection estimator under the ℓ^∞-topology remains an open problem.
Simulation study
A simulation experiment was conducted to compare the finite-sample performance of the following four estimators:
PD - the endpoint-corrected Pickands estimator in (3.1) with λ_j(w) = w_j, in the spirit of Deheuvels (1991);
PD-pr - the projection estimator in (4.1) with the previous endpoint-corrected Pickands estimator as initial estimator;
CFG - the endpoint-corrected CFG estimator in (3.2) with λ_j(w) = w_j;
CFG-pr - the projection estimator in (4.1) with the previous endpoint-corrected CFG estimator as initial estimator.
The set-up of the experiment was as follows. Following Zhang et al. (2008) and Gudendorf and Segers (2011), random samples were generated from a trivariate extreme-value distribution with asymmetric logistic dependence function A (Tawn, 1990), defined for w ∈ ∆_2 with parameter vector (α, θ, φ, ψ) ∈ (0, 1] × [0, 1]^3. For this model, Assumption 2 can be verified by direct calculation. The dependence parameter α ranged from 0.3 (high dependence) to 1 (independence, A ≡ 1), and the vector (φ, ψ, θ) was set equal to either (0, 1, 0) (symmetric logistic or Gumbel copula) or (0.3, 0, 0.6) (an asymmetric logistic copula). For each distribution, 1000 samples were generated of size n ∈ {50, 100, 200}. Simulations were performed using the R-package evd (Stephenson, 2002), which implements the algorithms presented in Stephenson (2003). The discretization parameter m was set to 20, at which value the grid V_{3,20} contains 231 points.
Monte Carlo approximations of the mean integrated squared error (MISE), E[∫_{∆_2}(Â − A)²], for the four estimators considered above are reported in the tables below. The three main findings are the following:
1. The projection step yields a gain in efficiency, especially in case of weak dependence.
2. Without the projection step, the CFG estimator outperforms the PD estimator.
3. After the projection step, the PD-pr estimator is more efficient than the CFG-pr estimator in case of independence and weak dependence (α ≥ 0.9), but less efficient otherwise (α ≤ 0.7).
Further, as the dependence increases, all estimators tend to perform better. In accordance with asymptotic theory, the MISE is roughly proportional to 1/n.

van der Vaart, A. W. and J. A. Wellner (1996). Weak Convergence and Empirical Processes. Cambridge Series in Statistical and Probabilistic Mathematics. New York: Springer.
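The MISE approximation itself reduces to a Riemann sum over a simplex grid averaged over replications; a minimal Python sketch with hypothetical array names follows.

```python
import numpy as np

def mise(estimates, A_true, cell_area):
    """Monte Carlo MISE: average over replications of the Riemann-sum
    approximation of the integral over the simplex of (A_hat - A)^2.
    `estimates` has shape (n_replications, n_grid_points)."""
    sq_err = (estimates - A_true[None, :]) ** 2
    return np.mean(np.sum(sq_err, axis=1) * cell_area)

# Hypothetical usage: `A_true` holds the dependence function on a grid
# with cell area `area`, `est` the estimates per replication.
# print(mise(est, A_true, area))
```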
Appendix A. Proofs
Appendix A.1. Proof of Proposition 1
The assumptions on ℓ imply continuity of Ċ_j on the set (0, 1]^p. If u ∈ [0, 1]^p with u_j > 0 and u_i = 0 for some i ∈ {1, . . . , p} \ {j}, then Ċ_j(u) = 0, and continuity of Ċ_j at such u follows from the fact that 0 ≤ l̇_j ≤ 1 and 0 ≤ C(v) ≤ min(v). If (L2) holds, then it holds without the condition x_1 + · · · + x_p = 1 as well. This result is based on the fact that the function ℓ is homogeneous of order one: ℓ(sx) = s ℓ(x) for all s ∈ [0, ∞) and x ∈ [0, ∞)^p. Hence, if l̇_j exists on W_{p,j}, then for all s ∈ (0, ∞) and x ∈ W_{p,j} we have s l̇_j(sx) = ∂ℓ(sx)/∂x_j = s l̇_j(x), and thus l̇_j(sx) = l̇_j(x). Taking partial derivatives again, we find for all s ∈ (0, ∞) and x ∈ W_{p,i} ∩ W_{p,j} that s l̈_{ij}(sx) = l̈_{ij}(x); that is, the map x ↦ max(x_i, x_j) l̈_{ij}(x) is constant on rays through the origin. Next, we show the equivalence of (L2) and (C2). Fix i, j ∈ {1, . . . , p}, not necessarily distinct, and let u ∈ V_{p,i} ∩ V_{p,j}. On the one hand, if u ∈ V_{p,i} ∩ V_{p,j} ∩ (0, 1]^p (meaning that every component of u is different from 0), then −log u ∈ W_{p,i} ∩ W_{p,j} and C̈_{ij}(u) can be expressed in terms of the partial derivatives of ℓ, with the convention that these are evaluated at −log u. On the other hand, if u ∈ (V_{p,i} ∩ V_{p,j}) \ (0, 1]^p (i.e., at least one coordinate of u vanishes), then C̈_{ij}(u) = 0. We have to verify two things: first, the continuity of C̈_{ij} at points in the set (V_{p,i} ∩ V_{p,j}) \ (0, 1]^p; secondly, the finiteness of the supremum in (C2).
First, let u ∈ V_{p,i} ∩ V_{p,j} ∩ (0, 1]^p. Let K be a positive constant not smaller than the supremum in (L2). By assumption (L2) and the fact that 0 ≤ l̇_j ≤ 1, we obtain a bound on |C̈_{ij}(u)| that vanishes as any coordinate of u tends to 0.
Continuity of C̈_{ij} at points in the set (V_{p,i} ∩ V_{p,j}) \ (0, 1]^p follows. Secondly, since min(u)/(u_i u_j) ≤ min(1/u_i, 1/u_j) and −log x ≥ 1 − x for all positive x, we obtain the required bound, which is equivalent to condition (C2).
Lemma 3. For every θ ∈ (0, 1/2), the trajectories of G_θ are continuous almost surely and G_{n,θ} ⇝ G_θ in ℓ^∞([0, 1]^p) as n → ∞.

Proof (Lemma 3). The proof is entirely analogous to that of Theorem G.1 in Genest and Segers (2009). For completeness, we sketch the main lines. Fix u ∈ E and define a mapping f_u : E → R, and consider the class F = {f_u : u ∈ E} ∪ {0}, where 0 of course stands for the zero function. The space F will be endowed with the metric ρ in (A.3). Here, we adopt the notations of van der Vaart and Wellner (1996): P denotes the probability distribution on E corresponding to C, and P_n denotes the empirical measure of the sample (U_{i1}, . . . , U_{ip}), i ∈ {1, . . . , n}, that is, P_n = n^{−1} ∑_{i=1}^n δ_{(U_{i1}, . . . , U_{ip})}. Moreover, put G_n = n^{1/2}(P_n − P), viewed as a random function on F. We will show that the collection F is a P-Donsker class, i.e., there exists a P-Brownian bridge G such that G_n ⇝ G in ℓ^∞(F) as n → ∞.
It is sufficient to verify the conditions of Theorem 2.6.14 of van der Vaart and Wellner (1996). One can construct a function F on E that is a suitable envelope function for F. The fact that F is a VC-major class and is pointwise separable follows from the same arguments as in Genest and Segers (2009). For the moment, G_n is defined on F with the metric ρ in (A.3). Consider the map φ : [0, 1]^p → F; the map z ↦ z ∘ φ being continuous, the continuous mapping theorem permits us to conclude that G_{n,θ} ⇝ G_θ in ℓ^∞([0, 1]^p). Since the trajectories of G are ρ-continuous almost surely and since φ is continuous, it follows that the sample paths of G_θ are continuous almost surely as well. This concludes the proof of Lemma 3.
We now proceed with the proof of Theorem 1. Define the processes B_n = B_n^P or B_n^CFG for w ∈ ∆_{p−1}. Applying the change of variables u = e^{−s} in Lemma 1, we find that the processes B_n^P and B_n^CFG can be written as integrals (A.4) of the empirical copula process against a weight function h on (0, ∞), which is h_P(s) = 1 for the Pickands estimator and h_CFG(s) = 1/s for the CFG estimator. In what follows, h denotes either h_P or h_CFG. Put l_n = 1/(n + 1) and k_n = p log(n + 1), and split the integral on the right-hand side of (A.4) into three parts:

B_n(w) = ∫_0^{l_n} + ∫_{l_n}^{k_n} + ∫_{k_n}^∞ = I_{1,n}(w) + I_{2,n}(w) + I_{3,n}(w). (A.5)

We will first prove that, with probability one, the first and the third term on the right-hand side converge to zero uniformly in w.
Light-induced activation of boron doping in hydrogenated amorphous silicon for over 25% efficiency silicon solar cells
Recent achievements in amorphous/crystalline silicon heterojunction (SHJ) solar cells and perovskite/SHJ tandem solar cells place hydrogenated amorphous silicon (a-Si:H) at the forefront of photovoltaics. Due to the extremely low effective doping efficiency of trivalent boron in amorphous tetravalent silicon, light harvesting of the aforementioned devices is limited by their fill factors (FFs), a direct metric of the charge carrier transport. It is challenging but crucial to develop highly conductive doped a-Si:H with minimal FF losses. Here we report that light soaking can efficiently boost the dark conductance of boron-doped a-Si:H thin films. Light induces diffusion and hopping of weakly bound hydrogen atoms, which activates boron doping. The effect is reversible and the dark conductivity decreases over time when the solar cell is no longer illuminated. By implementing this effect in SHJ solar cells, we achieved a certified total-area power conversion efficiency of 25.18% with a FF of 85.42% on a 244.63 cm 2 wafer. Low effective doping of boron limits the performance of solar cells based on hydrogenated amorphous silicon. Liu et al. show that light induces the diffusion of hydrogen atoms, which activates boron doping, enabling a power conversion efficiency of over 25%.
Hydrogenated amorphous silicon (a-Si:H) is a technologically important semiconductor for transistors, batteries and solar cells [1][2][3][4] . It has a long history of use in photovoltaic applications as it offers a low defect density and a tunable conduction type [5][6][7] . These optoelectronic advantages strongly rely on the configurations of hydrogen and silicon in three-dimensional space (described by the radial distribution function 8 ), and thus precise control of its microscopic structure 9-11 is a critical factor towards achieving good devices. As boron is a trivalent element, it is challenging to establish four-coordinated B−Si 4 compounds in the disordered a-Si:H matrix; reported approaches, which focus on eliminating invalid Si x −B−H y doping configurations (Supplementary Fig. 1), include optimizing the B 2 H 6 flow rate and post-deposition annealing. However, a lack of understanding of the complicated conduction mechanism of boron-doped a-Si:H (p-a-Si:H) has obstructed the full potential of relevant optoelectronic devices.
We find that light soaking is a fast means of improving the dark conductance (σ dark ) of p-a-Si:H thin films. Our results indicate that a portion of hydrogen atoms is captured by tetravalent-coordinated boron atoms in the silicon network to form weak B−H−Si components, which diminishes the efficient B−Si 4 doping. We demonstrate that the key function of light soaking is to promote the diffusion and hopping of these weakly bound hydrogen atoms, so that efficient B−Si 4 doping is activated. As a consequence of the improved field passivation and conductivity of p-a-Si:H, we achieve a high power conversion efficiency (PCE) of 25.18% with an open-circuit voltage (V oc ) and fill factor (FF) of 749 mV and 85.42%, respectively, on a 244.63 cm 2 amorphous/crystalline silicon heterojunction (SHJ) solar cell. Moreover, our 60-cell modules exhibit robust operating stability, successfully passing IEC 60068-2-78 (damp-heat degradation at 85 °C and 85% relative humidity, DH85) and IEC 61215-2:2016 (thermal cycle degradation between −40 °C and 85 °C with applied current at 100% I mpp (current at the maximum power point) at the rising edge of temperature) even when aged three times longer than these standards require.
Observation of light-induced dark conductivity increase
Since 1977, light soaking of micrometre-thick a-Si:H films has been widely studied in the research field of a-Si:H thin-film solar cells, but only a small number of works pay attention to its effect on 'thin' a-Si:H films, particularly in the research field of SHJ solar cells 12,13 . Although a few researchers report that light soaking improves the FF of SHJ solar cells by a magnitude of ~0.7% abs (ref. 14 ), the fundamental underlying mechanisms are still unclear, which attracts broad interest in the research fields of optoelectronics. We use in situ methods to monitor the time-dependent changes of p-a-Si:H thin films during illumination. The films are deposited on quartz glasses, followed by evaporating silver strips to form transfer-length-method structures. The in situ current-voltage data (Fig. 1a) show that σ dark of the p-a-Si:H thin film steadily increases during 1 sun illumination, reaching σ dark /σ dark0 ≈ 4.71 (σ dark0 is the dark conductance before light soaking) after 30 min. This phenomenon is in striking contrast to the light-induced degradation of σ dark observed in thick intrinsic, p- and n-type a-Si:H films [15][16][17] . It supports the perspective that accumulated stress in thick films plays an important role in the Staebler-Wronski effect 18 , as the maximum stress is roughly proportional to the film thickness. This indicates that the effect of light soaking exhibits a scaling behaviour, where the Staebler-Wronski effect gradually transitions to a different effect as the thickness declines. After turning off the illumination, σ dark gradually decays (close) to its initial value after more than 1,000 min (Fig. 1b). Such a decay behaviour fits well to a combination of the Debye and Williams-Watts models (Fig. 1c),

Δσ(t) = Δσ D exp[−(t/τ D ) β D ] + Δσ WW exp[−(t/τ WW ) β WW ],

where the terms Δσ D , Δσ WW and τ D , τ WW are constant coefficients and characteristic time constants of the Debye and Williams-Watts models, respectively. The term t is the decay time in the dark. Detailed parameters are summarized in Supplementary Table 1. The Debye model with β D = 1 describes free diffusion, whereas the Williams-Watts model with 0 < β WW < 1 describes a continuous-time random walk composed of alternating steps and pauses 19 . Examples of the Williams-Watts model include the spin correlation in Cu-Mn and related disordered systems [20][21][22][23][24][25] . The good fitting in Fig. 1c suggests an effect that is different from the Staebler-Wronski effect and mediated by two independent mechanisms that control the fast Debye relaxation and the slow Williams-Watts relaxation, respectively.
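As an illustration of how such a two-component relaxation can be fitted in practice, here is a minimal Python sketch using scipy.optimize.curve_fit; the synthetic data, parameter values and variable names are ours, and only the functional form (a Debye term with β D = 1 plus a stretched-exponential Williams-Watts term) follows the model above.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, d_sig_D, tau_D, d_sig_WW, tau_WW, beta_WW):
    """Debye term (beta_D = 1) plus Williams-Watts stretched exponential."""
    return (d_sig_D * np.exp(-t / tau_D)
            + d_sig_WW * np.exp(-(t / tau_WW) ** beta_WW))

# Synthetic decay of the conductance enhancement (time in minutes).
t = np.linspace(1.0, 1200.0, 200)
rng = np.random.default_rng(0)
data = decay(t, 1.5, 16.0, 2.2, 420.0, 0.55) + rng.normal(0.0, 0.02, t.size)

# Fit with positivity bounds; p0 is a rough initial guess.
p0 = [1.0, 20.0, 2.0, 300.0, 0.5]
bounds = ([0.0, 1.0, 0.0, 10.0, 0.05], [10.0, 200.0, 10.0, 5000.0, 1.0])
popt, _ = curve_fit(decay, t, data, p0=p0, bounds=bounds)
print("tau_D = %.1f min, tau_WW = %.0f min, beta_WW = %.2f"
      % (popt[1], popt[3], popt[4]))
```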
Mechanism underlying the light-induced changes
To determine the implicit mechanisms of the aforementioned Debye and Williams-Watts relaxations, we investigate the hydrogen distributions in p-a-Si:H thin films by time-of-flight secondary ion mass spectrometry (TOF-SIMS). The H − spectra (Fig. 2a) show that 30 min annealing at 180 °C only slightly changes the hydrogen content in intrinsic a-Si:H (i-a-Si:H); by contrast, the same annealing process expels at least ~21.3% of the hydrogen content from p-a-Si:H. As shown in Supplementary Fig. 2, TOF-SIMS spectra also reveal that room-temperature oxidation of an i/p-a-Si:H stack hardly changes the hydrogen content in the i-a-Si:H film; however, the same oxidation process expels ~17.1% of the hydrogen content in p-a-Si:H from the inside to the surface. Based on these findings, we conclude that boron doping plays a crucial part in the formation of metastable hydrogen configurations in p-a-Si:H.
We next consider the migration barriers of hydrogen atoms to understand the possible binding configurations of the aforementioned metastable hydrogen. Structural relaxations show that displacement of four-coordinated silicon atoms by boron atoms shortens the bonds from ~2.35 Å to ~2.07 Å (Supplementary Fig. 3), well consistent with the results of Pandey and colleagues 26 . Further simulations demonstrate that these B−Si 4 sites have a large probability of attracting hydrogen atoms to form metastable B−H−Si configurations when diffusive hydrogen atoms pass by (Supplementary Fig. 4), which is in agreement with the nuclear magnetic resonance signal 27 and relevant simulations 26,28 . As a consequence, the conductance of p-a-Si:H is expected to decline due to the reduction in the quantity of B−Si 4 (ref. 26 ). Transition-state surveys (Fig. 2b,c) support the hydrogen hopping mechanism (or the tunnelling mechanism 31 at temperatures below 60 K) illustrated in Fig. 2b, by which light soaking drives weakly bound hydrogen away from B−H−Si configurations, resulting in the improvement of σ dark as has been confirmed in Fig. 1a. The microscopic migrations in Fig. 2b are consistent with the light-induced formation of Si−H−Si configurations 32 .
The mechanistic understanding is also evident in optoelectronic analysis. We prepared symmetric structures of p-a-Si:H/i-a-Si:H/n-c-Si/i-a-Si:H/p-a-Si:H (here n-c-Si represents n-type c-Si) and i-a-Si:H/n-c-Si/i-a-Si:H, whose injection-dependent effective minority carrier lifetimes (τ eff ) were measured before and after 2 h light soaking under 1 sun illumination, as well as after 15 min annealing at 180 °C. The right graph in Fig. 2d shows that the τ eff of p-a-Si:H/i-a-Si:H/n-c-Si/i-a-Si:H/p-a-Si:H increased substantially after light soaking and then returned to its initial values after the annealing. The recombination rate at the a-Si:H/c-Si interface satisfies a closed-form expression in the case of high illumination, which can be fitted by the model of Olibet and colleagues 30 ; by modelling the τ eff at injection >1.0 × 10 15 cm −3 , we determined that light soaking increased (decreased) the surface charge density Q s (the interface dangling-bond density N s ) from 3.0 × 10 10 cm −2 (2.1 × 10 9 cm −2 ) to 3.8 × 10 10 cm −2 (1.4 × 10 9 cm −2 ), and that annealing then decreased (increased) the Q s (N s ) back to 3.0 × 10 10 cm −2 (2.0 × 10 9 cm −2 ). As a control, the left graph in Fig. 2d shows that the τ eff of i-a-Si:H/n-c-Si/i-a-Si:H remained almost constant after either light soaking or annealing, which demonstrates that the variation in τ eff in the right graph must originate from p-a-Si:H. According to Sinton and co-workers 33 , pseudo fill factors (PFFs) of silicon solar cells take into account the effect of chemical passivation. We probed the PFF of the device Ag/IWO/p-a-Si:H/i-a-Si:H/n-c-Si/i-a-Si:H/n-a-Si:H/IWO/Ag (where IWO is tungsten-doped indium oxide) before and after 2 h light soaking under 1 sun illumination, as well as after 15 min annealing at 180 °C. The inset of Fig. 2d shows that the PFF remains at ~86.4% regardless of light soaking and annealing. This demonstrates that the decrease in N s has a negligible impact on chemical passivation, probably due to the small order of magnitude of N s itself. On the other hand, ultrafast and broadband transient absorption signals (Fig. 2e) indicate that light soaking increases the mobility of photon-generated carriers in p-a-Si:H from 7.10 × 10 −3 cm 2 V −1 s −1 to 1.81 × 10 −2 cm 2 V −1 s −1 . This probably results from reduced scattering of carrier transport in the p-a-Si:H network, thanks to the global decline of strain-induced gap states from B−H−Si configurations 26 . Consideration of these light-induced enhancements to σ dark , Q s and the carrier mobility leads to the conclusion that the light-induced dark conductivity increase stems from activation of boron doping via hydrogen movements. In this regard, we further ascribe the decay of σ dark in Fig. 1c to the detrimental reconstruction of B−H−Si configurations, as the binding energy of hydrogen in B−H−Si is ~0.46 eV higher than that in Si−H−Si. Accordingly, the fast Debye and slow Williams-Watts relaxations (Fig. 1c) are attributed to the incorporation of fast diffusive hydrogen and slow hopping hydrogen into the B−Si bonds, respectively, forming invalid boron doping (B−H−Si) that has negative effects on σ dark , as confirmed in Fig. 1b.
We next distinguish the weakly bound hydrogen atoms from the normal Si−H bonds in p-a-Si:H to strengthen the mechanism underlying the light-induced dark conductivity increase. Figure 3a illustrates the preparation of p-a-Si:H films for TOF-SIMS, Fourier-transform infrared spectroscopy (FTIR) and current-voltage characterizations. The capping of an IWO layer on the p-a-Si:H is intended to mimic the structure of SHJ solar cells, which may have an effect on the redistribution dynamics of hydrogen atoms during the annealing process. TOF-SIMS signals (Fig. 3b) show that 2 h annealing at 180 °C reduced >20% of the hydrogen content in the p-a-Si:H film, whereas the content of silicon and boron remained (almost) unchanged. By contrast, Fig. 3c shows that all of the wagging, bending and stretching intensities of the normal Si−H bonds remain (almost) unchanged after the same annealing process. A comparison between the TOF-SIMS signals of hydrogen atoms and the FTIR spectra of Si−H bonds unambiguously demonstrates that relatively low-temperature (180 °C) annealing merely expels weakly bound hydrogen atoms from the p-a-Si:H film while the normal Si−H bonds are hardly affected. The light-induced enhancement of the dark conductance of the p-a-Si:H film is plotted as a function of the annealing time at 180 °C in Fig. 3d; evidently, σ dark /σ dark0 gradually declined to ~1 due to the exhaustion of weakly bound hydrogen atoms during the prolonged annealing. This definitively proves that the light-induced dark conductivity increase and boron doping activation stem from weakly bound hydrogen atoms rather than normal Si−H bonds in the p-a-Si:H.
According to Pandey and colleagues 26 , there exist a host of possible configurations of weakly bound hydrogen atoms with respect to boron atoms in the complicated p-a-Si:H network, such as weak hydrogen atoms near B−Si 4 doping sites, boron dimers, boron clusters and so on. By changing the flow rate of B 2 H 6 during the film deposition, we fabricated four p-a-Si:H films on quartz glasses; their current-voltage characteristics are shown in Supplementary Fig. 5a. We find that the dark conductance (Supplementary Fig. 5b) gradually saturates when the flow rate of B 2 H 6 exceeds ~45 sccm, which suggests that a huge number of boron atoms are invalidly doped into p-a-Si:H, i.e. do not contribute to the hole concentration. Furthermore, the light-induced enhancement of dark conductance substantially decreases when the flow rate of B 2 H 6 exceeds ~20 sccm (Supplementary Fig. 5c). Taking into account the possibility that boron dimers and clusters dominate only in the case of high B 2 H 6 flow rates, we conclude that the light-induced dark conductivity increase and boron doping activation mainly stem from the weak hydrogen atoms near the most important B−Si 4 doping sites, rather than those near boron-superabundant configurations such as boron dimers and boron clusters.
Application to high-efficiency SHJ solar cells
Encouraged by the enhancement of σ dark by light soaking, we attempt to develop the full potential of SHJ solar cells by means of this light-induced effect. Figure 4a showcases the device structure (where the thickness of the p-a-Si:H is ~15 nm; Supplementary Fig. 6), whose initial open-circuit voltage (V oc ), short-circuit current density (J sc ), FF and PCE are 744.30 ± 0.68 mV, 38.43 ± 0.07 mA cm −2 , 83.70 ± 0.22% and 23.94 ± 0.04%, respectively, based on 316 consecutive devices from our daily production line. Under 1 sun illumination, as expected, the FF of these cells undergoes a steady increase (standard cell in Fig. 4b).
The slope of the current-voltage curve near the low-internal-field region (V oc condition) serves as an indication of the charge collection efficiency 35 . As found in Supplementary Fig. 7, light soaking continuously increases the slope near this low-internal-field region, indicating more efficient charge extraction due to enhancement of the net field across the depletion region. This strongly supports our perspective that light soaking activates better boron doping. By contrast, we observe a noticeable drop in the gain of FF for devices annealed for 2 h at 180 °C (180 °C in Fig. 4b), attributed to their fewer metastable hydrogen configurations (inferred from Figs. 2a and 3b), which lead to fewer hydrogen movements in Fig. 2b. Intriguingly, we observe that when a 13 A current is applied to the cell (Fig. 4b), the FF exhibits a quite similar behaviour to that under 1 sun illumination. This implies that the photon energy from light soaking is not the exclusive cause of the dark conductivity increase and boron doping activation; electron-hole recombination caused by current-injected carriers probably also takes effect 31 . Light soaking at intensities of 1, 11, 48 and 60 sun boosted the FF by 0.32 ± 0.18% abs , 0.39 ± 0.14% abs , 1.40 ± 0.26% abs and 1.50 ± 0.37% abs , respectively (Fig. 4c). Here the improvement in FF under 60 sun illumination is close to the ΔFF ≈ 1.8 ± 0.4% abs reported via a multifunctional process 36 . We also notice that increasing either the light intensity or the forward bias can improve the magnitude of ΔFF (Fig. 4c and Supplementary Fig. 8). This highlights that intensive light soaking or a high forward bias activates more efficient boron doping by pumping more metastable hydrogen from B−H−Si to other configurations; in this light, we naturally regard SHJ solar cells as a premium choice for concentrator photovoltaic systems. At the mass-production level, 60-sun illumination obtains state-of-the-art industrial FF and PCE of 85.19 ± 0.18% and 24.46 ± 0.05%, respectively (Fig. 4d,e), together with a V oc improved by ~2.6 mV (Supplementary Fig. 9), thanks to improvement of the built-in field in the c-Si absorber. Numerical investigation of these improvements is based on a traditional drift-diffusion model of the SHJ solar cell. Procedures and simulated parameters are provided in Supplementary Tables 2 and 3. Comparison of samples A and B in Supplementary Table 4 reveals that the decline of N s from 9.0 × 10 8 cm −2 to 4.3 × 10 8 cm −2 only slightly increases the FF, by 0.04% abs , much lower than the experimental 1.50 ± 0.37% abs but in good consistency with the PFF in Fig. 2d. Samples A and C show that the FF increases by 0.66% abs when the efficient doping concentration of boron (N a ) increases from 2.0 × 10 18 cm −3 to 1.0 × 10 19 cm −3 . Samples A and D show that the FF boosts by 0.77% abs when the series resistance (R s ) declines from 0.4 Ω cm 2 to 0.25 Ω cm 2 (Supplementary Table 4). The collective refinements to N s , N a and R s improve the FF and PCE from 83.79% and 24.0% to 85.25% and 24.5%, respectively, in good agreement with the experimental results from 83.70 ± 0.22% and 23.9 ± 0.0% to 85.19 ± 0.18% and 24.5 ± 0.1% (samples A and E in Supplementary Table 4). Together with that, the simulated increase of V oc from 744.3 mV to 746.9 mV is also identical to the experimental results from 744.3 ± 0.7 mV to 746.9 ± 0.5 mV. Such excellent consistencies between simulations and experiments confirm that the improvements in SHJ solar cells do stem from light-induced efficient doping of boron atoms.
After capping an 80 nm SiO x antireflection layer onto a high-efficiency cell, we submitted it to an independent testing centre and achieved a certified PCE of 25.18% with a FF of 85.42% on a 244.63 cm 2 wafer (Fig. 4f and Supplementary Fig. 10). These are among the highest certified PCEs and FFs for total-area two-side-contacted silicon solar cells 34 (Fig. 4g and Supplementary Fig. 11). The FF reaches 98.30% of its Shockley-Queisser limit of ~86.9% (ref. 37 ). We also submitted another SHJ solar cell, capped with a 110 nm MgF 2 antireflection layer, to ISFH CalTeC; they reported total-area (244.81 ± 0.91 cm 2 ) and designated-area (226.71 ± 0.91 cm 2 ) PCEs of 25.10 ± 0.38% and 25.45 ± 0.38%, respectively (Fig. 4f and Supplementary Fig. 12). The slightly lower FFs of 84.28 ± 0.93% and 84.63 ± 0.93%, compared with that certified by NPVM, probably stem from degradation between the 70 sun light soaking and the certification.
With regard to stability, the FF and PCE of devices retain 98.70% and 97.59% of their initial values after 1,000 h of DH85 impact (Fig. 4h), without any encapsulation. At the module level, Figs. 4i and 4j show that the FF and PCE retain 98.1% (96.8%) and 95.5% (95.4%), respectively, after 3,000 h of DH85 impact (600 thermal cycles between −40 °C and 85 °C), demonstrating high stability against extreme climate degradation factors. The DH85 (thermal-cycle) degradation of the module is threefold longer than the IEC 60068-2-78 (IEC 61215-2:2016) standard. These tests exclude the high-density (~10 21 cm −3 ) weakly bound hydrogen atoms in the p-a-Si:H film as the key factor that dominates the damp-heat (thermal-cycle) degradation 4 .
In addition to the stability in DH85 and thermal-cycle environments, we finally explored the reversible behaviour of SHJ solar cells caused by the light-induced dark conductivity increase and boron doping activation. As found in Fig. 5a, we alternated between measuring the cells' FFs under 1 sun illumination for 180 min and resting them in the dark for 720 min. Evidently, the FF decays by ~0.3−0.35% abs during each rest in the dark. From Supplementary Fig. 13, we find that the FF rapidly declined by ~0.15% abs in the first ~20 min, followed by a slow decay over the next ~745 min. This fast decay time of ~20 min is consistent with the characteristic time constant τ D ≈ 15.84 ± 1.55 min of the Debye relaxation (Supplementary Table 1), confirming that the enhancement of FF does stem from the improvement of the conductance of the doped a-Si:H film. Figure 5a also reveals that the FF rapidly climbs up after turning on the light soaking; thus, the output of power plants comprising SHJ solar cells undergoes a rapid increase after sunrise on sunny days, which challenges the present IEC testing standards, as in-house certification underestimates their performance in real operation. The following provides a feasible pathway to freezing the dark decay. We took 198 solar cells from the same batch and divided them into 11 groups. First, the devices in each group underwent 70 s of light soaking under 60 sun illumination, followed by 25 min of resting in the dark to finish the fast Debye relaxation. Their FFs were then measured before and after 10 min annealing at different temperatures. As shown in Fig. 5b, the decay magnitude of the FF (from the Williams-Watts relaxation) dramatically drops when the temperature is decreased from 200 °C to 60 °C, which suggests that low temperature arrests the unfavourable formation of the B−H−Si configurations. This observation agrees with the perspective that annealing can accelerate the annihilation of Si−H−Si configurations 38 . Using the average ΔFF (Fig. 5b), we derived the temperature-dependent characteristic time constant τ WW from the Williams-Watts model, equation (2). According to Kakalios and co-workers 39 , the β WW of a-Si:H is 0.00165T (with T in kelvin), independent of the doping type; τ WW , on the other hand, obeys an Arrhenius relationship, τ WW = τ 0 exp[E a /(RT)] (equation (3)), where R is the molar gas constant. Figure 5c shows the fitting of equation (3) to the τ WW (blue circles) calculated from equation (2); interestingly, the theoretical τ WW (red circle) from Supplementary Table 1 is close to the extrapolation of the fitting line, confirming the validity of equation (3). The derived activation energy E a ≈ 0.399 eV is in good agreement with the predicted migration barrier of ~0.417 eV and the 0.385 ± 0.143 eV inferred from the reported data of doped a-Si:H (ref. 40 ). Figure 5d shows that the E a of doped a-Si:H is noticeably smaller than that of the intrinsic counterpart [40][41][42][43][44] , most likely owing to the existence of the exclusive metastable hydrogen configurations in doped materials (as inferred from Figs. 2a and 3b). We notice that phosphorus-doped a-Si:H also has a smaller E a ; thus it is expected to make similar contributions to the light-induced effect. This speculation is supported by the light soaking behaviour of 'half' cells with the structure Ag/IWO/n-a-Si:H/i-a-Si:H/n-c-Si/IWO/Ag, where the p-a-Si:H is totally removed (Supplementary Fig. 14).
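A minimal sketch of the Arrhenius analysis behind equation (3): given τ WW values derived at several annealing temperatures via equation (2), E a follows from a linear fit of ln τ WW against 1/T. The numerical values below are illustrative assumptions, not the measured data.

```python
import numpy as np

R = 8.314        # molar gas constant, J mol^-1 K^-1
N_A = 6.022e23   # Avogadro constant, mol^-1
e = 1.602e-19    # J per eV

# Illustrative (T, tau_WW) pairs; T in kelvin, tau_WW in minutes (assumed).
T = np.array([333.0, 373.0, 413.0, 453.0, 473.0])
tau_WW = np.array([4.3e3, 9.7e2, 2.9e2, 1.1e2, 7.0e1])

# Arrhenius form: tau_WW = tau_0 * exp(E_a / (R * T)), so
# ln(tau_WW) is linear in 1/T with slope E_a / R.
slope, intercept = np.polyfit(1.0 / T, np.log(tau_WW), 1)
E_a = slope * R / (N_A * e)  # convert J mol^-1 to eV per atom

print("E_a = %.3f eV, tau_0 = %.2e min" % (E_a, np.exp(intercept)))
```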
It is interesting that doped a-Si:H thin films exhibit a light-induced behaviour opposite to that of p-type c-Si when oxygen atoms exist in the form of B s −O 2i complexes inside the crystalline matrix 45 . Given that doped a-Si:H has a small E a but a large τ WW at low temperatures, we conclude that cold climates can effectively prevent the decay of the metastable FF.
Conclusion
We observed light-soaking-induced enhancement of the dark conductance of boron-doped a-Si:H thin films, which is appealing for realizing outstanding optoelectronic devices. We show that light soaking promotes the diffusion and hopping of the weakly bound hydrogen atoms, which allows the activation of B−Si 4 doping. The light-soaking effect noticeably improves the charge carrier transport in SHJ solar cells, yielding an excellent FF of 85.42% (84.63%) and a PCE of 25.18% (25.45%) on a 244.63 cm 2 (226.71 cm 2 ) total-area (designated-area) wafer.

Methods

Hydrogen distributions were probed by TOF-SIMS, during which the chamber pressure, primary ion source and current were 1.0 × 10 −9 mbar, 30 keV Bi + and 1.0 pA, respectively; the depth profiles were acquired using a 500 eV Cs + sputter beam. Cross-sectional images of p-a-Si:H were probed by a high-resolution transmission electron microscope (FEI Titan 80-300ST) operated at 200 kV. Injection-dependent τ eff and PFF were measured by the Sinton WCT-120 and Suns-Voc, respectively. The Q s and N s are fitted from a surface recombination model by Olibet and colleagues, who discussed the details of the model and also provided the fitting codes in appendix A of Olibet's thesis 47 .
Ultrafast and broadband transient absorption spectra were measured using a home-built pump-probe set-up. The output of a titanium sapphire amplifier (Coherent LEGEND DUO, 4.5 mJ, 3 kHz, 100 fs) is split into three beams (2.0 mJ, 1.0 mJ and 1.5 mJ), two of which separately pump two optical parametric amplifiers (OPA; Light Conversion TOPAS Prime). TOPAS-1 provides tunable pump pulses and TOPAS-2 generates the probe pulses. A 1,300 nm pulse from TOPAS-2 is sent through a CaF 2 crystal mounted on a continuously moving stage. This generates white-light supercontinuum pulses from 350 nm to 1,100 nm. The pump pathway length is varied between 5.12 m and 2.60 m with a broadband retroreflector mounted on an automated mechanical delay stage (Newport linear stage IMS600CCHA controlled by a Newport XPS motion controller), thereby generating delays between pump and probe from −400 ps to 8 ns. Pump and probe beams are overlapped on the surface of the p-a-Si:H. Using a beam viewer (Coherent, LaserCam-HR II), we set the pump beam to be about three times larger than the probe beam. The probe beam is guided to a custom-made prism spectrograph (Entwicklungsbüro Stresing), where it is dispersed by a prism onto a 512 pixel complementary metal-oxide semiconductor linear image sensor (Hamamatsu G11608−512DA). The probe pulse repetition rate is 3 kHz, whereas the excitation pulses are mechanically chopped to 1.5 kHz (100 fs to 8 ns delays), and the detector array is read out at 3 kHz. These characterizations are also summarized in Supplementary Table 5.
The transient absorption signals are fitted by the one-dimensional recombination and diffusion model

∂N(x, t)/∂t = D ∂ 2 N(x, t)/∂x 2 − k 1 N − k 2 N 2 − k 3 N 3 .

N(x, t) is the carrier density, which is a function of the time t and the position x in the film, D is the diffusion coefficient, k 1 , k 2 , k 3 are the first-, second- and third-order bulk recombination constants, α is the absorption coefficient, and S f and S b are the front and back interface/surface recombination velocities. The front surface/interface is exposed to the pump laser beam. The general rate equation consists of the diffusion equation and includes the different recombination rates present in the bulk. The initial condition is set by the absorption profile of the pump pulse, N(x, 0) ∝ exp(−αx). N(x, t) is proportional to ΔT/T, N(x, t) = β ΔT/T, with a fitted prefactor β.
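To make the fitting model concrete, the following is a minimal explicit finite-difference sketch of the stated rate equation with Robin-type boundary conditions at the two surfaces; the film thickness, parameter values and grid settings are illustrative assumptions rather than the fitted values.

```python
import numpy as np

# Illustrative parameters (assumptions, not the fitted values).
L = 15e-7            # film thickness (cm), ~15 nm
D = 1e-4             # diffusion coefficient (cm^2 s^-1)
k1, k2, k3 = 1e8, 1e-9, 1e-28  # 1st-, 2nd-, 3rd-order recombination constants
Sf, Sb = 1e3, 1e3    # front/back surface recombination velocities (cm s^-1)
alpha = 1e5          # absorption coefficient (cm^-1)
N0 = 1e18            # peak initial carrier density (cm^-3)

nx = 60
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D           # explicit (FTCS) stability limit
x = np.linspace(0.0, L, nx)
N = N0 * np.exp(-alpha * x)    # initial condition: pump absorption profile

n_steps = 2000
for _ in range(n_steps):
    lap = (N[2:] - 2.0 * N[1:-1] + N[:-2]) / dx**2
    N[1:-1] += dt * (D * lap - k1 * N[1:-1] - k2 * N[1:-1]**2 - k3 * N[1:-1]**3)
    # Robin boundary conditions: D dN/dx = Sf N at x = 0 and -Sb N at x = L.
    N[0] = N[1] / (1.0 + Sf * dx / D)
    N[-1] = N[-2] / (1.0 + Sb * dx / D)

print("elapsed time: %.2e s, mean density: %.3e cm^-3" % (n_steps * dt, N.mean()))
```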
About 80 nm of IWO was grown on both sides of the devices by reactive plasma deposition (RPD, DS1-12080-SN-D13; Shenzhen S.C.) at 150 °C, whose target material is 1.0% tungsten-doped indium oxide. Nine silver busbars and fingers were screen printed on the two faces of the devices using low-temperature paste, followed by annealing at 150 °C for 5 min and 185 °C for 30 min. For the certified cell, an 80 nm SiO x layer was capped onto the sun-side surface in a 13.56 MHz radio-frequency PECVD (ULVAC CME-400).
Device characterization. Current-voltage characteristics of all solar cells without SiO x antireflection were tested under standard conditions (25 °C, 100 mW cm −2 ) using a solar simulator (Halm IV, ceitsPV-CTL2). The light intensity was calibrated using a certified National Renewable Energy Laboratory (NREL) reference cell. The submitted cell with SiO x antireflection was independently tested by the NPVM in the Fujian Province, China, one of the designated test centres for the solar cell efficiency tables. The device area was captured by an automatic image test system. Before certification, it was light soaked for 30 min under 1 sun illumination, followed by cooling down to room temperature. The cell with MgF 2 antireflection layer was independently tested by ISFH CalTeC, which experienced a 1 sun illumination before the measurement. The conveyor during light soaking was pre-heated to ~200 °C, and the light intensity was adjusted from 1 to 60 sun (ASIA NEO TECH INDUSTRIL Company, NLIDR-S60; red light). These cells were quickly cooled down by cold-air blowing after the light soaking. All devices were measured under standard conditions. These characterizations are also summarized in Supplementary Table 6.
Damp-heat degradation. The devices underwent 1,000 h damp-heat impact at DH85 in the dark, during which they were in open-circuit condition. These devices are six-inch SHJ solar cells without any encapsulations. The 60-cell module underwent 3,000 h damp-heat impact at DH85 in the dark according to the IEC 60068-2-78, during which it was in open-circuit condition. All measurements were conducted under standard conditions (25 °C, 100 mW cm −2 ), out of the climate test chamber.
Thermal cycle degradation. The thermal cycles were conducted in accordance with the IEC 61215-2:2016. The cycle temperature was between −40 °C and 85 °C. The applied current was 100% I mpp at the rising edge of temperature.
Simulation procedures and parameters of SHJ solar cells.
The rear-junction SHJ solar cell was modelled using traditional drift-diffusion models on the AFORS-HET device-simulation platform previously developed for heterojunction solar cells 50 . General parameters are listed in Supplementary Table 2. The effects of N s , N a and R s on the device performance are different. N s represents chemical passivation, which is dominated by the defect density at the a-Si:H/c-Si interfaces. Q s represents the surface charge density at the depletion region of the p-n junction, which cannot be directly used in the simulation; as an alternative, we use N a to represent the field passivation. R s represents the transport series resistance in the device; here its variation mainly stems from the bulk resistance of p-a-Si:H, as has been observed in Fig. 1a. Both N a and R s are dominated by the doping efficiency of boron in the p-a-Si:H film. N s is derived from modelling the τ eff of IWO/p-a-Si:H/i-a-Si:H/n-c-Si/i-a-Si:H/n-a-Si:H/IWO, measured before and after 70 s light soaking under 60-sun illumination. R s was measured with the solar simulator under standard conditions. N a is the only parameter optimized to match the experimental V oc , J sc , FF and PCE of SHJ solar cells before and after the 70 s 60-sun illumination. Samples A and E in Supplementary Tables 3 and 4 are control samples, whose performances are very close to those of the as-prepared and light-soaked (60 sun) SHJ solar cells, respectively. To distinguish the effects of N s , N a and R s on the performance of SHJ solar cells, samples B-D in Supplementary Tables 3 and 4 change N s , N a and R s one by one. By this means, we can determine the key factors that dominate the device-level light-induced changes.
Reporting Summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Xanthogranulomatous pancreatitis: A rare entity in the spectrum of pancreatic lesions, a case report
Introduction Xanthogranulomatous pancreatitis (XGP) is a rare, benign, and idiopathic disease that often presents with non-specific symptoms and can mimic or coexist with other pancreatic diseases. Despite its infrequency, XGP is frequently misdiagnosed as a pancreatic neoplasm, with only 15 reported cases in the literature. The pathogenesis of XGP remains unclear. Case report We present the case of a 34-year-old woman with no pathological history who experienced continuous abdominal pain and oral intolerance, without signs of cholestasis. An abdominal CT scan initially suggested a cystic neoplasm of the pancreas, leading to a laparotomic cephalic duodenopancreatectomy. The anatomopathological study and immunohistochemistry revealed XGP in association with a mucinous cystic neoplasm with mild to moderate atypia. The patient remained hospitalized for six days post-surgery without any complications. Discussion XGP may be induced by the inflammatory reaction secondary to the obstruction of the pancreatic duct by mucin. The etiology is unknown, but it is attributed to a combination of obstruction, hemorrhage, or ductal infection. Abdominal pain is the most common symptom. Differentiating XGP from malignant processes of the pancreatic gland is challenging. Surgical treatment typically involves the Whipple procedure; however, echoendoscopy with biopsy is now available for a more accurate and early differential diagnosis. Conclusion XGP is a rare and challenging differential diagnosis for pancreatic neoplasms. Due to its potential to mimic malignant lesions, a high index of suspicion is necessary. Echoendoscopy with fine-needle aspiration biopsy should be considered a routine diagnostic tool before major surgery, such as the Whipple procedure.
Introduction
Xanthogranulomatous pancreatitis (XGP) is a rare form of chronic pancreatitis characterized by the deposition of numerous foamy histiocytes in the pancreatic parenchyma, along with other inflammatory cells, cholesterol, and fibroblastic proliferation [1]. The etiology of XGP is currently unknown, but it is suspected to involve a combination of ductal obstruction, infection, and repeated intraductal bleeding [2]. To our knowledge, 15 cases have been reported in the literature up to 2022 (Table 1). In all of these cases, lesions that mimicked malignancy in imaging studies such as CT scans or magnetic resonance imaging (MRI) were discovered, leading to subsequent surgical treatment [1,[3][4][5]. Differential diagnoses include pancreatic pseudocyst, cystic mucinous neoplasm, and solid pseudopapillary neoplasm [1,2].
The objective of this case report is to highlight the simultaneous presence of two pathologies, xanthogranulomatous pancreatitis and a mucinous cystic neoplasm, which is rare and poses diagnostic challenges.
Methods
This case report has been reported in line with the SCARE criteria [6].
Case presentation
We present the case of a 34-year-old female patient with no personal pathological or surgical history and a BMI of 23 kg/m 2 , who presented to the emergency service of our private university clinic with a three-day history of abdominal pain located in the left flank, of moderate intensity, associated with oral intolerance, and with a history of similar episodes leading to significant weight loss of 7 kg in six months. In the admission laboratory, lipase was 256 U/l (NV 12-60 U/l), total bilirubin 0.52 mg/l (NV < 1 mg/l), aspartate aminotransferase (AST) 33 U/l (NV < 40 U/l), alanine aminotransferase (ALT) 16 U/l (NV < 41 U/l), C-reactive protein 2 mg/dl (NV < 5 mg/dl), and leukocytes 10 × 10 3 /μl (NV 3.8-10 × 10 3 /μl).
An abdominal ultrasound revealed a round image at the pancreatic head, with hypoechoic areas, measuring 4.6 × 3.9 cm. The ultrasound suggested a cystic lesion. This finding was further investigated with an abdominal CT scan with intravenous contrast, revealing a rounded, macrocystic, multiseptated lesion with defined edges, approximately 38 × 38 × 45.8 mm in diameter, located at the head of the pancreas. This lesion generated a slight mass effect on the pancreas and adjacent structures, without evidence of invasion of neighboring vascular structures. Dilation of the Wirsung duct was observed in its corporal and caudal portions, with a maximum diameter of 8.3 mm (Fig. 1). Additional blood work with tumor markers was then obtained: CA 19-9 of 18 U/ml (NV < 37 U/ml) and carcinoembryonic antigen (CEA) of 2.3 ng/ml (NV < 3.8 ng/ml).
Additional imaging studies were conducted with magnetic resonance imaging (MRI) of the abdomen, revealing a cystic lesion with defined edges, heterogeneous content, multiseptated with thick septa inside, and a 17-mm solid-appearing parietal image with enhancement in the head of the pancreas. Post-contrast enhancement and restriction in diffusion sequences were observed in this 40 × 38 × 44 mm lesion, along with dilation of the Wirsung duct and atrophy of the body and tail of the pancreas (Figs. 2A,B and 3A,B).
Due to the high suspicion of malignancy and the persistence of pain and vomiting, a cephalic pancreaticoduodenectomy (Whipple procedure) was performed by laparotomy, revealing a tumor occupying the pancreatic head without extrapancreatic disease (Fig. 4A,B). A sample from the distal pancreatic margin was taken for a frozen-section study, which was negative for tumor cells. The procedure lasted 4.5 h without immediate complications. Drains were removed on the fifth post-surgical day after confirming normal amylase and bilirubin values, and the patient was discharged the next day.
The final pathology report described a neoplastic proliferation composed of spindle cells with mild to moderate atypia, a variably dense stroma, frequent giant cells, and foamy histiocytes. The lesion was interspersed with the pancreatic parenchyma and dilated ducts (Figs. 5 and 6A,B). An immunostaining study confirmed xanthogranulomatous pancreatitis with a CD68+ marker in relation to a cystic mucinous neoplasm (Fig. 7). The patient remained asymptomatic and in excellent general condition, with no evidence of recurrence at the 24-month follow-up.
Discussion
Xanthogranulomatous pancreatitis is a rare form of chronic inflammation of the pancreatic duct resulting from its obstruction by mucin. Although its etiology remains unknown, a combination of ductal obstruction, infection, and repeated bleeding is suspected to contribute to its development [2]. It is characterized by the deposition of foamy histiocytes in the pancreatic parenchyma, along with the proliferation of fibroblasts and other inflammatory cells, and may also involve necrosis and hemorrhage [2]. XGP more frequently affects men and typically presents around the age of 60; however, in our case, the patient was a young adult woman [1]. Abdominal pain is the most common symptom, as reported in this case and in other studies [1][2][3]. Other less frequent symptoms include weight loss, acute pancreatitis, or jaundice [7], which were not observed in our patient.
The diagnosis is usually challenging because imaging studies can confuse this benign entity with a malignant process due to the absence of pathognomonic characteristics [3]. The differential diagnosis should include pancreatic pseudocyst, mucinous cystic neoplasm, neuroendocrine tumor, intraductal papillary mucinous neoplasm, and solid pseudopapillary neoplasm [1,2]. Unlike XGP, mucinous cystic neoplasm occurs more frequently in females [8] and is located in the pancreatic body and tail [9]. The initial treatment in all reported cases was surgical after an imaging diagnosis [1,[3][4][5], and surgery should be reserved for cases of adenocarcinoma or lesions with a high suspicion of malignancy, as with this entity. Because of this challenging diagnosis, XGP is documented in deferred anatomopathological and immunohistochemistry studies.
Echoendoscopy (EcoE) has emerged as a promising method for the diagnosis of XGP. This minimally invasive procedure combines endoscopy and ultrasound to obtain high-resolution images of the pancreas and surrounding structures. EcoE allows for the visualization of small lesions and fine-needle aspiration biopsy (FNAB) of suspicious areas, which can then be sent for pathological analysis. Studies have reported a sensitivity of 92-100% and a specificity of 89-100% for EcoE with FNAB in the diagnosis of xanthogranulomatous pancreatitis, making it a valuable tool for clinicians [1,8]. EcoE with FNAB has also demonstrated high diagnostic accuracy owing to the location of the tumor in relation to the duodenal papilla, allowing for an early diagnosis of this entity [8]. However, it is important to note that EcoE with FNAB is a highly specialized and expensive procedure that may not be available in all healthcare settings [10]. In this particular case, we did not have access to EcoE; however, this study would not have changed the outcome, because our patient remained in pain and with oral intolerance.

Fig. 1. CT scan in which we observe the lesion of the pancreatic head.
Kwon et al. described the imaging findings, showing that the most frequent main composition of the XGP-associated mass was cystic (in 60% of their cases), with an irregular thick wall, located in the pancreatic tail. The XGP appeared heterogeneous on the portal phase of CT and MRI. On contrast-enhanced dual-phase CT, the solid component of all lesions showed hypoenhancement on the arterial phase in comparison with the normal pancreas. This was also identified on dynamic contrast-enhanced MRI, where lesions were mostly hypointense or isointense, and hyperintense on the delayed phase [12].
On the other hand, Whipple surgery is considered the primary treatment option for pancreatic mucinous cystic neoplasms, particularly when they are situated in the pancreatic head, have a considerable size, and display dilation of the pancreatic duct. This is due to the high risk of malignancy, which ranges from 10 to 39%. In our patient's case, aside from the 2.5 cm lesion in the pancreatic head, pancreatic duct dilation was also observed, further justifying the decision to proceed with Whipple surgery to prevent the potential malignancy of the mucinous neoplasm.
Conclusion
XGP is a rare and challenging differential diagnosis for pancreatic neoplasms. Given its potential to mimic malignant lesions, a high index of suspicion is necessary. Echoendoscopy with fine-needle aspiration biopsy should be considered as a routine diagnostic tool before undertaking major surgery such as the Whipple procedure. Further studies are needed to better understand the etiology and optimal treatment of xanthogranulomatous pancreatitis.
Fig. 2. A,B: MRI in which we observe the multiseptated cystic lesion and the Wirsung duct dilation.

Fig. 3. A,B: MRI in which we observe the Wirsung dilatation and the atrophy of the pancreatic body and tail.

Fig. 6. A,B: Histology of pancreatic parenchyma with multinucleated giant cells and foamy histiocytes.

Table 1. Cases of XGP reported in the literature up to 2022.
Cervical spine epidural abscess caused by brucellosis: A case report and literature review
Abstract We report a rare case of epidural abscess at the cervical 5‐cervical 6 (C5–C6) levels. The patient underwent surgery with complete abscess removal through C6 vertebral body corpectomy. The result of bacteriological culture was Brucella melitensis. Brucellosis must be considered as a possible cause of epidural abscess in patients from endemic area.
INTRODUCTION
Brucellosis is a zoonotic bacterial infection caused by Brucella species, which spreads from animals to humans, most often via unpasteurized milk, cheese, and other dairy products. 1 More rarely, it can be spread through the air or through direct contact with infected animals. 2 The disease can lead to systemic involvement, and the most common complications are seen in the musculoskeletal system. Musculoskeletal complications include arthritis, bursitis, sacroiliitis, spondylitis, and osteomyelitis. Spinal epidural abscess is a rare complication in the course of spondylitis caused by Brucella species. 3 The lumbar vertebrae are the most common site for epidural abscesses, whereas involvement of the cervical spine is uncommon. Management of spinal epidural abscess is controversial. 4 Several cases of successful treatment with antibiotics alone have been reported, in particular in patients with a stable neurological condition; however, signs of spinal cord compression represent a neurosurgical emergency because of their potential for causing rapidly progressive paralysis. 5 In this paper, the authors report a rare case of spinal epidural abscess caused by brucellosis in Iran and discuss it.
CASE PRESENTATION
A 36-year-old male patient presented to our neurosurgery department at Imam Khomeini hospital with cervical pain and neck stiffness for about 2 months. At the time, there was a history of fever, fatigue, loss of appetite, malaise, and profuse night sweating for about 10 days. He had been medicated with a few NSAIDs. His medical history was negative for any underlying disease and surgery. He had no history of travel in recent months. On admission, physical examination revealed an initial blood pressure of 130/90 mm Hg, a body temperature of 37.8 °C, a heart rate of 84, and a respiratory rate of 16 breaths/minute. Neurological examination revealed motor weakness in shoulder abduction bilaterally, hypoesthesia in the C5-C6 dermatomes, hyperactive deep tendon reflexes, and no meningeal irritation signs.
Magnetic resonance imaging of the cervical spine demonstrated spondylitis and epidural abscess formation with a craniocaudal length of 3 cm at the C5-C6 levels. A decreased spinal canal diameter and minimal spinal cord compression were present secondary to the epidural abscess (Figure 1). Results of blood tests revealed a white blood count (WBC) of 14,200, hemoglobin of 13.1 g/dl, an erythrocyte sedimentation rate (ESR) of 33, and CRP of 1.3 mg/l. Blood cultures were negative. We tried to rule out tuberculosis and fungal infection; however, the tuberculin skin test was normal and the patient did not appear immunodeficient. He was sexually active with his 35-year-old wife. He denied current or previous tobacco use. All relevant immunizations were up to date.
Because of the signs of spinal cord compression, we decided to remove the abscess and perform decompression of the spinal cord. The patient underwent anterior cervical surgery with complete abscess removal through C6 vertebral body corpectomy. Reconstruction was then done by insertion of a corpectomy mesh cage and cervical plate fixation (Figure 2). The result of the bacteriological culture was Brucella melitensis, and pathologic findings were negative for malignancy. After consultation with an infectious disease specialist, effective antibiotic therapy with doxycycline (100 mg twice daily) and rifampin (600 mg once daily) was given for 6 months. After surgery, the patient made a good recovery without new neurological deficits and without recurrence. At the 6-month follow-up, the patient had recovered from all of the previous symptoms.
The present case report conformed to the provisions of the Declaration of Helsinki (as revised in 2013). Written informed consent has been obtained from the patient regarding the processing of personal information and the publication of medical data.
DISCUSSION
Approximately 500,000 cases of brucellosis are reported annually worldwide, most of which occur in developing countries. 6 Brucella species are responsible for only 0.1% of cases of spinal epidural abscess. Risk factors include immune-compromised states such as diabetes mellitus, alcoholism, chronic renal failure, cancer, and acquired immunodeficiency syndrome, as well as spinal procedures including epidural anesthesia or analgesia and spinal surgery or trauma. 7 No predisposing conditions or specific risk factors for infection were found in the present case, and his family history was negative for brucellosis and other infections.
Although there have been many reports of brucellar infection involving the vertebral body or intervertebral disk, there are few previous reports of brucellar infection involving the epidural space. Cervical spinal epidural abscess due to brucellosis is a rare condition that is difficult to diagnose and can lead to disastrous neurological and vascular complications if left untreated. 8 Hence, early diagnosis and initiation of appropriate medical and surgical treatment are lifesaving.
In cases of spinal epidural abscess due to Brucella infection, spinal pain on palpation, local tenderness, or fever can be seen clinically. However, these findings are not specific. 9 In our case, the symptoms were cervical pain, neck stiffness, fever, fatigue, loss of appetite, malaise, and profuse night sweating.
Management of spinal epidural abscess due to Brucella species is not standardized and remains controversial. To the best of our knowledge, no case of spinal epidural abscess associated with brucellosis has previously been reported in Iran. Here, we present a rare case of cervical spinal epidural abscess caused by brucellosis treated with both surgery and antibiotic therapy. Alyousefi et al. 10 reported the first case of cervical epidural abscess caused by brucellosis in Saudi Arabia. Their case presented with fever and back pain and was successfully treated with antibiotic therapy alone for 6 months. Boyaci et al. 11 reported a case of brucellar spinal epidural abscess in Turkey presenting with night sweats and lumbar spondylodiscitis, which was treated with an antibiotic regimen without surgery.
Song et al. 12 reported a case of cervical spine epidural abscess causing neurological deficits, treated with anterior cervical spine discectomy, fusion with an iliac strut, and antimicrobial therapy. In the current case, we used a corpectomy mesh cage for anterior column reconstruction, and solid fusion was achieved at the follow-up visit.
Diagnosis of Brucella infection requires isolation of the bacterium from blood or tissue samples. The optimal antibiotic regimen and duration of treatment for brucellar spinal abscess are still controversial. In general, combination therapy with doxycycline plus streptomycin for at least 12 weeks is the most widely accepted. 13 Close follow-up of patients is important to ensure complete treatment.
CONCLUSIONS
Brucellar spinal epidural abscess is a rare condition that is difficult to diagnose and can lead to serious complications if left untreated. Brucellosis must be considered as a possible cause of spinal epidural abscess in patients from endemic areas.
The ASAS-SN Catalog of Variable Stars V: Variables in the Southern Hemisphere
The All-Sky Automated Survey for Supernovae (ASAS-SN) provides long baseline (${\sim}4$ yrs) light curves for sources brighter than V$\lesssim17$ mag across the whole sky. As part of our effort to characterize the variability of all the stellar sources visible in ASAS-SN, we have produced ${\sim}30.1$ million V-band light curves for sources in the southern hemisphere using the APASS DR9 catalog as our input source list. We have systematically searched these sources for variability using a pipeline based on random forest classifiers. We have identified ${\sim} 220,000$ variables, including ${\sim} 88,300$ new discoveries. In particular, we have discovered ${\sim}48,000$ red pulsating variables, ${\sim}23,000$ eclipsing binaries, ${\sim}2,200$ $\delta$-Scuti variables and ${\sim}10,200$ rotational variables. The light curves and characteristics of the variables are all available through the ASAS-SN variable stars database (https://asas-sn.osu.edu/variables). The pre-computed ASAS-SN V-band light curves for all the ${\sim}30.1$ million sources are available through the ASAS-SN photometry database (https://asas-sn.osu.edu/photometry). This effort will be extended to provide ASAS-SN light curves for sources in the northern hemisphere and for V$\lesssim17$ mag sources across the whole sky that are not included in APASS DR9.
INTRODUCTION
Recent large-scale sky surveys such as the All-Sky Automated Survey (ASAS; Pojmanski 2002), the Optical Gravitational Lensing Experiment (OGLE; Udalski 2003), the Northern Sky Variability Survey (NSVS; Woźniak et al. 2004), MACHO (Alcock et al. 1997) and EROS (Derue et al.) have been used in numerous astronomical contexts. Pulsating variables such as Cepheids and RR Lyrae stars are commonly used as distance indicators as they follow distinct period-luminosity relationships (e.g., Leavitt 1908; Matsunaga et al. 2006; Beaton et al. 2018, and references therein). Eclipsing binary stars are excellent probes of stellar systems and, with sufficient radial velocity follow-up, allow for the derivation of fundamental stellar parameters, including the masses and radii of the stars in these systems (Torres et al. 2010). Variable stars are also useful for the study of stellar populations and Galactic structure (Matsunaga 2018; Feast & Whitelock 2014).
The All-Sky Automated Survey for SuperNovae (ASAS-SN, Shappee et al. 2014; Kochanek et al. 2017) monitored the visible sky to a depth of V ≲ 17 mag with a cadence of 2-3 days using two units in Chile and Hawaii, each with 4 telescopes. Starting in 2017, ASAS-SN expanded to 5 units with 20 telescopes. All the current ASAS-SN units are equipped with g-band filters and are currently monitoring the sky to a depth of g ≲ 18.5 mag with a cadence of ∼1 day. The ASAS-SN telescopes are hosted by the Las Cumbres Observatory (LCO; Brown et al. 2013) in Hawaii, Chile, Texas and South Africa. ASAS-SN primarily focuses on the detection of bright supernovae (e.g., Holoien et al. 2017, 2018a), tidal disruption events (e.g., Holoien et al. 2014, 2016, 2018b) and other transients (e.g., Tucker et al. 2018; Rodríguez et al. 2018), but its excellent baseline allows for the characterization of stellar variability across the whole sky. ASAS-SN team members have also studied the relative specific Type Ia supernova rates and the largest-amplitude M-dwarf flares seen in ASAS-SN.
In Paper I (Jayasinghe et al. 2018a), we reported ∼66,000 new variables that were flagged during the search for supernovae, most of which are located in regions close to the Galactic plane or the Celestial poles that were not well sampled by previous surveys. In Paper II (Jayasinghe et al. 2019a), we uniformly analyzed ∼412,000 known variables from the International Variable Star Index (VSX; Watson et al. 2006), and developed a robust variability classifier utilizing the ASAS-SN V-band light curves and data from external catalogues. As data from the Transiting Exoplanet Survey Satellite (TESS; Ricker et al. 2015) became available, we have explored the synergy between the two surveys. ASAS-SN provides long-baseline (∼4 yr) light curves sampled at a cadence of ∼1−3 days that complement the high-cadence TESS light curves. In Paper III (Jayasinghe et al. 2019b), we characterized the variability of ∼1.3 million sources within 18 deg of the Southern Ecliptic Pole towards the TESS continuous viewing zone and identified ∼11,700 variables, including ∼7,000 new discoveries. We also identified the most extreme heartbeat star system known thus far, and characterized the system using both ASAS-SN and TESS light curves (Jayasinghe et al. 2018d). We have also explored the synergy between ASAS-SN and APOGEE (Holtzman et al. 2015) with the discovery of the first likely non-interacting binary composed of a black hole and a field red giant (Thompson et al. 2018), and we identified 1924 APOGEE stars as periodic variables in Paper IV (Pawlak et al. 2019). We have also identified rare variables, including 2 very long period detached eclipsing binaries (Jayasinghe et al. 2018b,c) and 19 R Coronae Borealis stars (Shields et al. 2018).
Here, we extracted the ASAS-SN light curves of ∼30.1 million sources from the AAVSO Photometric All-Sky Survey (APASS; Henden et al. 2015) DR9 catalog with V < 17 mag in the southern hemisphere (δ < 0 deg). In this work, we systematically search this sample for variable sources.
In Section §2, we discuss the ASAS-SN observations and data reduction procedures. Section §3 discusses the variability search and classification procedures. In Section §4, we discuss our results and present a summary of our work in Section §5. All the light curves of these sources are made available to the public through our online database.
OBSERVATIONS AND DATA REDUCTION
We started with the APASS DR9 catalog as our input source catalog. We selected all the APASS sources with V < 17 mag in the southern hemisphere (δ < 0 deg), excluding the ∼1.3M sources towards the Southern Ecliptic Pole which were analyzed in Paper III. This resulted in a list of ∼30.1M sources. Figure 1 illustrates the spatial distribution of these sources. ASAS-SN V-band observations were made by the "Brutus" (Haleakala, Hawaii) and "Cassius" (CTIO, Chile) quadruple telescopes between 2013 and 2018. Each ASAS-SN field has ∼200-600 epochs of observation to a depth of V ≲ 17 mag. Each camera has a field of view of 4.5 deg², the pixel scale is 8.0″ and the FWHM is ∼2 pixels. ASAS-SN nominally saturates at ∼10-11 mag, but light curves of saturated sources are sometimes quite good due to corrections made for bleed trails (see Kochanek et al. 2017). The light curves for these sources were extracted as described in Jayasinghe et al. (2018a) using image subtraction (Alard & Lupton 1998; Alard 2000) and aperture photometry on the subtracted images with a 2 pixel radius aperture. The APASS catalog was also used for calibration. The zero-point offsets between the different cameras were corrected as described in Jayasinghe et al. (2018a), and the photometric errors were recalculated as described in Jayasinghe et al. (2019b).
While we decided to use the APASS DR9 catalog as our input source list due to its all-sky coverage, this catalog has several shortcomings (Henden et al. 2015; Marrese et al. 2019). Although the APASS DR9 sky coverage is nearly complete, there are regions towards the Galactic plane that are missing (see Figure 1). In addition, the DR9 catalog includes a number of duplicate entries, which appear to be caused by the merging process, where poor astrometry in a given field may cause two centroids to be included for a single source. Centroiding in crowded fields is also poor, and blends cause both photometric and astrometric errors. The APASS DR9 catalog does not provide unique identifiers, so we used the VizieR recno field as a unique identifier. To address the issue of incomplete sky coverage, we will use the ATLAS All-Sky Stellar Reference Catalog (Tonry et al. 2018b) in the next paper to produce light curves for the missing sources in APASS DR9.
VARIABILITY ANALYSIS
Here we describe the procedure we used to identify and characterize variables in the source list. We describe how we cross-matched the APASS sources to external catalogues in Section §3.1. In Section §3.2, we describe the procedure we took to identify candidate variable sources. In Section §3.3, we discuss the application of the V2 random forest classifier model from Jayasinghe et al. (2019a) to classify these variables; in Section §3.4, we discuss the corrections done to mitigate the effects of blending on the candidate variables; and in Section §3.5, we discuss the quality checks that we used to improve the final variables catalog.
Cross-matches to external catalogs
We cross-match the APASS sources with Gaia DR2 (Gaia Collaboration et al. 2018a) using a matching radius of 5.0″. The sources were also cross-matched to the Gaia DR2 probabilistic distance estimates from Bailer-Jones et al. (2018). Even though we used a liberal matching radius, ∼84% (∼94%) of the sources have a cross-match in Gaia DR2 within 2.0″ (3.0″). We also cross-match the sources with 2MASS (Skrutskie et al. 2006) and AllWISE (Cutri et al. 2013; Wright et al. 2010) using a matching radius of 10.0″. We used TOPCAT (Taylor 2005) to cross-match the APASS sources with the Gaia DR2, 2MASS and AllWISE catalogs. Sources in the Small Magellanic Cloud (SMC) are also included in our input source list. We used Gaia DR2 (Gaia Collaboration et al. 2018c) to identify ∼1,600 sources from our source list that are SMC members. For sources in the SMC, we use a distance of d = 62.1 kpc (Graczyk et al. 2014) in our variability classifier. The LMC was covered in Paper III.
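The matching itself was done with TOPCAT, but an equivalent nearest-neighbour cross-match is easy to script; the following is a minimal sketch with astropy, where the coordinate arrays are illustrative placeholders rather than survey data:

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

# Placeholder coordinate arrays (degrees) standing in for APASS and Gaia DR2
apass = SkyCoord(ra=np.array([10.68, 56.75]) * u.deg,
                 dec=np.array([-41.27, -24.12]) * u.deg)
gaia = SkyCoord(ra=np.array([10.68, 56.75, 83.82]) * u.deg,
                dec=np.array([-41.27, -24.12, -5.39]) * u.deg)

# Nearest Gaia neighbour for every APASS source
idx, d2d, _ = apass.match_to_catalog_sky(gaia)

# Keep matches within the 5.0 arcsec radius used in the text
good = d2d < 5.0 * u.arcsec
```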
Random Forest Variable Identification
In Paper III we used several methods, including linear cuts on periodogram statistics, light curve features and external photometry, to identify variable sources. Here, we take a different approach by training and applying a random forest classifier to distinguish candidate variables from constant sources. We built a variability classifier based on a random forest model using scikit-learn (Pedregosa et al. 2012; Breiman 2001). The set of variable sources used to train this classifier consisted of ∼302,000 variables from Papers II and III with definite classifications. Variables with uncertain classifications, including 'VAR' and 'ROT:', were not included in this list as they reduced the accuracy of the final random forest classification model. The set of constant sources in the training list consisted of ∼600,000 sources randomly selected from the list of constant sources in Paper III.
The goal was to provide classifications into two broad groups: CONST (constant stars) and VAR (potential variables). The potential variables will be analyzed in further detail, so it is more important not to lose real variables than to accidentally include non-variables. These broad classes were selected to reduce the complexity of the classifier and to provide an accurate initial separation prior to reclassifying the variable sources with the random forest variable type classifier from Paper II. To generate periodicity statistics, we used the astropy implementation of the Generalized Lomb-Scargle (GLS; Zechmeister & Kürster 2009; Scargle 1982) periodogram to search for periodicity over the range 0.05 ≤ P ≤ 1000 days in all ∼30.1M light curves. We utilize the best GLS period, its false alarm probability (FAP) and the power of the best GLS period as features. The complete list of 20 features and their importances to the random forest classifier is summarized in Table 1. Feature importances are calculated as Gini importances using the mean decrease in impurity algorithm (Pedregosa et al. 2012).
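For a single light curve, the three GLS features used here (best period, its power, and its false alarm probability) can be obtained with the astropy implementation mentioned above; a minimal sketch on synthetic placeholder data:

```python
import numpy as np
from astropy.timeseries import LombScargle

# Synthetic light curve standing in for an ASAS-SN V-band light curve
t = np.sort(np.random.uniform(0.0, 1500.0, 300))        # days
mag = 12.0 + 0.3 * np.sin(2.0 * np.pi * t / 7.3)
mag += np.random.normal(0.0, 0.02, t.size)
err = np.full_like(t, 0.02)

ls = LombScargle(t, mag, err)
# Period search over 0.05 <= P <= 1000 d, i.e. frequencies in [1/1000, 20] per day
freq, power = ls.autopower(minimum_frequency=1.0 / 1000.0,
                           maximum_frequency=1.0 / 0.05)

best = np.argmax(power)
ls_period = 1.0 / freq[best]                       # best GLS period
ls_power = power[best]                             # the "LS Pow" feature
ls_fap = ls.false_alarm_probability(ls_power)      # FAP of the best peak
```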
We set the number of decision trees in the forest as n_estimators=1000, pruned the trees at a maximum depth of max_depth=16 to prevent over-fitting, set the number of samples needed to split a node as min_samples_split=10 and set the number of samples at a leaf node as min_samples_leaf=5. To further reduce overfitting, weights were assigned to each class by initializing class_weight='balanced_subsample'. These parameters were optimized using cross-validation to maximize the overall F1 score of the classifier. For any given source, the RF classifier assigns classification probabilities Prob(Const) and Prob(Var) = 1 − Prob(Const). The output classification of the RF classifier is the class with the highest probability. The training sample was split for training (80%) and testing (20%) in order to evaluate the performance of the RF classifier. We illustrate the ability of the RF model to classify new objects with the confusion matrix shown in Figure 2 (the normalized confusion matrix derived from the final version of the trained random forest classifier; the y-axis corresponds to the 'input' classification, while the x-axis is the 'output' prediction obtained from the trained random forest model). The greatest confusion (2%) arises from input variable sources that are subsequently classified as constant stars. The performance of the classifier is summarized in Table 2. The overall F1 score for the classifier is 98.5%. We applied the trained random forest classifier to the entire sample of ∼30.1M sources and identified 3,553,235 candidate variables. The distinction between the constant sources and the candidate variables is illustrated in Figure 3 through the distributions of the four features with the largest importance: LS Pow, T(φ|P), J − Ks and log(LS FAP). We find that candidate variable sources are strongly periodic, as is illustrated by high values of LS Pow and smaller values of log(LS FAP) and T(φ|P). In addition, the distribution of the 2MASS color J − Ks differs significantly between constant and variable sources. Variable sources are skewed towards redder NIR colors with J − Ks > 1 mag, while constant sources largely peak around J − Ks ∼ 0.5 mag. Cooler, evolved stars are more likely to be variable, so this is not unexpected.
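A minimal sketch of the classifier configuration with the hyperparameters quoted above; the feature matrix and labels below are random placeholders, the real training set being the one described in this section:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Placeholder data: 20 features per source, labels CONST/VAR
X = np.random.rand(5000, 20)
y = np.random.choice(['CONST', 'VAR'], size=5000)

# 80%/20% train/test split, as in the text
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

rf = RandomForestClassifier(n_estimators=1000, max_depth=16,
                            min_samples_split=10, min_samples_leaf=5,
                            class_weight='balanced_subsample')
rf.fit(X_train, y_train)

print('F1:', f1_score(y_test, rf.predict(X_test), pos_label='VAR'))
# Prob(Var) for each test source
prob_var = rf.predict_proba(X_test)[:, list(rf.classes_).index('VAR')]
```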
Variability Classification
Once candidate variables are identified, we aim to classify these sources into the various standard classes of variable stars. We use the variability classifier implemented in Jayasinghe et al. (2019a), which consists of a random forest classifier plus several refinement steps. Given the large number of candidates, we changed our variability classification strategy as follows.
• Initially, we classified all the candidate variables using just the GLS periods derived in §3.2.
• Following this, we derive periods for a limited set of sources (see below) using the astrobase implementation (Bhatti et al. 2018) of the Box Least Squares (BLS, Kovács et al. 2002) periodogram to improve the completeness for eclipsing binaries whose periodicity cannot be easily identified with GLS.
We also run the variability classifier twice, once using the best period (GLS or BLS) and once using twice the best period. The final classification is the one which yields the greatest classification probability. This step greatly improves the separation of EW type eclipsing binaries from RRC variables, and also improves upon the efficiency of the automated period doubling algorithm that was used for eclipsing binaries in Paper II.
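The period-doubling step amounts to classifying each source twice and keeping the more probable result; schematically, where classify() is a hypothetical placeholder for the Paper II classifier:

```python
def best_classification(light_curve, period, classify):
    """Classify at the best period P and at 2P; keep the more probable result.

    `classify(light_curve, period)` stands in for the Paper II variability
    classifier and is assumed to return (label, probability).
    """
    label_p, prob_p = classify(light_curve, period)
    label_2p, prob_2p = classify(light_curve, 2.0 * period)
    if prob_2p > prob_p:
        return label_2p, prob_2p, 2.0 * period
    return label_p, prob_p, period
```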
Blending Corrections
The large pixel scale of the ASAS-SN images (8.0″) and the FWHM (∼16.0″) result in blending towards crowded regions. The APASS catalog was constructed with images that have a significantly smaller pixel scale (2.6″), so multiple APASS sources can fall into a single ASAS-SN pixel. We do not correct for the contaminating light in the photometry of the blended sources, but we identify and correct blended variable groups in our catalog.
Since we extracted light curves at the positions of APASS sources, we can have two or more APASS sources inside a single ASAS-SN resolution element. This is further exacerbated towards low Galactic latitudes by crowding. If we select the sources with another APASS neighbor within 30.0″, we find that ∼1.1M of the ∼3.6M candidate variables had such a neighbor. We compute the flux variability amplitudes for these sources using a random forest regression model (Jayasinghe et al. 2019a). The majority of the variable "groups" consisted of two sources, with a few groups consisting of three or more sources. For each variable group, we consider the source with the largest flux variability as the 'true' variable and remove the other overlapping sources from the final list. Following this treatment, our list of candidate variables consisted of ∼3M sources.
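A rough sketch of this grouping step (pair up candidates within 30″ and keep the larger-amplitude source) using astropy; the coordinates and amplitudes are illustrative placeholders, and the real pipeline may group the sources differently:

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

# Placeholder candidate variables: positions (deg) and flux variability amplitudes
ra = np.array([10.0000, 10.0020, 50.0000])
dec = np.array([-30.0000, -30.0010, -45.0000])
amp = np.array([0.8, 0.3, 1.2])

coords = SkyCoord(ra=ra * u.deg, dec=dec * u.deg)
# All pairs of candidates separated by less than 30 arcsec (includes self-pairs)
i1, i2, _, _ = coords.search_around_sky(coords, 30.0 * u.arcsec)

keep = np.ones(ra.size, dtype=bool)
for a, b in zip(i1, i2):
    if a != b:
        # Within a blended pair, drop the source with the smaller amplitude
        keep[a if amp[a] < amp[b] else b] = False
```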
Quality Checks
At this stage, visual review of a random set of light curves suggested that quality checks were needed to distinguish true variability signals from variability due to bad photometry and other survey-specific issues (e.g., shutter failures). In Paper III, given the significantly shorter list of candidate variables, this was accomplished through simple visual review of the light curves. In this work, given the sheer number of sources, visual review is not a feasible option. Thus, we implement various criteria in lieu of visual review to distinguish the true variables from the 'noise'. We first restrict the list to sources with Vmean > 10 mag, A > 0.05 mag and T(t) < 0.9. We implemented the cut on the ASAS-SN V-band magnitude to minimize noise due to saturation artifacts. We also calculate the ratio α between the amplitude estimated by random forest regression (A) and the IQR (Table 1) of the light curve, along with the reddening-free Wesenheit magnitudes (Madore 1982; Lebzelter et al. 2018) for each source. The Wesenheit magnitudes are used in the pipeline from Paper II to refine variable type classifications. The quantity α can be used to identify light curves with significant outliers, as we expect α ≈ 2 for most sources. The criteria used in lieu of visual review are summarized in Table 3. We note that these criteria are applied in addition to the refinement criteria in Paper II; they are not replacements but additional quality checks intended to improve the purity of our catalog. In addition to the criteria summarized in Table 3, we further scrutinize sources with periods that are close to aliases of a sidereal day (e.g., P ≈ 1 d, P ≈ 2 d, P ≈ 30 d, etc.). This is accomplished by tightening the criteria on T(φ|P), log(LS FAP), LS Pow and δ. This process slightly reduces the completeness of our catalog at these periods but greatly reduces the number of false positives. In addition, we removed QSO contaminants from this list by cross-matching our variables to the Liao et al. (2019) catalog of known QSOs using a matching radius of 5.0″. We identified 336 cross-matches, of which 325 were classified as YSO variables in our pipeline. At this point we had ∼247,200 sources nominally classified as variable stars.
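The basic cuts reduce to simple boolean masks over per-source statistics; a sketch, where the arrays and the exact α window are illustrative placeholders (the real thresholds being those of Table 3):

```python
import numpy as np

# Placeholder per-source statistics from the variability pipeline
v_mean = np.array([9.5, 12.3, 14.1])    # mean V magnitude
amp = np.array([0.40, 0.02, 0.30])      # RF-regression amplitude A
t_stat = np.array([0.30, 0.20, 0.95])   # T(t) statistic
alpha = np.array([2.1, 1.9, 6.0])       # alpha = A / IQR

# Basic cuts from the text: V_mean > 10 mag, A > 0.05 mag, T(t) < 0.9
good = (v_mean > 10.0) & (amp > 0.05) & (t_stat < 0.9)
# Flag outlier-dominated light curves where alpha is far from the expected ~2
good &= np.abs(alpha - 2.0) < 2.0
```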
We inspected 5000 randomly selected sources classified as non-variable and the same number classified as variable. This was partly a sanity check, but it was also driven by the concern that the large size of our initial list (∼10% of the sources) suggested that our false positive rates had to be higher than implied by Table 2. Among the non-variable sources, we identified only 3 (∼0.06%) that might be low-level variables, which suggests that we are missing few variables that can be detected in this data. For the variable stars, we found significant contamination in the following variable classes: GCAS (∼50%), L (∼25%), VAR (∼45%) and YSO (∼13%). The implied false positive rate for the variable sources was ∼5.9% at this point.
Light curves that are contaminated by systematics tend to be classified as irregular or generic variables, as they are inherently aperiodic in nature. Thus, we decided to review all ∼32,800 sources that were classified as L, VAR, GCAS, or YSO to improve the purity of our catalog. Initial results suggested that L variables with T(t) > 0.65 were dominated by noise, so we rejected ∼14,300 such sources without further visual review. We visually reviewed the remaining ∼18,500 sources, rejected ∼12,600 (∼68%) of them, and retained only ∼5,900 in the final catalog. When we carried out a new inspection of 5000 randomly selected variables, the false positive rates were now EA (∼1.4%), L (∼0.6%), SR (∼2.6%), and VAR (∼0.9%). This implies an overall false positive rate for the final catalog of variable sources of ∼1.3%.
After these criteria are applied, we end up with a list of ∼220,000 variables. This means that our initial candidate list had a false positive rate of ∼93%. The larger than expected false positive rate is partly due to a biased training set in the source classifier. The training set of constant sources was derived from a region of the sky away from the Galactic plane. The increased crowding and blending towards the Galactic plane will systematically affect constant stars at low latitudes and introduce spurious variability signals into their light curves, and our classifier will identify these constant sources as candidate variables. In addition, sources in the vicinity of bright, saturated stars in our data are likely to have spurious variability signals in their image-subtraction light curves due to the corrections made for bleed trails (see Kochanek et al. 2017). This effect is again exacerbated towards the Galactic plane.
RESULTS
The complete catalog of ∼220,000 variables and their light curves are available at the ASAS-SN Variable Stars Database (https://asas-sn.osu.edu/variables), along with the V-band light curves for each source. Most of the known variables identified in this work were already added to the Variable Stars Database in Paper II. We have overhauled the web interface for the ASAS-SN Variable Stars Database to include interactive light curve plotting and photometry from Gaia DR2, APASS DR9, 2MASS and AllWISE. Table 4 lists the number of sources of each variability type in the catalog.
We cross-matched our variables against catalogs of known variable stars, including the catalog of WISE variables (Chen et al. 2018) and the variables from MACHO (Alcock et al. 1997). Of the ∼220,000 variables identified in this work, ∼131,900 were previously discovered by other surveys, and ∼88,300 are new discoveries, as also listed in Table 4.
It is evident that previous surveys, including our discoveries from Paper I, successfully discovered sources that vary with large amplitudes or are strongly periodic. Most (∼54%) of our new discoveries are red pulsating variables. We also discover a large number of binaries and rotational variables, amounting to ∼26% and ∼12% of the newly discovered variable sources, respectively. It is also noteworthy that we discover many more δ Scuti sources than previously known. These variables are particularly interesting as they pulsate at high frequencies (P < 0.3 d) and are located towards the lower end of the instability strip (Breger 1979). δ Scuti variables are also known to follow a period-luminosity relationship (Lopez de Coca et al. 1990).
The Wesenheit W_RP vs. G_BP − G_RP color-magnitude diagram for all the variables with excellent variable type classification probabilities (Prob > 0.9) is shown in Figure 4. Generic and uncertain variable types are not shown. We have sorted the variables into groups to highlight the different classes of variable sources. A similar Wesenheit W_RP vs. G_BP − G_RP color-magnitude diagram for all the newly discovered variables, separated by probability, is shown in Figure 5. The sharp cutoffs seen in the sample of semi-regular variables with Prob < 0.9 are inherited from the variable type refinements from Paper II. Most variables with Prob < 0.9 are located in similar areas of the CMD as the variables with Prob > 0.9. However, we note two interesting clusters of these low-probability variables at (G_BP − G_RP, W_RP) ∼ (2.5, −4.5) and (0.75, 1.8), corresponding to semi-regular and rotational variables respectively. The Wesenheit W_RP vs. G_BP − G_RP color-magnitude diagram for all the variables with Prob > 0.9, with the points colored according to period, is shown in Figure 6. This highlights the large dynamic range in period probed by the ASAS-SN light curves. Owing to the ASAS-SN survey cadence and our long time baseline, we are able to probe both short-period variability (P < 0.1 d) and long-period variability (P > 1000 d). The ASAS-SN survey continues to monitor the sky in the g-band, which lends itself well to the analysis of long-term trends and unusual variability. As a testament to this, Jayasinghe et al. (2019c) noted a sudden dimming episode (a flux reduction of ∼70% in the g-band) in an APASS source (ASASSN-V J213939.3−702817.4) that was non-variable for ∼1800 d. This source was classified as a constant source in this work.
The combined Wesenheit W_JK period-luminosity relationship (PLR) diagram for the periodic variables with Prob > 0.9 is shown in Figure 7. The PLR sequences for the Cepheids are well defined (Soszynski et al. 2005). Sharp PLR sequences can also be seen for the δ Scuti variables and contact binaries. The Mira variables also form a distinct PLR sequence beyond P > 100 d. The slight deficits of variables at the aliases of a sidereal day (e.g., P ≈ 1 d, P ≈ 2 d, P ≈ 30 d, etc.) are due to the quality checks implemented in §3.5.
The period-amplitude plot for the periodic variables with Prob > 0.9 is shown in Figure 8. The high prior completeness of the Mira, RR Lyrae and Cepheid variables is evident. We do not discover many of these variables in this work. The large majority (∼98.7%) of the new discoveries are of different variable types with smaller variability amplitudes and/or weak periodicity.
We also examine the period-color relationship of the variables in the W1 − W2 color space in Figure 9. Most variables have W1 − W2 ∼ 0, but the infrared excess increases with increasing period for the long-period variables. This is even more dramatic for the Mira variables that are on the asymptotic giant branch (AGB). Dust formation is commonly traced through infrared excesses. Our findings agree with McDonald et al. (2018), for example, in that strong mass loss and increased dust formation first occur at pulsation periods of P ≳ 60 d for Galactic stars.
As an external check of our classifications, we used data from our cross-match to Gaia DR2 to produce Figure 10. We define a "variability" color β = phot_bp_mean_flux_error / phot_rp_mean_flux_error, which is a measure of the difference in variability between the bluer and redder Gaia bands, and compare it to the inverse of the quantity phot_rp_mean_flux_over_error, which is a measure of the mean signal-to-noise ratio. The different groups of variables fall in distinct regions, with red pulsating variables having smaller values of β than bluer variables. Comparing the known variables and the new discoveries, we find that the new discoveries mostly fall in the same regions as the known variables. This provides an independent confirmation of the purity of the newly discovered variables and validates our quality assurance methodology in §3.5. Examples of the newly identified periodic variables are shown in Figure 11, and examples of the newly discovered irregular variables are shown in Figure 12. The light curves of the red giant pulsators, including the irregular variables, are complex and often multi-periodic, which requires further Fourier analysis. In order to better understand these pulsating red giants, Percy & Fenaux (2019) recommended a more detailed analysis, combining visual inspection of the light curves with a more advanced period analysis, in lieu of the automated classification used by ASAS-SN.
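The two quantities plotted in Figure 10 follow directly from the quoted Gaia DR2 column names; for example, with placeholder values:

```python
import pandas as pd

# Placeholder rows with the Gaia DR2 columns named in the text
gaia = pd.DataFrame({
    'phot_bp_mean_flux_error': [120.0, 15.0],
    'phot_rp_mean_flux_error': [60.0, 30.0],
    'phot_rp_mean_flux_over_error': [250.0, 900.0],
})

beta = gaia['phot_bp_mean_flux_error'] / gaia['phot_rp_mean_flux_error']
inv_snr = 1.0 / gaia['phot_rp_mean_flux_over_error']  # x-axis of Figure 10
```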
We illustrate the sky distribution of the newly discovered variables in Figure 13. Most of these discoveries are clustered towards the Galactic disk, as expected. We note the scarcity of high-amplitude Mira, RR Lyrae and Cepheid variables and the abundance of lower-amplitude semi-regular/irregular variables among the newly discovered variables. Variables with large amplitudes and strong periodicity are relatively easily discovered and characterized by wide-field photometric surveys, so the existing completeness of these variable types is very high. The gaps in coverage will be rectified in the next paper in this series. We also show the sky distribution of the known variables identified in this work in Figure 14. Here, we note the abundance of Mira variables, eclipsing binaries and Cepheid variables that have been discovered by previous surveys.
CONCLUSIONS
We systematically searched for variable sources with V < 17 mag in the southern hemisphere (δ < 0 deg), excluding the ∼1.3M sources near the Southern Ecliptic Pole which were analyzed in Paper III. Through our search, we identified ∼220,000 variable sources, of which ∼88,300 are new discoveries. In particular, we have discovered ∼48,000 red pulsating variables, ∼23,000 eclipsing binaries, ∼2,200 δ Scuti variables and ∼10,200 rotational variables. The V-band light curves of all the ∼30.1M sources studied in this work are available online at the ASAS-SN Photometry Database (https://asas-sn.osu.edu/photometry). To highlight possible blended sources, a flag is assigned to each source if the distance to the nearest APASS neighbor is <16.0″. The new variable sources have also been added to the ASAS-SN variable stars database (https://asas-sn.osu.edu/variables). Most of these sources will also fall into the TESS footprint; thus, short-baseline TESS light curves with better photometric precision can be obtained to complement the long-baseline ASAS-SN light curves.
This work greatly improves the completeness of bright variables in the southern hemisphere and provides long-baseline V-band light curves. As part of our ongoing effort to systematically analyze all the ∼50 million V < 17 mag APASS sources for variability, we will next update this database with the light curves for the sources across the northern hemisphere and include the light curves for sources missing from the APASS DR9 catalog. This work has made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium. This publication makes use of data products from the Two Micron All Sky Survey, as well as data products from the Wide-field Infrared Survey Explorer. This research was also made possible through the use of the AAVSO Photometric All-Sky Survey (APASS), funded by the Robert Martin Ayers Sciences Fund. This paper has been typeset from a TeX/LaTeX file prepared by the author.
QCD Short-distance Constraints and Hadronic Approximations
This paper discusses a general class of ladder-resummation-inspired hadronic approximations. It is found that this approach naturally reproduces many successes of single-meson-per-channel saturation models (e.g. VMD) and NJL-based models. In particular, the existence of a constituent quark mass and a gap equation follows naturally. We construct an approximation that satisfies a large set of QCD short-distance and large-$N_c$ constraints and reproduces many hadronic observables. We show how there exists in general a conflict between QCD short-distance constraints for Green functions and those for form factors and cross-sections following from the quark-counting rule. This problem, while expected for Green functions that do not vanish in purely perturbative QCD, also persists for many Green functions that are order parameters.
Introduction
Formulating a consistent hadronic approximation to Quantum Chromodynamics (QCD) is an old and very difficult problem. At low energies the solution is Chiral Perturbation Theory (ChPT), but its domain of validity is fairly limited and there tends to be a rather large number of parameters that need to be dealt with. It cannot simply be extended to the intermediate-energy domain. In this paper we describe an approach based on a few simple assumptions and then try to see how far it can go. This fits naturally in the limit of a large number of colours (N_c). In this limit, and assuming confinement, QCD is known to reduce to a theory of stable hadrons interacting only at tree level [1]. So the only singularities in amplitudes are produced by the various tree-level poles. This has long been a problem for variants of models incorporating some notion of constituent quarks, like the Nambu-Jona-Lasinio (NJL) models [2,3,4] or the chiral quark model [5].
The main idea in this paper is to take the underlying principle of ladder-resummation approaches to hadronic physics and make two successive approximations. First, we treat the rungs of the ladder as a type of general contact interaction; second, the remaining loop integrations, which are always products of one-loop integrations, we treat as general everywhere-analytic functions. The only singularities that occur then are those generated by the resummations, and we naturally end up with a hadronic large-N_c model. This is also very close to the treatment of the (extended) Nambu-Jona-Lasinio models as given in [6,7,8], where n-point Green functions are seen as chains of one-loop bubbles connected by a one-loop with three or more vertices. The one-loop bubbles can be seen as one-loop Green functions as well. The full Green functions there are thus composed of one-loop Green functions glued together by the (ENJL) couplings g_V and g_S. One way to incorporate confinement in these ENJL models is by introducing an infinite number of counterterms to remove all the unwanted singularities [9]. In [9] it was then argued that the ENJL approach is basically identical to a one-resonance saturation approach. They then proposed a minimal hadronic ansatz where one-resonance saturation is the underlying principle and all couplings should be determined from QCD short-distance and chiral constraints, with the relevant short-distance constraints being those that result from order parameters. Order parameters are quantities that would vanish identically if only perturbative QCD without quark masses and condensates were considered. This approach has been further discussed for two-point Green functions in [10] and applied to some three-point functions in [11]; see also the discussions in [12] for earlier similar uses of order parameters. Problems appear for n-point Green functions in that not all freedom in the parameters can necessarily be fixed by the long-distance chiral constraints and/or short-distance constraints, or the chiral constraints involve too many unknown constants.
In this paper we follow a different scheme. We assume that the Green functions are produced by a ladder-resummation-like ansatz: they consist of bubble diagrams put together from one-loop Green functions. We do not use the (constituent) quark-loop expressions for these one-loop Green functions but instead treat them as constants or low-order polynomials in the kinematic variables. This set of assumptions turns out to be rather constraining in the type of model that can be constructed. In particular, the gap equation for spontaneous symmetry breaking follows from the requirements of resummation and the full Ward identities, as shown in Section 2. The link with constituent quark models is that, given the full Ward identities, one can define a constituent quark mass obeying a gap equation, and the one-loop Green functions satisfy the Ward identities with constituent quark masses. In the two-point function sector this naturally reduces to the approach of [9], but it allows one to go beyond two-point functions in a more systematic manner.
In Section 2 we discuss the buildup of the model and the two-point functions. We first work in the chiral limit and then add corrections due to current quark masses. Chiral Perturbation Theory, or low-energy, constraints are naturally satisfied in our approach, which is chiral invariant from the start. Large-N_c constraints are also satisfied naturally. We show how the short-distance constraints can be included. Section 3 treats several three-point functions and includes the short-distance constraints coming from form factors and from the more suppressed combinations of short distances.
Numerical results are presented in Section 4. We find a reasonable agreement for the predictions.
Going beyond one-resonance saturation in this approach is difficult, as explained in Section 5. Another point raised is that hadronic models will in general have problems with QCD short-distance constraints even if the short-distance behaviour is an order parameter; we discuss in detail how the pseudoscalar-scalar-pseudoscalar three-point function is a typical example of this problem in Section 6.
We consider this class of models useful even with its inherent problems. It provides a consistent framework to address the problem of nonleptonic matrix elements, where in general very many Green functions with a large number of insertions are needed. The present approach offers a method to calculate these Green functions analytically and thus study the effects of the various ingredients on the final results. One motivation for this work was to understand many of the rather surprising features found in the calculations using the ENJL model of the B_K parameter, the ∆I = 1/2 rule, gluonic and electroweak penguins, electromagnetic effects and the muon anomalous magnetic moment [13,14], and to improve on those calculations.
The Appendix contains expressions for the short-distance properties of several three-point functions.
General
The Lagrangian for the large-N_c ENJL model is written with i, j flavour indices, α, β colour indices and q_{R(L)} = (1/2)(1 + (−)γ_5)q. The flavour matrices v, a, s, p are external fields and can be used to generate all the Green functions we will discuss. The four-quark interactions can be seen as an approximation for the rungs of a ladder-resummation scheme. The Green functions are generated by functional differentiation w.r.t. the external fields.
An underlying assumption is that these currents can be identified with the QCD ones.
In the remainder of this section we will discuss the two-point functions of these currents; the other possibilities vanish because of parity. The large-N_c limit requires these to be proportional to δ_{il}δ_{jk}, and Lorentz and translational invariance allow them to be written in terms of functions that depend only on q² and the flavour indices i, j.
These functions satisfy Ward identities following from chiral symmetry and the QCD equations of motion. Here we use ⟨q̄q⟩_i = Σ_α ⟨0| q̄_{iα} q_{iα} |0⟩. The type of diagrams that contribute at large N_c to the two-point functions is depicted in Fig. 1(a). The contribution from only the one-loop diagram is depicted in Fig. 1(b); we will generally denote these one-loop functions with an overline, Π̄.
Under interchange of i and j, Π_{MS}(q²)_{ij} is antisymmetric; all others are symmetric. The one-loop equivalents have the same symmetry properties.
In Refs. [7,8] it was shown that the full two-point functions can be obtained from the one-loop ones via a resummation procedure. This resummation is only consistent with the Ward identities, Eq. (5), if the one-loop two-point functions obey the Ward identities of Eq. (5) with the current quark masses m_i replaced by the constituent quark masses M_i given by the gap equation. The assumption of resummation thus leads to a constituent-quark-mass picture and one-loop Ward identities with constituent quark masses. Using the gap equation and the one-loop Ward identities, the resummation formulas can be simplified. Our model assumption is to choose the one-loop functions as basic parameters rather than have them predicted via constituent quark loops. This allows for a theory that has confinement built in in a simple way and at the same time keeps most of the successes of the ENJL model in low-energy hadronic physics.
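Schematically, and suppressing channel-dependent factors and normalization conventions that this sketch does not fix, the bubble resummation in, e.g., the scalar channel is the geometric series
\[
\Pi_S(q^2) \;=\; \overline{\Pi}_S(q^2) + \overline{\Pi}_S(q^2)\, g_S\, \overline{\Pi}_S(q^2) + \cdots \;=\; \frac{\overline{\Pi}_S(q^2)}{1 - g_S\, \overline{\Pi}_S(q^2)}\,,
\]
and the zero of the denominator is what generates the resonance pole in each channel.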
We now choose the two-point functions as far as possible as constants, and thus have as parameters in the two-point sector the quantities of Eq. (10); the remaining one-loop two-point functions can be obtained from the one-loop Ward identities. As discussed below, more input will be needed for the three-point functions. We do not expand the one-loop two-point functions to higher order in momenta. The reason is that, with g_V and g_S constant, expanding the one-loop two-point functions to higher order in momenta creates a gap in the large-q² expansion between the leading and the non-leading terms. Such a gap in powers is not present in perturbative QCD.
Chiral Limit
In the chiral limit, the Ward identity for Π_S(q²)_{ij} becomes singular, and it is better to choose a different set of parameters, with ∆ and Γ defined via the corresponding one-loop two-point functions.
Short-Distance
For X = LR, V, A, the first and third Weinberg sum rules [15] are automatically satisfied, but the second one implies the relation of Eq. (15). Analogs of the Weinberg sum rules exist in the scalar-pseudoscalar sector. With Π_SP = Π_S − Π_P we have the relations of Refs. [12,16]. The first one is the equivalent of the first Weinberg sum rule and is automatically satisfied. The second one implies Eq. (17). The short-distance relation found in Eq. (17) does not satisfy the heat-kernel relation for the one-loop two-point functions derived in [7] in the chiral limit. Note that that heat-kernel relation was the underlying cause of the relation m_S = 2M_q between the scalar mass and the constituent quark mass in ENJL models [7,8].
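For orientation, in narrow-resonance notation the first two Weinberg sum rules take their standard textbook form, quoted here for a single vector and axial-vector multiplet with decay constants F_V and F_A:
\[
F_V^2 - F_A^2 = F_0^2\,, \qquad F_V^2\, m_V^2 - F_A^2\, m_A^2 = 0\,.
\]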
Intermediate-Distance
The two-point functions in the chiral limit can be written in one-resonance-exchange form. From the poles in the two-point functions we can read off the various masses. There is a pole at q² = 0 corresponding to the massless pion; the scalar, vector and axial-vector masses follow from the other pole positions, and the residues at the poles give the corresponding couplings. The short-distance constraints then lead, as expected, to relations among these masses and couplings.
Long-Distance
The low-energy expansion of the two-point functions in the chiral limit can be compared with Chiral Perturbation Theory. This leads to the identification of B_0 and F_0 with the quantities appearing there, and in addition fixes further low-energy constants.
Parameters
Notice that only five of the six input parameters can be determined from the two-point function inputs. A possible choice of input parameters is m_V, m_A, F_0, m_S and F_S; the last can be traded for B_0 or ⟨q̄q⟩_χ. The remaining parameter could in principle be fixed from K_S, but that is an unmeasurable quantity.
Beyond the Chiral Limit
The resummation formulas of Sect. 2.1 remain valid. What changes is that we now have nonzero values for the current quark masses m_i and corresponding changes in the one-loop functions. An underlying expectation is that the vertices g_S and g_V are produced by purely gluonic effects and have no light-quark-mass dependence; indeed, the first-order quark-mass dependence of g_V and g_S must vanish because of short-distance constraints, as shown below. The input parameters are again those of Eq. (10), and we expand them below as functions of m_q.
Intermediate-Distance
The resummation leads to expressions for the two-point functions which can again be written as one resonance exchange.
These satisfy the Ward identities (5). The values of the couplings and masses are given by expressions analogous to those of the chiral limit.
Short-Distance
In order to proceed we have to expand the input parameters of Eq. (10) in the quark masses m q .
The parameters ǫ and ∆ are defined in the first line of Eq. (12). The other one-loop two-point functions are derivable from the one-loop Ward identities. The chiral-limit short-distance constraints, Eqs. (15) and (17), remain valid, but there are new constraints on the coefficients of the quark-mass expansions. The derivatives w.r.t. the quark masses of the two-point functions allow one to construct more order parameters than Π_LR and Π_SP. The combinations with lower powers of q² must vanish. The second and fourth are automatically satisfied as a consequence of the Ward identities; Π_A(q²)_{ij} follows from the Ward identity and the chiral-limit form of Π_{MP,ij}. The first, third, fifth and sixth identities then imply that the only new parameter that appears when including quark masses to first order is ǫ. The last constraint turns out to be incompatible with short-distance constraints from three-point functions, as discussed below.
Long-Distance
The long-distance expansion of our results to O(p⁴) in Chiral Perturbation Theory determines, in addition to the quantities already obtained in the chiral limit, further low-energy constants.
Intermediate-Distance
The short-distance constraints lead to several relations between resonance parameters beyond the chiral limit, to first order in the current quark masses. In the vector sector we obtain relations in which V_ij stands for the vector degree of freedom built of quarks with current masses m_i and m_j; the corresponding axial relations are analogous.
Three-Point Functions
A generic three-point function of currents A, B, C, chosen from the currents in Eq. (2), is defined in the usual way. In the large-N_c limit these can only have two types of flavour flow, and they satisfy corresponding symmetry relations. The flavour and momentum flow of Π^{ABC+}(p_1, p_2)_{ijk} is indicated in Fig. 2(a). In the remainder we will always discuss the Π^+ part only but drop the superscript +. We also use q = p_1 + p_2. A generic contribution to the three-point function is shown in Fig. 2(b); the internal vertices are given by g_V and g_S. In Ref. [8] it was shown for two examples how this resummation can be performed for three-point functions. Many other cases were worked out in the work on nonleptonic matrix elements in Refs. [13,14].
Here we will make the assumption of resummation for the three-point functions just as we did for the two-point functions in Sect. 2. It can again be shown that the Ward identities for the full three-point functions and the resummation together require that the one-loop three-point functions satisfy the one-loop Ward identities with the constituent masses given by the gap equation (8).
We will once more assume that the three-point functions are constants or low-order polynomials in the kinematical variables, in agreement with the structure of Green functions in the large-N_c limit. It turns out that the combination of one-loop Ward identities and short-distance constraints is very powerful in restricting the number of new free parameters appearing in the three-point functions. This could already be seen in Sect. 2.3, since the derivative w.r.t. a quark mass of a two-point function is a three-point function with one of the momenta equal to zero.
A full analysis of three-point functions is in progress. Here we give a few representative examples.
The Pseudoscalar-Scalar-Pseudoscalar Three-Point Function and the Scalar Form Factor
The pseudoscalar-scalar-pseudoscalar three-point function can be calculated from the class of diagrams depicted in Fig. 2(b) using the methods of [8]; we quote it for the case m_i = m_k. The general case also has terms involving one-loop three-point functions with a vector (V) instead of a scalar (S). The one-loop Ward identities can be used to rewrite Π_ASP, Π_PSA and Π_ASA in terms of Π_PSP and one-loop two-point functions.
The one-loop three-point function Π̄_PSP is in turn fully fixed by the one-loop Ward identities. Let us illustrate the derivation: one Ward identity, evaluated at p_1² = p_2² = q² = 0, determines the leading term. The same result follows from the identities for q^µ Π^{ASP}_µ(p_1, p_2)_{ijk} and p_1^µ Π^{PVP}_µ(p_1, p_2)_{ijk}. The next term, linear in q², p_1², p_2², can be derived as well, since the relevant combinations of the three-point functions with one vector or axial-vector current can be determined from Ward identities involving three-point functions with two vector or axial-vector currents.
We quote here only the chiral-limit result. From the q² dependence of the full Green function at low energies we can also derive L_5; the result agrees with Eq. (29), as it should.
We can look at two different types of short-distance constraints. First, using the methods for exclusive processes in perturbative QCD [17], it can be shown that the scalar form factor in the chiral limit should decrease as 1/p_1². Phenomenologically, this short-distance behaviour was also imposed in [18] to calculate the scalar form factor, and it was checked that this behaviour agrees with data. Using the LSZ reduction formulas, the scalar form factor of the pion in the chiral limit follows from the three-point function and can be written in a simple form. The short-distance requirement on F^χ_S(p_1²) then requires L_5 to take its resonance-dominated value. This gives a new relation between the input parameters, after using Eq. (17). This constraint is not compatible with Eq. (28). The three-point function Π_PSP(p_1, p_2)_{ijk} is an order parameter in the sense described above; its short-distance properties can thus be used to constrain the theory. The short-distance behaviour is automatically satisfied by our expression (44), and the entire Π^χ_PSP can be written in a simple fashion. The short-distance relation lim_{λ→∞} F^χ_S(λ p_1²) = 0 has no α_s corrections. We therefore consider the constraint of Eq. (42) to be more reliable than the one from Eq. (28).
The Vector-Pseudoscalar-Pseudoscalar Three-Point Function and the Vector Form Factor
We can repeat the analysis of Sect. 3.1 for the VPP three-point function. The results are very similar and apply to the vector (electromagnetic) form factor. We keep here to the simpler case m_i = m_j. The resummation proceeds as in [8]. We can again use the Ward identities to rewrite the result in terms of two-point functions and Π^{VPP}_µ(p_1, p_2)_{ijk} only. We then expand in p_1², p_2² and (p_1 + p_2)² = q².
The one-loop Ward identities fix the leading term. The next term in the expansion depends on only one constant. This follows from the assumption (made in the previous subsection) that Π_SPP contains no terms more than linear in p_1², p_2², q²; the form given in Eq. (47) includes this assumption already. This extra constant can be determined from the fact that the pion vector form factor should decrease as 1/q² for large q². We extract the chiral-limit vector form factor in analogy with the scalar case, where the subscript one denotes the coefficient of p_{1µ} in the expansion. The short-distance requirement then determines the constant, and the ChPT expression for the pion vector form factor then yields Eq. (52). The full chiral-limit three-point function can be written in a simple fashion.
The Scalar-Vector-Vector Three-Point Function
The scalar-vector-vector three-point function was used to discuss the properties of the scalars in Ref. [12]. The relation between the full and the one-loop functions is given for the case of all masses equal; in this case both the full and the one-loop three-point functions are fully transverse. The one-loop functions, expanded to second order in the momenta, are fully determined from the Ward identities. In the chiral limit these expressions simplify, and the chiral-limit full three-point function takes a very simple form, which also satisfies the QCD short-distance requirement.
The Pseudoscalar-Vector-Axial-vector Three-Point Function
This three-point function has been studied in a related way in Ref. [11]. The full pseudoscalar-vector-axial-vector three-point function can be expressed in terms of the one-loop one- and two-point functions for the case m_i = m_k, where we have used the Ward identities. The one-loop three-point function up to second order in the momenta is fully determined by the one-loop Ward identities.
This expression can be worked out in the chiral limit using the values obtained earlier and compared with the chiral limit ChPT expression for this amplitude (see e.g. Ref. [11]).
This leads to values of L_9 compatible with those obtained in Eq. (52), which in turn is the same as Eq. (22). The three-point function in the chiral limit has a simple expression in terms of the transverse tensors P_{µν} and Q_{µν}, defined by
\[
P_{\mu\nu}(p_1,p_2) = p_{2\mu}\, p_{1\nu} - (p_1 \cdot p_2)\, g_{\mu\nu}\,,
\]
\[
Q_{\mu\nu}(p_1,p_2) = p_1^2\, p_{2\mu} p_{2\nu} + p_2^2\, p_{1\mu} p_{1\nu} - (p_1 \cdot p_2)\, p_{1\mu} p_{2\nu} - p_1^2\, p_2^2\, g_{\mu\nu}\,.
\]
By construction, this function satisfies the chiral Ward identities (see e.g. [11]), which are the same as those for the one-loop function Π̄^{µν}_{PVA} but with the constituent masses replaced by current quark masses. The QCD short-distance relation is also obeyed.
The Pseudoscalar-Axial-vector-Scalar Three-Point Function
Another order parameter is the sum of the pseudoscalar-axial-vector-scalar and scalar-axial-vector-pseudoscalar three-point functions. These functions can be written in terms of the corresponding one-loop functions and the two-point functions following the same method as in the other sections; we consider the simpler case m_j = m_k. In the most general expressions for the one-loop three-point functions there is only one constant at order O(p³) that remains unknown once we apply all the symmetry criteria; the functions in the term of order O(p) are fully determined by the one-loop Ward identities. Using the values of the coupling constants L_5 and L_8 obtained from the two-point functions, the functions Π^{PAS}_µ(p_1, p_2)_{ijk} and Π^{SAP}_µ(p_1, p_2)_{ijk} have the correct behaviour at long distances as described by Chiral Perturbation Theory. In this limit the unknown constant C^{PAS}_{ijk} is not involved. The sum of the two three-point functions in the chiral limit can be written in a fairly simple fashion.
Comparison with experiment
The input we use for ⟨q̄q⟩_χ is the value derived from sum rules in Ref. [19], which is in agreement with most recent sum-rule determinations of this condensate and of the light quark masses (see [20] for instance) and with the lattice light-quark-mass world average in [21]. The value of F_0 is from Ref. [22], and the remaining masses are those from the PDG.
These numbers are in reasonable agreement with the experimental values given in brackets, with the possible exception of L_5, which is rather high. We expect an uncertainty of between 30% and 40% in our hadronic predictions. The values in Eq. (79) do not depend on the value of the quark condensate. We cannot determine ∆ at this level. The three-point functions PSP, VPP, SVV and PVA can be rewritten in terms of the above inputs. There is more freedom in those functions if the underlying Π̄ functions are expanded to higher order; these extra terms can usually be determined from the short-distance constraints, up to the problem discussed in Sect. 6.
Difficulties in Going Beyond the One-Resonance Approximation
An obvious question is whether we can easily go beyond the one-resonance-per-channel approximation used above within the general resummation-based scheme. At first sight one might think this can be done simply by including higher powers in the expansion of the one-loop two-point functions and/or giving g_S and g_V a momentum dependence. Since we want to keep the nice analytic behaviour expected in the large-N_c limit, with only poles, and have simple expressions for the one-loop functions and g_S, g_V, this turns out to be very difficult to accomplish. We have tried many variations, but essentially the same type of problem always showed up, related to the fact that the coefficients of the poles of two-point functions obey positivity constraints. Let us concentrate on the scalar two-point function in the chiral limit to illustrate the general problem. In this limit the full two-point function can be written in terms of the one-loop function via the resummation formula.
If we give g_S a polynomial dependence on q², this two-point function generally becomes far too convergent in the large-q² limit. The other way to introduce more poles is to expand Π̄(q²) beyond what we have done before, to quartic or higher order. For the case of two poles the resummed function can then be rewritten in the form of Eq. (82). From Eq. (82) it is obvious that the residues of the two poles will have opposite signs, thus preventing this simple approach to including more resonances. We have illustrated the problem here for the simplest extensions, but it persists as long as g_S, g_V and the one-loop two-point functions are fairly smooth functions.
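The sign obstruction can be made explicit with a partial-fraction decomposition: if the resummed denominator has two zeros at q² = m_1² and q² = m_2², then, schematically,
\[
\frac{1}{(q^2 - m_1^2)(q^2 - m_2^2)} \;=\; \frac{1}{m_1^2 - m_2^2}\left( \frac{1}{q^2 - m_1^2} - \frac{1}{q^2 - m_2^2} \right),
\]
so a slowly varying numerator necessarily gives the two poles residues of opposite sign, in conflict with positivity.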
A General Problem in Short-Distance Constraints in Higher Green Functions
At this level we have expanded our one-loop two-point functions to at most second nontrivial order in the momenta, and we found that it was relatively easy to satisfy the short-distance constraints involving exact zeros. However, if we check the short-distance relations for the three-point functions that are order parameters, given in Eqs. (58) and (67), we find that they are typically too convergent. In this section we discuss how this cannot be remedied in general without spoiling the parts we have already matched. In fact, we will show that in general this cannot be done using a single, or any finite number of, resonances per channel. An earlier example where a single resonance does not allow all short-distance constraints to be reproduced was found in Ref. [11]. First look at the function Π_PSP and ask whether, by adding terms in the expansion in q², p_1², p_2² to Π_PSP(p_1, p_2)_χ beyond those considered in Eq. (38), we can satisfy the short-distance requirement of Eq. (A.1). It can easily be seen that adding a suitable extra term to the expression of Eq. (38) makes the short-distance constraint of Eq. (A.1) satisfied. The problem, however, is that we then obtain a very bad short-distance behaviour for the pion scalar form factor F^χ_S(p_1²), which diverges as p_1² rather than going to zero. Inspection of the mechanism behind this shows that this is a general problem, going beyond the single three-point function and model discussed here.
More generally, the problem is a conflict between the short-distance requirements on form factors and cross-sections, many of which can be derived qualitatively from the quark-counting rules or more quantitatively using the methods of Ref. [17], and the short-distance properties of general Green functions.
The quark-counting rules typically require a form factor, here F^χ_S(p_1²), to vanish as 1/p_1² for large p_1². The presence of the short-distance part proportional to p_1²/(q² p_2²) in the short-distance expansion of Π_PSP(p_1, p_2)_χ then requires a coupling of the hadron in the P channel to the S current proportional to p_1² (or a coupling via a hadron in the S channel which in turn couples to the S current; this complication does not invalidate the argument below). In the general class of models with hadrons coupling through point-like couplings, the negative powers in Green functions can only be produced by hadron propagators. The positive power present in the short-distance expression must thus be present in the couplings of the hadrons. This in turn implies that this power is present in the form factor of at least some hadrons, which is forbidden by the quark-counting rule.
It is clear that with at most a single resonance in each channel there is no solution to this set of constraints. In fact, as we will show below, there is no solution for any finite number of resonances in any channel. This shows that even for order parameters the approach of saturation by resonances might have to be supplemented by some type of continuum. We illustrate the problem for the PSP three-point function. We write the general expression, labeling resonances in the first P channel by i, in the S channel by j and in the last P channel by k; the couplings f_i are polynomials in their respective arguments. The short-distance constraint now requires f_0(q², p_1², p_2²) = 0 and various cancellations between coefficients of the other functions. The presence of the term p_1²/(q² p_2²) requires at least one nonzero term of order p_1² in one of the f_{5ik}(p_1²). However, the Green function can then be used to extract the scalar (transition) form factor between hadrons i and k, which necessarily increases as p_1², and this is forbidden by the quark-counting rules for this (transition) scalar form factor. The terms with p_2²/(q² p_1²) and q²/(p_1² p_2²) obviously lead to similar problems in other (transition) form factors.
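Schematically, the fully saturated part of the general ansatz has the triple-pole structure (single- and double-pole terms and the polynomial residue functions are suppressed in this sketch):
\[
\Pi_{PSP}(p_1,p_2) \;\sim\; \sum_{i,j,k} \frac{f_{ijk}(q^2, p_1^2, p_2^2)}{\left(p_1^2 - m_{P_i}^2\right)\left(q^2 - m_{S_j}^2\right)\left(p_2^2 - m_{P_k}^2\right)}\,,
\]
so that any positive power of p_1² demanded by the short-distance expansion must sit in the numerator functions, i.e. in hadronic couplings, and hence in form factors.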
We have discussed the problem here for one specific three-point function, but it is clearly a more general problem for three-point functions. For Green functions with more than three insertions, similar conflicts with the quark-counting rules will probably also arise from hadron-hadron scattering amplitudes.
Conclusions
In this paper we have constructed a new approximation to low- and intermediate-energy hadronic quantities. Our approach fits naturally in the large-N_c limit and incorporates chiral symmetry constraints by construction. We have shown that many short-distance constraints can easily be incorporated, but pointed out that our model, and indeed any more general saturation-by-hadrons approach, cannot reconcile all short-distance constraints, due to a general conflict between short-distance constraints on Green functions and those on the form factors and cross-sections that can be obtained from these Green functions via LSZ reduction.
We have also shown how our approach incorporates the gap equation and the concept of a constituent quark mass following directly from the Ward Identities and the resummation assumption.
Long-term results of etiology-based thoracic endovascular aortic repair: a single-center experience
The use of thoracic endovascular aortic repair (TEVAR) for thoracic aortic aneurysm (TAA) and Stanford type B aortic dissection (TBAD) has been increasing; however, differences in long-term outcomes after TEVAR according to etiology remain unexplored. Thus, we investigated the etiology-specific long-term results of TEVAR for TAA and TBAD. A total of 421 TEVAR procedures were performed at our institution from July 2007 to December 2021; 249 TAA cases and 172 TBAD cases were included. Traumatic aortic dissection and aortic injury cases were excluded. The mean observation duration was 5.7 years. The overall 30-day mortality rate was 1.4% (n = 6), with 1.2% (n = 3) in the TAA group and 1.7% (n = 3) in the TBAD group. The overall incidence of postoperative stroke was 0.9% (n = 4), with 1.2% (n = 3) and 0.6% (n = 1) in the TAA and TBAD groups, respectively (p = 0.90). Paraplegia developed in 1.7% (n = 7) of patients, with 2.4% (n = 6) in the TAA group and 0.6% (n = 1) in the TBAD group. Freedom from aortic-related death was not significantly different between the two etiologies; however, thoracic reintervention was more common in the TBAD group (p = 0.003), with endoleak being the most common indication for reintervention. Additionally, retrograde type A aortic dissection occurred in four TBAD cases, while migration occurred in three TAA cases. The perioperative results of TEVAR for TAA and TBAD were satisfactory. The long-term results were unfavorable owing to the occurrence of etiology-specific and common complications. Given the high frequency of reintervention, the long-term complications associated with TEVAR are etiology specific.
Introduction
Thoracic endovascular aortic repair (TEVAR) has become the main strategy for the treatment of various thoracic aortic pathologies of multiple etiologies [1]. During the past decade, endovascular therapy has revolutionized the management of descending thoracic aortic aneurysm (TAA), offering the benefit of preventing aortic rupture without the need for direct surgical exposure. However, the long-term durability of TEVAR devices has recently become a cause for concern. Furthermore, a different strategy is required for the treatment of TAA and type B aortic dissection (TBAD), potentially resulting in varying outcomes. The purpose of this single-center study was to review the long-term results of TEVAR performed at our institution for the different etiologies of TAA and TBAD.
Study design and population
The data were retrospectively obtained from a storage/software facility at Saitama Medical University International Medical Center. We reviewed the data of consecutive patients who underwent TEVAR for TAA and TBAD at our center between July 2007 and December 2021. We excluded patients who underwent TEVAR for blunt thoracic aortic injury. All these patients had a high surgical risk due to comorbidities, such as chronic obstructive pulmonary disease, coronary artery disease, and renal insufficiency. Moreover, patients were selected only when they were both technically and anatomically suitable for TEVAR. Eligible patients were divided into two groups according to their etiology: group A (TAA) and group B (TBAD). Early-, mid-, and long-term results were analyzed and compared between the groups.
A total of 421 patients were treated by TEVAR, 249 for TAA (group A) and 172 for TBAD (group B). The mean follow-up period was 68.8 ± 39.5 (range, 0.0-157.8) months. In group B, the average interval between onset and operation was 579.4 days; 20 patients were treated in the acute phase, 30 in the subacute phase, and 122 in the chronic phase. The study protocol was approved by the Institutional Review Board of Saitama Medical University International Medical Center, where the work was conducted (ID: 2022-116; December 7, 2022).
Procedures and treatment
All procedures were performed under general anesthesia. The femoral artery or external iliac artery was incised. The stent graft was deployed under rapid pacing (heart rate > 150 beats/min). In cases of arch lesions, an extra-anatomical bypass of the neck vessels was performed to secure the landing zone. In group A, TEVAR was performed to interrupt blood flow into the aneurysm. In group B, TEVAR was performed to exclude the primary entry. After accessing the true lumen, a stiff guidewire was positioned in the ascending aorta, and an endograft was deployed. A pigtail catheter was introduced into the ascending aorta through the contralateral femoral or brachial artery, and digital subtraction angiography was performed. Touch-up ballooning of the endograft was avoided. The device oversizing was kept at < 10% relative to the native aorta.
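As an aside on the sizing rule just described, the < 10% oversizing constraint can be written as a one-line check; the following sketch is purely illustrative (the list of available graft diameters is hypothetical, not an actual device catalogue):

# Hypothetical sketch of the < 10% oversizing rule described above; the
# available graft diameters are illustrative, not an actual device catalogue.
AVAILABLE_GRAFT_MM = [22, 24, 26, 28, 30, 32, 34, 36, 38, 40]

def select_graft(native_aorta_mm: float, max_oversize: float = 0.10) -> int:
    """Smallest available graft exceeding the native diameter by < 10%."""
    upper = native_aorta_mm * (1.0 + max_oversize)
    candidates = [d for d in AVAILABLE_GRAFT_MM if native_aorta_mm < d < upper]
    if not candidates:
        raise ValueError("no graft satisfies the oversizing constraint")
    return min(candidates)

print(select_graft(30.0))  # -> 32 (6.7% oversizing)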
Endpoints
The primary endpoints were 30-day mortality after TEVAR and early and late aortic-related deaths. Late aortic-related deaths were defined as those that occurred more than 30 days after the initial procedure. The cause of aortic-related death was obtained from medical records, telephone investigations, or autopsy reports. The secondary endpoint was aortic reintervention, which was defined as the need to perform an additional procedure to resolve a complication resulting from the initial TEVAR.
Technical success was defined based on the Society for Vascular Surgery reporting standards, considering perioperative events within 24 h postoperatively [2].
Study variables and definitions
TBAD was defined according to the Stanford classification, that is, dissection with the entry site distal to the left subclavian artery. The diagnosis was based on clinical history and non-invasive diagnostic computed tomography (CT) angiography.
Adverse events of early outcomes were defined as hospital death, stroke, spinal cord injury, and respiratory failure requiring tracheostomy. Hospital death was defined as death between hospital admission and discharge or within 30 days postoperatively. Stroke was diagnosed using CT or magnetic resonance imaging in case of a new occurrence of postoperative neurological symptoms. Paraplegia was defined as a permanent bilateral motor deficit in the lower extremities.
Adverse events of mid- and long-term outcomes included aortic-related death, retrograde type A aortic dissection (RTAD), aorto-esophageal fistula (AEF), and aorto-bronchial fistula (ABF). Thoracic aortic reintervention was defined as additional open or endovascular repair of the descending thoracic and thoracoabdominal aorta due to aortic disease progression.
Statistical analysis
Categorical variables are expressed as numbers (percentage of total), while continuous variables are presented as mean ± standard deviation. The χ2 test was used to compare categorical variables. Continuous variables were compared using Student's t-test. Cumulative rates were determined using the Kaplan-Meier method. Logistic regression analysis was used to examine the significance of clinical variables, aortic diameters calculated from CT scans, and operative variables. Differences in outcomes were considered statistically significant at p < 0.05. All statistical analyses were performed using JMP, version 14.0 (SAS Institute Inc., Cary, NC, USA).
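For readers who want to reproduce this kind of analysis outside JMP, the sketch below shows equivalent Kaplan-Meier estimates and a log-rank comparison (one common choice for comparing survival curves) in Python with the lifelines package; the file and column names (years, event, group) are assumptions for illustration:

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("tevar_followup.csv")  # columns: years, event, group (A/B)
a, b = df[df.group == "A"], df[df.group == "B"]

km = KaplanMeierFitter()
for name, g in (("TAA", a), ("TBAD", b)):
    km.fit(g.years, event_observed=g.event, label=name)
    # Survival estimates at the time points reported in the paper
    print(name, km.survival_function_at_times([3, 5, 7, 10]).values)

res = logrank_test(a.years, b.years,
                   event_observed_A=a.event, event_observed_B=b.event)
print("log-rank p =", res.p_value)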
Operative details are presented in Table 2.
Perioperative and early complications
Perioperative outcomes are summarized in Table 3. One patient from each group (A and B) died intraoperatively: in group A, due to cardiac tamponade caused by RTAD, and in group B, due to intraoperative aortic rupture in a case of acute, complicated TBAD. The overall 30-day mortality was 1.4% (n = 6): 1.2% (n = 3) in group A and 1.7% (n = 3) in group B. The overall incidence of postoperative stroke was 0.9% (n = 4): 1.2% (n = 3) in group A and 0.6% (n = 1) in group B, with no significant difference between the groups (p = 0.90). Additionally, 1.7% (n = 7) of patients developed spinal cord injury: 2.4% (n = 6) in group A and 0.6% (n = 1) in group B, with no significant difference (p = 0.15). One patient in group A had an embolism of the superior mesenteric artery within a shaggy aorta and died of intestinal necrosis 2.6 months postoperatively. Respiratory failure requiring tracheostomy was not observed in any patient.
Mid-and long-term outcomes
The overall estimated postoperative survival rates at 3, 5, 7, and 10 years did not differ significantly between the two groups (p = 0.15; Fig. 1). Freedom from aortic-related death at 5, 7, and 10 years also showed no significant difference (p = 0.60; Fig. 2). The causes of aortic-related death were rupture (n = 10), infection (n = 3), AEF (n = 4), ABF (n = 4), and RTAD (n = 1). The cause of death was not confirmed in 20 patients. Non-aorta-related deaths due to malignancy were significantly more prevalent in group A than in group B (p = 0.01). The details of all causes of death for each group are presented in Table 4.
Figure 3 shows the estimated Kaplan-Meier curves of freedom from thoracic reintervention for the two groups. The overall postoperative freedom from thoracic reintervention at 5, 7, and 10 years was 88.5 ± 2.9%, 84.4 ± 4.8%, and 76.3 ± 8.8%, respectively, in group A, and 79.0 ± 3.8%, 66.6 ± 5.9%, and 64.3 ± 6.3%, respectively, in group B (p = 0.003). The indications for reintervention are shown in Table 5; in both groups, endoleak was the most common reason (18 and 13 cases in groups A and B, respectively), and sac enlargement was significantly more prevalent in group B than in group A (6 and 14 cases in groups A and B, respectively; p = 0.007). Four patients in group B also had RTAD. Otherwise, migration was noted in three patients in group A, two of whom
Discussion
In this study, the short-term results of TEVAR for TAA and TBAD were satisfactory, whereas the long-term results in both groups were unfavorable due to the occurrence of etiology-specific and common complications. We found no significant differences in all-cause mortality and aortic-related death between the two pathologies.
The current standard of care for descending thoracic aortic lesions (aneurysm, blunt traumatic aortic injury, and type B dissection) is TEVAR, which is preferentially recommended over surgery if the pathology meets specific anatomical requirements [3-8]. The main objective of TEVAR in TAA is to treat the aneurysm and prevent rupture by depressurizing the sac, while in TBAD it is to decrease false lumen blood flow and increase true lumen blood flow by closing the entry site, and to prevent false lumen expansion and rupture by promoting false lumen thrombosis. The long-term benefits of TEVAR for TAAs remain unclear. In our study, the rate of long-term freedom from aortic-related death after TEVAR was relatively high (87.7%); however, some patients developed endoleaks (7.6%) after TEVAR, which led to aortic-related death due to rupture (2.0%) and necessitated reintervention, including late open conversion. In TEVAR for TAAs, when the proximal landing zone is somewhat limited and the aortic aneurysm itself does not remodel, stent grafts tend to migrate distally. Ranney reported a similarly excellent long-term (12-year) aorta-specific survival rate after TEVAR (96.2%) and noted that reintervention due to endoleaks occurred in 7% of cases [9].
The gold standard treatment for acute and chronic TBAD remains optimal medical therapy, which aims to limit the progression of dissection by reducing aortic wall stress. However, whether conservative therapy is effective for TBAD is unclear. In particular, chronic aortic dissection carries a high risk of late aneurysmal dilation, mainly due to false lumen enlargement and rupture; in these cases, operative treatment is indicated. For chronic TBAD, the preference for endovascular surgery remains controversial, and endovascular repair does not always appear to deliver the expected results. In this study, two patients who underwent TEVAR for TBAD died due to RTAD, and four patients were operated on emergently. The choice of TEVAR for TBAD remains controversial because, although stent grafting eliminates antegrade false lumen flow by covering the primary entry, retrograde false lumen flow may persist through more distal re-entry tears. Even with frequent reintervention to address specific complications such as RTAD and stent graft-induced new entry (SINE), which are caused by intimal damage and residual re-entry, rupture due to false lumen enlargement may not be completely prevented. Guangqi et al. reported their experience with 121 consecutive patients who underwent endovascular repair for acute and chronic TBAD. They found that postoperative endoleaks occurred in 22% of cases, with a 30-day mortality rate of 8.2% [10]. TEVAR is associated with a high 30-day mortality rate and severe complications, including RTAD (2.5-8%) [11,12], stroke (4.6%), and paraplegia (1.9-4.4%) [13,14]. Furthermore, a small number of fatal complications, such as AEF and ABF, occurred during the mid- and long-term observation periods. In our study, the causes of aorta-related death in group B included rupture in five cases, infection in two cases, AEF in three cases, ABF in one case, and RTAD in one case. Performing TEVAR in cases with a considerably enlarged false lumen may carry a high risk of fistula formation. Nozdrzykowski et al. indicated that patients with chronic TBAD with extensive aneurysms, malperfusion, or acute rupture are surgically challenging, and that the use of TEVAR might be limited due to the aneurysm size and location, occlusion by dissection of the false lumen, or thrombus formation within the chronic aneurysm [15]. TEVAR for TBAD has been used for treatment down to the celiac artery, but not for the abdominal aorta, including the abdominal visceral branches that may arise partially or totally from the false lumen. Gao et al. showed that the maximum abdominal aortic diameter and the number of branches arising from the false lumen were independent risk factors for incomplete false lumen thrombosis in the subacute phase [16].
Several studies, including ours, have examined the long-term results for each etiology. Our findings revealed no difference in all-cause mortality and aortic-related death between the two pathologies, although TBAD required more secondary treatments than TAA in the long term. With additional reintervention at the appropriate time and for the appropriate indications, patients with TBAD may also safely achieve good long-term survival.
However, patients' characteristics differed significantly between the two groups. In particular, the deaths of patients in group A mainly resulted from non-aorta-related causes, such as cardiac failure, pneumonia, and malignancy. Some reports have indicated that long-term outcomes depend on the aortic pathology and patients' comorbidities, and that TAA is the most complicated pathology and causes the highest mortality [17-19]. Previous series of patients undergoing treatment indicated that most deaths after TEVAR for TAAs were due to cardiac failure, pneumonia, and cancer [20,21]. On the other hand, the causes of death in patients in group B remained largely unknown, especially in terms of aorta-related deaths. A similar study reported that some important elements, such as the influence of medical risk factor control on mid-term results and accurate recording of the cause of death, could not be studied, which may lead to an underestimation of deaths [22]. We consider that the presence of different factors in the two groups may have influenced the long-term results.
Limitations
Our study had the following limitations. First, it was a retrospective study with possibly unavoidable patient selection bias, which may lead to partially improved results compared to real-life anatomical and clinical scenarios. Second, it included a small sample size. Third, the rate of major adverse outcomes was probably underestimated in both pathologies, which might have biased the analysis results. Fourth, we did not classify the onset into acute, subacute, or chronic phases. The outcome of TBAD may vary depending on the onset and phase, as it is influenced by the achievement of aortic remodeling.
Conclusions
The early outcomes of TEVAR for both TAA and TBAD were satisfactory. However, the mid- and long-term results were unfavorable owing to the occurrence of etiology-specific and other common complications. The purpose of TEVAR is not the same in TAA and TBAD; therefore, the main adverse events differ. The occurrence rates of migration and paraplegia were higher in TAA. On the other hand, TBAD remains a challenging etiology with respect to its high reintervention rate. Long-term complications associated with TEVAR are etiology specific.
Fig. 1 Freedom from all causes of death curves
Fig. 2 Freedom from aortic-related death curves
Table 1 Patient characteristics. Data are presented as mean ± standard deviation or n (%)
Table 2 Operative status and procedure
Table 4 All causes of death. Data are presented as n (%)
Table 5 Causes of thoracic reintervention
Clinical Impact of the Fracture Risk Assessment Tool on the Treatment Decision for Osteoporosis in Patients with Knee Osteoarthritis: A Multicenter Comparative Study of the Fracture Risk Assessment Tool and World Health Organization Criteria
Background: To compare the frequency of high risk of osteoporotic fracture in patients with knee osteoarthritis (OA) using the fracture risk assessment tool (FRAX) and the bone mineral density (BMD). Methods: We retrospectively assessed 282 Korean patients with knee OA who visited five medical centers and 1165 healthy controls (HCs) aged ≥50 years without knee OA. After matching for age, sex, and body mass index, 478 subjects (239 patients with knee OA and 239 HCs) were included. Results: Based on the BMD, the frequency of osteoporosis was 40.2% in patients with knee OA and 36.4% in HCs. The predicted mean FRAX major osteoporotic fracture probabilities calculated with or without femur neck BMD differed significantly between the knee OA and HC groups (6.9 ± 3.8% versus 6.1 ± 2.8%, p = 0.000, and 8 ± 3.6% versus 6.8 ± 2.3%, p < 0.001, respectively). The mean FRAX hip fracture probabilities calculated with or without femur neck BMD also differed significantly between the knee OA and HC groups (2.1 ± 2.4% versus 1.7 ± 1.8%, p = 0.006, and 3 ± 2.3% versus 2.4 ± 1.6%, p < 0.001, respectively). Conclusion: Our study suggests that FRAX may have a clinical impact on treatment decisions to reduce osteoporotic fracture in patients with knee OA.
Introduction
Osteoporosis and osteoarthritis (OA) are common bone disorders related to aging and are associated with significant morbidity and disability. The relationship between osteoporosis and OA is complex and controversial. Several previous studies have indicated an inverse relationship between osteoporosis and OA [1,2], the results of which suggested a protective effect of OA on osteoporosis. However, other studies suggest that increased bone mineral density (BMD) in OA does not reduce the risk of fracture [3,4]. Data from the Rotterdam study indicated that, although patients with knee OA had a higher BMD, their incident fracture risk was higher than that in those without knee OA [5]. The absence of a protective effect of increased BMD on fracture risk in these studies was explained by an increased tendency to fall in patients with OA. A recent study reported that inflammation contributes to the development of both OA and osteoporosis, providing insight into the concomitant presence of the two conditions [6]. Subsequent studies have reported that 20-29% of patients with advanced OA have occult osteoporosis [7-9]. Therefore, fracture risk assessment in patients with OA should not be overlooked.
The World Health Organization (WHO) criteria, using BMD measured by dual-energy X-ray absorptiometry (DXA), are the most widely used for the diagnosis of osteoporosis [10]. Although low BMD is a major determinant of osteoporotic fracture, over 50% of fractures occur in individuals whose BMD is not in the osteoporotic range [11,12]. The fracture risk assessment tool (FRAX) [13] is the most commonly used model for fracture risk assessment. The clinical risk factors assessed by FRAX have been validated in evidence-based assessments and are easily obtainable [14].
In clinical settings, physicians mainly base osteoporosis treatment decisions on the WHO criteria. However, whether these current treatments are appropriate for patients with OA has not been clearly studied. Inappropriate approaches to fracture risk assessment may lead to under-treatment of osteoporosis. Thus, our aim was to evaluate the frequency of patients at high risk of osteoporotic fracture among those with knee OA, comparing the FRAX and WHO criteria. We also examined whether patients with knee OA differ from age-, sex-, and body mass index (BMI)-matched community-based healthy controls (HCs) without knee OA in terms of the frequency of high risk of osteoporotic fracture.
Study Design and Subjects
Our study was designed as a 1:1 matched comparison between patients with knee OA and HCs. We defined knee OA using the American College of Rheumatology radiologic and clinical criteria for knee OA [15]. The status of radiologic knee OA was assessed using the Kellgren-Lawrence grade [16]. We retrospectively assessed 282 Korean patients with knee OA who visited five medical centers between November 2012 and November 2017. Subjects were excluded if they had confounding disorders such as rheumatoid arthritis, avascular necrosis, osteomyelitis, premature menopause, metabolic bone disease, malignancy within 5 years, high-impact trauma, or use of medications such as glucocorticoids and calcitonin within the last 3 months. After applying the exclusion criteria, 245 patients with knee OA were selected. For the control group, we retrospectively assessed subjects identified from the databases of two of the five medical centers that recruited patients with knee OA. Candidates were randomly selected using hospital registration numbers. A total of 1165 subjects aged ≥50 years were enrolled from the databases of health check-up centers. We excluded subjects who had self-reported OA or knee pain. We also excluded subjects who had radiologically defined knee OA higher than Kellgren-Lawrence grade 2 among those who had undergone knee X-ray. Overall, 991 subjects met the inclusion criteria before matching. Next, to control for major confounders of BMD, FRAX calculations, and the frequency of high-risk osteoporotic fracture, the knee OA and HC groups were matched for age, sex, and BMI. After matching, 478 subjects (239 knee OA patients and 239 HCs) were included in this study (Figure 1). The study was approved by the Institutional Review Board (IRB) of each hospital (AJIRB-MED-MDB-15-285, 3-32100191-AB-N-01, C2015163 (1621), DSMC2015-12-017-007, and 2015-09-026). Informed consent requirements were waived by the IRBs.
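The greedy 1:1 matching used here can be sketched as follows (a minimal illustration, not the statistician's actual procedure; exact matching on sex and an unweighted age/BMI distance are assumptions):

import pandas as pd

def greedy_match(cases: pd.DataFrame, controls: pd.DataFrame):
    """Greedy 1:1 nearest-neighbour matching on age and BMI, exact on sex."""
    pool = controls.copy()
    pairs = []
    for idx, c in cases.iterrows():
        same_sex = pool[pool["sex"] == c["sex"]]
        if same_sex.empty:
            continue  # no eligible control left for this case
        # Unweighted absolute distance on age and BMI (illustrative choice)
        dist = (same_sex["age"] - c["age"]).abs() + (same_sex["bmi"] - c["bmi"]).abs()
        best = dist.idxmin()
        pairs.append((idx, best))
        pool = pool.drop(best)  # each control can be used only once
    return pairs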
BMD Evaluation
All subjects were evaluated for BMD using DXA (GE Lunar, Madison, WI, USA). The BMDs of the lumbar vertebrae (L1-4) and proximal femur were measured. On the basis of the WHO criteria, subjects were classified as having normal BMD, osteopenia, or osteoporosis according to BMD T-scores (standard deviations from a reference population) of ≥ −1, −1 > T-score > −2.5, and ≤ −2.5, respectively, for postmenopausal women or men aged ≥50 years. Patients with osteoporosis would be candidates for pharmacological intervention.
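Encoded directly, the WHO T-score categories above amount to a simple threshold rule (illustrative sketch):

# Direct encoding of the WHO T-score categories described in the text.
def who_category(t_score: float) -> str:
    if t_score >= -1.0:
        return "normal"
    if t_score > -2.5:
        return "osteopenia"
    return "osteoporosis"  # T-score <= -2.5: candidate for treatment

assert who_category(-1.0) == "normal"
assert who_category(-1.7) == "osteopenia"
assert who_category(-2.5) == "osteoporosis"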
Osteoporotic Fracture Assessment Using the FRAX Calculation
FRAX uses data on clinical risk factors to estimate the 10-year probability of major osteoporotic and hip fractures. The FRAX values were calculated based on the Korean model (http://www.shef.ac.uk/FRAX/tool.aspx?country=25). FRAX with BMD was calculated using the femur neck BMD T-scores, and FRAX without BMD was calculated without them. According to the FRAX criteria, high risk of osteoporotic fracture was defined as a 10-year probability of ≥ 20% for major osteoporotic fractures or ≥ 3% for hip fractures. Patients at high risk of osteoporotic fracture would be candidates for pharmacological intervention.
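The FRAX intervention thresholds above likewise reduce to a small predicate (illustrative sketch; inputs are the FRAX 10-year probabilities in percent):

# The FRAX high-risk thresholds quoted in the text, as a predicate.
def frax_high_risk(major_pct: float, hip_pct: float) -> bool:
    return major_pct >= 20.0 or hip_pct >= 3.0

assert frax_high_risk(6.9, 3.2)        # hip risk alone can qualify
assert not frax_high_risk(8.0, 2.4)    # below both thresholds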
Statistical Analysis
Continuous variables were expressed as mean ± SD. The t-test or Mann-Whitney test was used for the comparison of continuous variables in the prematched data of the knee OA and HC groups. HC data were matched with the knee OA data (ratio of 1:1) by a statistician using greedy matching algorithms. The paired t-test or Wilcoxon signed-rank test was used to compare continuous variables in the matched data of the knee OA and HC groups. McNemar's test (for two categories) or the test of marginal homogeneity (for three or more categories) was used to compare categorical variables in the matched data. All statistical analyses were performed using SPSS version 21.0 (IBM Corp, Armonk, NY, USA). A p value of < 0.05 was considered statistically significant.
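A hedged sketch of these paired comparisons in Python (rather than SPSS) is shown below; the arrays and the 2 × 2 table hold placeholder values only:

import numpy as np
from scipy.stats import ttest_rel, wilcoxon
from statsmodels.stats.contingency_tables import mcnemar

frax_oa = np.array([6.9, 8.1, 5.4, 7.2])  # FRAX %, knee OA (placeholders)
frax_hc = np.array([6.1, 7.0, 5.0, 6.5])  # FRAX %, matched HCs (placeholders)

print(ttest_rel(frax_oa, frax_hc))  # paired t-test
print(wilcoxon(frax_oa, frax_hc))   # Wilcoxon signed-rank alternative

# McNemar's test on paired high-risk classifications (OA vs matched HC):
# rows = OA high-risk yes/no, columns = HC high-risk yes/no (made-up counts).
table = [[30, 69], [16, 124]]
print(mcnemar(table, exact=False))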
Results
The study population included 231 women (96.7%) and eight men (3.3%) in each group. All the female patients with knee OA and HCs were postmenopausal women. In the comparison of the BMD T-scores between the age-, sex-, and BMI-matched patients with knee OA and HCs, the patients with knee OA had lower proximal femur neck BMD T-scores than the HCs (p = 0.036) (Table 1). However, the distributions of the BMD categories in patients with knee OA were similar to those in HCs (Table 2). Among the clinical risk factors assessed by FRAX, which were comparable between the two groups, previous fracture was the only difference identified between the knee OA group and HCs (15.5% versus 0%, p < 0.001) (Table 3). The mean FRAX major osteoporotic fracture probabilities calculated with and without femur neck BMD were 6.9 ± 3.8% and 8 ± 3.6%, respectively, for the knee OA group and 6.1 ± 2.8% and 6.8 ± 2.3%, respectively, for HCs; the FRAX calculations, regardless of the femur neck BMD T-scores, led to higher probabilities of major osteoporotic fracture in knee OA than in HCs (p = 0.000 and p < 0.001, respectively) (Figure 2A). The mean FRAX hip fracture probabilities calculated with and without femur neck BMD were 2.1 ± 2.4% and 3 ± 2.3%, respectively, for the knee OA group and 1.7 ± 1.8% and 2.4 ± 1.6%, respectively, for HCs; the FRAX calculations, regardless of the femur neck BMD T-scores, led to higher probabilities of hip fracture in knee OA than in HCs (p = 0.006 and p < 0.001, respectively) (Figure 2B). Figure 3 shows the percentage of candidates eligible for pharmacological intervention in the two groups. Ninety-six of the 239 subjects (40.2%) in the knee OA group were candidates for pharmacological intervention based on BMD values, as were 87 of the 239 subjects (36.4%) in the HC group. The candidate frequency for pharmacological intervention was similar between the two groups. FRAX calculations with the femur neck BMD indicated that 22.7% and 18.9% of subjects would receive recommendations for osteoporosis treatment in the knee OA and HC groups, respectively; this difference was also not significant. However, FRAX calculations without femur neck BMD indicated that 41.6% and 35.7% of subjects would receive recommendations for osteoporosis treatment in the knee OA and HC groups, respectively, a statistically significant difference (p = 0.008) (Figure 3). When comparing FRAX calculations between the patients with and without previous fractures among those with knee OA, the FRAX calculations, regardless of the femur neck BMD T-scores, were higher in the patients with previous fractures (p < 0.0001) (Table 4). In addition, we compared the distributions of the BMD categories and FRAX probabilities in the knee OA and HC groups after excluding patients with previous fractures and male patients. The distributions of the BMD categories in the patients with knee OA were similar to those in the HCs (Table 5). The FRAX calculations with the femur neck BMD were also similar between the two groups. However, the FRAX calculations without femur BMD were higher in the patients with knee OA than in the HCs (p < 0.0001) (Table 6). In addition, when FRAX calculations without femur neck BMD were applied, an additional 10 patients with knee OA were recommended for osteoporosis treatment compared to HCs (p = 0.025) (Table 6). We also investigated the distributions of the BMD categories according to the FRAX risk classification in the patients with knee OA.
In the patients classified as at high risk of osteoporotic fracture using the FRAX calculations with femur neck BMD, the prevalence of osteoporosis was 54.0% (Table 7). In the patients classified as at high risk using the FRAX calculations without femur neck BMD, the prevalence of osteoporosis was 88.9% (Table 8). These results indicate that the frequency of osteoporosis was higher in the patients classified as at high risk of osteoporotic fracture than in those not classified as such (Tables 7 and 8).
Discussion
In our study, when using the WHO criteria based on BMD, the frequency of osteoporosis was 40.2% in patients with knee OA and 36.4% in HCs, which was not a statistically significant difference (Figure 3). This was similar to the prevalence of osteoporosis in the general female population in Korea [17]. Previous studies have confirmed that 20-29% of patients with advanced OA awaiting joint arthroplasty have occult osteoporosis [7-9]. A study in Korea also reported that the prevalence of osteoporosis in advanced knee OA patients was 31% [18]. However, the subjects of our study were not limited to advanced OA patients scheduled for arthroplasty, as in the aforementioned study. We also found a high frequency of osteoporosis in patients with knee OA that was comparable to that in HCs, which differs from previous studies indicating an inverse relationship between osteoporosis and OA [1,2]. A cohort study reported no significant difference in BMD between patients surgically treated for hip or knee OA and a control group over five years; however, at the five-year follow-up, OA was accompanied by changes in bone shape and a faster reduction of BMD [19]. This study supports our observation that the knee OA group did not have a significantly higher BMD than HCs. We confirmed that OA did not exert a protective effect on osteoporosis. In contrast to the WHO criteria, FRAX calculations, regardless of the femur neck BMD T-scores, led to higher probabilities of major osteoporotic fracture and hip fracture in the knee OA group than in HCs (Figure 2). In addition, when FRAX calculations without the femur neck BMD were applied, 5.9% more patients in the knee OA group would be recommended for osteoporosis treatment compared to HCs; 41.6% and 35.7% of subjects would receive recommendations for osteoporosis treatment in the knee OA and HC groups, respectively (Figure 3). While the difference in the recommended proportion of patients is 5.9%, the number needed to treat for any clinical difference may be very large in clinical settings. We also showed that classifying patients with knee OA as at high risk of osteoporotic fracture on the basis of FRAX captured the patients with low BMD (Tables 7 and 8). The FRAX calculations without femur neck BMD showed a higher detection rate of osteoporosis in the patients classified as at high risk of osteoporotic fracture than the FRAX calculations with femur neck BMD. In our study, FRAX without BMD was more sensitive than FRAX with BMD in identifying patients with knee OA who were at high risk of osteoporotic fracture. These results suggest that BMD may be a confounding factor that underestimates the frequency of high-risk osteoporotic fractures in patients with knee OA. In addition, a previous study demonstrated that the use of FRAX without BMD was comparable with that of FRAX with BMD [20]. Thus, according to the FRAX management algorithm [21], patients with knee OA classified as high risk using FRAX without BMD may be offered treatment without BMD testing. In our study, of the 74 patients with knee OA classified as high risk using FRAX without BMD (Table 6), only 37 were classified as having osteoporosis. Thus, 50% of patients with knee OA classified as high risk using FRAX without BMD did not have osteoporotic BMD. This result suggests that FRAX without BMD may be applied in addition to BMD testing in patients with knee OA. The WHO criterion, based on a BMD T-score of ≤ −2.5, has been widely used as both a diagnostic and an intervention threshold.
Nevertheless, the BMD T-score showed high specificity but low sensitivity, indicating a discrepancy between BMD values and fragility fractures [10]. In previous studies, a number of fragility fractures occurred in individuals with BMD values above the osteoporosis threshold [11,22]. A prospective study indicated that a higher incidence rate of fractures occurred in women with OA than in those without OA across all BMD groups [23]. This study also demonstrated that OA was a significant risk factor for any fracture in women with osteopenia or normal BMD, but not in osteoporotic women. This observation provides insight into non-osteoporotic fractures in OA patients and is consistent with the results of our study. In our study, 37 patients in the knee OA group had previous fractures, whereas none in the HC group did (Table 3). All 37 patients were female. Of the 37 patients with previous fractures, 30 (81.1%) are currently receiving anti-osteoporosis treatment. We investigated the frequency of high risk of osteoporotic fracture in the 37 patients with previous fractures, comparing the WHO and FRAX criteria. On the basis of the WHO criteria, 22 patients (59.5%) were classified as having osteoporosis; 11 (29.7%), as having osteopenia; and four (10.8%), as having normal BMD. Fifteen patients (40.5%) with a T-score of > −2.5 had previous fractures. The probability of developing a fracture using the FRAX calculation was higher in the patients with previous fractures than in those without (Table 4). The FRAX calculations with femur neck BMD indicated that 17 (46.0%) of the 37 subjects would receive recommendations for osteoporosis treatment. The FRAX calculations without femur neck BMD indicated that 25 (67.6%) of the 37 subjects would receive recommendations for osteoporosis treatment. When the FRAX calculations without femur neck BMD were applied, three more patients (8.1%) among the 37 patients with previous fractures would be recommended for osteoporosis treatment than when the WHO criteria were used. Because the goal of osteoporosis treatment is not to increase BMD but to prevent fracture, which ultimately determines disability and mortality, it may be necessary to compare each criterion used for osteoporosis treatment decisions. A previous study compared FRAX without BMD and BMD alone for predicting osteoporosis treatment [24]. In that study, with advancing years, the difference in fracture probability between women with a T-score ≤ −2.5 and women of the same age without any risk factors decreased. The study showed that BMD criteria for intervention using a fixed T-score did not optimally target women at higher risk of fracture than age-matched individuals without any clinical risk factors, particularly among the elderly. Conversely, the probability of a major osteoporotic fracture in women with a previous fragility fracture increased with age, from 2.3% at the age of 40 years to 23% at the age of 90 years. Fracture probabilities based on the FRAX calculation were thus consistently higher than those in women with no clinical risk factors. In our study, the fracture probabilities derived from the FRAX calculation were higher in the knee OA group than in HCs; these results may be due to the higher frequency of previous fractures in the knee OA group. Therefore, considering that OA occurs mainly in elderly patients and that the fracture risk is higher when previous fractures are taken into account, FRAX calculations may be applied to patients with OA to assess fracture probabilities.
The high risk of osteoporotic fracture in OA patients may be associated with an increased fall tendency, possibly due to postural instability, quadriceps weakness, joint pain, and stiffness [3,5,25]. There are also studies reporting that bone loss due to OA increases the risk of osteoporotic fractures. A prospective study showed that patients with radiographic hip and knee OA developed greater total hip bone loss over 2.6 years [26]. Another study indicated that radiographic hip OA was associated with an annual bone loss of 2% in men and 1.4% in women, despite 3-8% higher BMD values compared to controls [27]. Another cause of increased fracture risk in OA patients could be inflammation. Inflammatory rheumatic diseases such as rheumatoid arthritis, systemic lupus erythematosus, and ankylosing spondylitis have been associated with elevated bone loss and increased fracture rates [28]. The contribution of inflammation to the development of osteoporosis is not limited to chronic inflammatory diseases, as the low-grade inflammation of OA also contributes to the development of osteoporosis. Recent studies in the field of osteoimmunology have demonstrated that increased bone loss occurs not only in osteoporosis but also in the early stages of OA [29]. In our study, among the patients with knee OA, 144 (60.3%) complained of knee pain and 140 (58.6%) took analgesics, including nonsteroidal anti-inflammatory drugs. The distributions of the BMD categories in the patients with knee OA were similar to those in the HCs (Table 5). This suggests that the high frequency of high risk of osteoporotic fracture in patients with knee OA is due to an increased fall tendency rather than to increased bone loss.
There are currently no data comparing whether there is a difference in the assessment of osteoporotic fracture risk by the WHO and FRAX criteria in patients with knee OA, or whether such differences should affect osteoporosis treatment decisions. In our study, when applying the FRAX criteria but not the WHO criteria, the frequency of high risk of osteoporotic fracture in patients with knee OA was higher than in HCs. Moreover, additional candidates for osteoporosis treatment were identified among patients with knee OA based on the FRAX criteria. Therefore, physicians may consider applying FRAX calculations in patients with knee OA to determine appropriate osteoporosis treatment. Whether the current treatments for osteoporotic fractures, which target individuals with low BMD, will be appropriate for patients with OA is yet to be determined. Additional prospective studies involving large populations are required to support the application of the FRAX calculation in patients with OA. We also suggest that further studies be conducted to develop a fracture risk assessment tool that includes OA as a risk factor, similar to the way FRAX includes rheumatoid arthritis as a risk factor. Our study has some limitations. First, the number of knee OA patients included in the study was relatively small. In particular, the proportion of women (96.7%) was much higher than that of men. However, considering that female sex is a major risk factor for osteoporosis and knee OA [30,31], this study reflects the major concerns in a real-world setting. Second, although we excluded subjects with self-reported OA, knee pain, or radiologically defined knee OA, there may still be a possibility of selection bias in the HC group. Third, this study was restricted to knee OA; thus, further studies are needed to determine whether the relationship between OA and osteoporotic fractures indicated by our results can be applied to other joints. Fourth, although patients with knee OA showed an increased risk of developing osteoporotic fractures, there are limitations in determining a clear causal relationship because of the cross-sectional nature of our study.
Conclusions
This study demonstrated that the frequency of osteoporosis assessed by BMD was similar in patients with knee OA and HCs. In contrast, the probability of developing a fracture using the FRAX calculation was higher in the knee OA group than in HCs. In addition, more candidates for osteoporosis treatment could be identified among knee OA patients when considering the FRAX criteria. Thus, FRAX may have a clinical impact on treatment decisions aimed at reducing the development of osteoporotic fractures in patients with knee OA.
Molecular evidences confirm the taxonomic separation of two sympatric congeneric species (Mollusca, Gastropoda, Neritidae, Neritina)
Abstract A reliable taxonomy, together with more accurate knowledge of the geographical distribution of species, is a fundamental element for the study of biodiversity. Multiple studies on the gastropod family Neritidae record three species of the genus Neritina in the Brazilian Province: Neritina zebra (Bruguière, 1792), Neritina virginea (Linnaeus, 1758), and Neritina meleagris Lamarck, 1822. While N. zebra has a well-established taxonomic status and geographical distribution, the same cannot be said regarding its congeners. A widely cited reference for the group in Brazil considers N. meleagris a junior synonym of N. virginea. Using a molecular approach (phylogenetic, species delimitation, and statistical parsimony network analyses), based on two mitochondrial markers (COI and 16S), this study investigated if N. virginea and N. meleagris are distinct species. The molecular results confirmed the existence of two strongly supported distinct taxonomic entities in the Brazilian Province, which is consistent with the morphological descriptions previously proposed for N. virginea and N. meleagris. These species occur in sympatry in the intertidal sandstone formations of Northeastern Brazil. Despite the great variation in the colour patterns of the shells, the present study reinforced previous observations that allowed the differentiation of these two species based on these patterns. It also emphasized the importance of the separation of these two clades in future studies, especially those conducted in the Brazilian Province, since these species may cohabit.
Introduction
Molluscs from the gastropod family Neritidae are the most diverse members of Neritimorpha (Kano et al. 2002), with some groups within this family having variable shell colouration patterns (e.g., Russell 1941; Tan and Clements 2008; Eichhorst 2016). Due to the great variety of colour patterns, the delimitation of different species could be hampered, especially if they are closely related and live in sympatry (e.g., Huang 1997; Blanco et al. 2014). This may explain the disparate estimates in the literature of the number of species of Neritina reported for the Brazilian Province.
Several studies report three species of the genus Neritina on the Brazilian coast: Neritina zebra (Bruguière, 1792), Neritina virginea (Linnaeus, 1758), and Neritina meleagris Lamarck, 1822 (e.g., Baker 1923; Russell 1941; Rios 1975; Matthews-Cascon et al. 1990; Díaz and Puyana 1994; Quintero-Galvis and Castro 2013; Eichhorst 2016). While N. zebra has a well-established taxonomic status and geographical distribution (Matthews-Cascon et al. 1990; Rios 2009; Barroso et al. 2012; Eichhorst 2016), there is uncertainty regarding its congeners. The shell catalogues of Rios (1985, 1994, 2009), a widely cited reference in studies conducted in Brazil, state that only two species occur in the Brazilian Province: N. virginea and N. zebra. In these compendia, N. meleagris is considered a junior synonym of N. virginea without any justification. Quintero-Galvis and Castro (2013), using a molecular phylogenetic approach to analyse specimens from the Colombian coast (Caribbean Province), concluded that N. meleagris and N. virginea are phylogenetically close, but different species. Since these species have a wide geographic distribution, encompassing the Caribbean and Brazilian Provinces (Barroso et al. 2016; Eichhorst 2016), which are separated by a recognized biogeographic barrier (the Amazon-Orinoco outflow) (Floeter et al. 2008; Barroso et al. 2016), the inclusion of specimens from both biogeographical provinces in phylogenetic analyses is desirable.
Since a reliable taxonomy, together with a more accurate knowledge about the geographical distribution of species, is fundamental to the study of biodiversity (Wheeler et al. 2004), the present study aims to investigate if N. virginea and N. meleagris are two distinct species, using molecular data (phylogenetic, species delimitation, and statistical parsimony network analyses).
Methods
We collected specimens from Barra Grande beach (Piauí State) (2°54.125'S, 41°24.573'W) and Camocim beach (Ceará State) (02°51.778'S, 41°51.57'W), both located in Northeastern Brazil, and preserved them in 95% ethanol. We identified the species using the literature (Russell 1941; Matthews-Cascon et al. 1990; Eichhorst 2016), primarily based on the shape and colour patterns of the shells. Specimens were collected under SISBIO permit no. 57473-3 and deposited in the malacological collection "Prof. Henry Ramos Matthews" - series B (CMPHRM-B) of Universidade Federal do Ceará (UFC). A total of 17 specimens, eight newly sequenced and nine already published by Quintero-Galvis and Castro (2013) and available on GenBank, were used for phylogenetic reconstruction (Table 1). All sequences used were attributed to nominal species considered valid in the literature (see Aktipis and Giribet 2010; Cook et al. 2010; Page et al. 2013; Quintero-Galvis and Castro 2013).
We extracted whole genomic DNA from the foot muscle of specimens using the Qiagen DNeasy Blood & Tissue Kit (Qiagen, Valencia, CA, USA). The quality and integrity of the DNA obtained were evaluated in a micro-volume spectrophotometer. Amplification of double-stranded fragments of the cytochrome c oxidase I (COI) and 16S mitochondrial genes was achieved by polymerase chain reaction (PCR) using newly developed neritid-specific custom primers for the 16S gene [(16SNer_F 5'ACTACTCCGCCTGTTTATCAAA3') and (16SNer_R 5'GGGCTTAAACCTAATGCACTT3')] and modified versions of the Folmer et al. (1994) primers for the COI gene [(LCO1490_mod 5'ATTCTACGAATCAYAAAGAYATTGG3') and (HCO2198_mod 5'TAWACTTCAGGATGACCRAAAAATCA3')]. The PCR was carried out using GoTaq Green Master Mix (Promega Corporation), 1.25 μL of each primer (10 μM stock), and 100 ng of DNA template in a 25 μL reaction volume. The PCR cycles for COI and 16S amplification consisted of an initial denaturation step at 95 °C for 2 min, followed by 35 cycles of denaturation at 95 °C for 30 s, annealing at 48-49 °C for 45 s, and extension at 72 °C for 1 min, and a final extension at 72 °C for 5 min. The PCR products were then examined using gel electrophoresis on 1.3% Tris-Borate-EDTA agarose gel stained with SYBR Safe DNA Gel Stain (Invitrogen). The PCR products showing strong bands in gel electrophoresis were purified with illustra ExoProStar 1-Step (GE Healthcare Life Sciences), following its standard protocol, and sent for Sanger sequencing (Macrogen Inc., South Korea).
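For record-keeping, the thermal profile above can be encoded compactly as data; the structure below simply transcribes the temperatures and times from the text (the field names are our own):

# Thermal cycling profile transcribed from the text (field names are ours).
PCR_PROFILE = {
    "initial_denaturation": (95, 120),           # degC, seconds
    "cycles": 35,
    "per_cycle": [("denaturation", 95, 30),
                  ("annealing", (48, 49), 45),   # 48-49 degC by primer pair
                  ("extension", 72, 60)],
    "final_extension": (72, 300),
}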
The forward and reverse sequences for each gene fragment were edited using Geneious v. 7.1 (Biomatters). The concatenated alignments of COI and 16S were conducted using the MAFFT program with the G-INS-i algorithm (Katoh and Standley 2013) under default parameters, with additional inspection by eye for accuracy (see Suppl. material 1). As our 16S sequences (650 bp for N. virginea and 651 bp for N. meleagris) were longer than those available on GenBank, we used a shorter homologous region in the alignment of this gene. However, we deposited the full 650-651 bp 16S sequences in GenBank. The combined dataset contained 1124 bp (639 bp for COI and 485 bp for 16S). Evolutionary relationships were estimated for the concatenated genetic markers using Bayesian inference (BI) and maximum likelihood (ML) analyses. The best-fit evolution models were determined using PartitionFinder (Lanfear et al. 2012), considering the codon positions for the protein-coding COI gene and a single partition for the 16S gene. The corrected Bayesian Information Criterion (BIC) was used to select among the options. PartitionFinder selected GTR + I, F81, and HKY + G as the best models for the three codon positions of COI, respectively, and GTR + G for 16S. Bayesian inference, using the previously mentioned partitions and models, was performed using the MrBayes program (Ronquist et al. 2012), and the dataset was run for 3 × 10^7 generations, with Markov chains sampled every 1000 generations and the standard 25% burn-in discarded. Convergence was checked using Tracer 1.6 (http://beast.bio.ed.ac.uk/Tracer). Tree branches were considered strongly supported if posterior probabilities were ≥ 0.90. Randomized accelerated maximum likelihood (RAxML) (Stamatakis 2006) was used to generate a ML tree with partitions under the GTR + G evolution model and with 1 × 10^4 replications. Branches with bootstrap values greater than 70 were considered strongly supported. Phylogenetic trees were drawn and edited in FigTree 1.4.3 (http://tree.bio.ed.ac.uk/software/figtree/).
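A minimal sketch of the alignment and ML steps as command-line invocations driven from Python is given below; the file names are hypothetical, and the flags shown are the standard ones for MAFFT's G-INS-i mode and RAxML's rapid-bootstrap-plus-ML-search mode (an illustration of the pipeline, not the authors' exact invocation):

import subprocess

# G-INS-i alignment (global pairwise alignment, iterative refinement)
with open("coi_aligned.fasta", "w") as out:
    subprocess.run(["mafft", "--globalpair", "--maxiterate", "1000",
                    "coi_raw.fasta"], stdout=out, check=True)

# RAxML: combined ML search + rapid bootstrap under GTR+GAMMA, with a
# partition file for the codon/gene partitions.
subprocess.run(["raxmlHPC", "-f", "a", "-m", "GTRGAMMA",
                "-q", "partitions.txt",
                "-p", "12345", "-x", "12345", "-#", "10000",
                "-s", "concat_aligned.phy", "-n", "neritina"], check=True)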
For the species delimitation analyses, we initially constructed a distance matrix based on the Kimura 2-parameter (K2P) model, using the COI sequences, in the MEGA 6.0.6 software (Tamura et al. 2013). This matrix was analysed with the default settings of the Automatic Barcode Gap Discovery (ABGD) method (Puillandre et al. 2012) (available at http://www.abi.snv.jussieu.fr/public/abgd/abgdweb.html). We also used the Species Delimitation plugin v1.04 for Geneious v. 7.1 (Masters et al. 2011) with two data sets: (1) the results of our Bayesian concatenated phylogenetic analysis (COI + 16S), and (2) the results of a neighbor-joining tree based on the K2P model with 10,000 bootstrap replicates using COI sequences generated in MEGA 6.0.6. In this analysis, we calculated (1) the mean distance between the members within the clade (Intra Dist), (2) the mean distance of those individuals to the nearest clade (Inter Dist-closest), (3) the ratio between Intra Dist and Inter Dist-closest, and (4) the P ID, which represents the mean probability (95% confidence interval) of an unknown member of the putative species being correctly placed inside (Strict P ID), or at least as the sister group of (Liberal P ID), the species clade in a tree (Masters et al. 2011).
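The K2P distance underlying the matrix above has a closed form, d = -0.5 ln[(1 - 2P - Q) sqrt(1 - 2Q)], with P and Q the proportions of transitions and transversions; a minimal implementation (which, unlike MEGA, simply skips ambiguous sites and gaps) is:

from math import log, sqrt

TRANSITIONS = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

def k2p(seq1: str, seq2: str) -> float:
    """Kimura 2-parameter distance between two aligned sequences."""
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a in "ACGT" and b in "ACGT"]
    n = len(pairs)  # comparable sites only (gaps/ambiguities skipped)
    p = sum((a, b) in TRANSITIONS for a, b in pairs) / n       # transitions
    q = sum(a != b and (a, b) not in TRANSITIONS
            for a, b in pairs) / n                             # transversions
    return -0.5 * log((1 - 2 * p - q) * sqrt(1 - 2 * q))

print(k2p("ACGTACGT", "ACGTACGC"))  # one transition over eight sites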
A statistical parsimony network analysis was conducted with COI sequences (347 bp), using the TCS algorithm (Clement et al. 2002) implemented in PopART v. 1.7.2 (Leigh and Bryant 2015). The sequence alignment step followed the same procedures already described in our phylogenetic analysis protocol (see Suppl. material 1). In addition to the N. virginea and N. meleagris sequences generated in this study, we included 55 sequences of N. virginea from island (Puerto Rico: 44 sequences) and continental (Panama: 10; Colombia: 1) locations in the Caribbean Province (Aktipis and Giribet 2010; Cook et al. 2010; Page et al. 2013; Quintero-Galvis and Castro 2013) (Table 1). Along with the phylogenetic and species delimitation analyses, we also included the only COI sequence available on GenBank attributed to N. meleagris from Colombia (Quintero-Galvis and Castro 2013) (Table 1). The shells and opercula of the specimens of N. virginea and N. meleagris submitted to molecular analyses were observed and photographed under a stereomicroscope. A scanning electron microscope (SEM) was used to view their radulae (two females of N. virginea, and two males and one female of N. meleagris) at the Analytical Facility of UFC (Central Analítica, UFC). This information was collected in order to compare our results with the information available in the literature.
Results and discussion
The results of our molecular analyses (phylogeny, species delimitation, and statistical parsimony network) confirmed the existence of two strongly supported clades living in sympatry in the intertidal beachrocks of Northeastern Brazil (Brazilian Province). The Bayesian and maximum likelihood trees showed the same topology, with the formation of four clades within Neritina: Group I (Neritina piratica + Neritina usnea), Group II (Neritina meleagris, collected in NE Brazil), Group III (Neritina virginea, collected in NE Brazil and Colombia, + "Neritina meleagris" from Colombia), and Group IV (Neritina punctulata) (Fig. 1). Groups II and III were also observed in the ABGD analysis. Intraspecific distances in these groups were at least one order of magnitude smaller than the interspecific distances. Our minimum interspecific genetic distance values (COI region only) involving Groups II and III were 8.4 and 9.6%, respectively (Table 2). These values are higher than the minimum value assumed by Abdou et al. (2017) to characterize distinct Indo-Pacific Neritina species. In addition, the probabilities of a new sequence fitting inside, P ID (Strict), or at least as the sister group of, P ID (Liberal), these clades were equal to, or in most cases greater than, 84% (Table 2). These results are compatible with the values found to delimit species in different groups of gastropods (e.g. Churchill et al. 2014; Cooke et al. 2014; Espinoza et al. 2014).

Fig. 1 Numbers on and below the main branches represent the posterior Bayesian probabilities (BP) (> 0.90) and bootstrap values for maximum likelihood (ML) (> 70%), respectively. Specimens with the number "1" are from Camocim beach (Ceará State, NE Brazil) and those with numbers "2", "3", and "4" are from Barra Grande beach (Piauí State, NE Brazil). The numbered specimens of N. virginea (1, 2, 3, and 4) and N. meleagris (1, 2, 3, and 4) are the same specimens shown in Figure 3.

Although the distinction between clades showed high support values, the phylogenetic relationship between them could not be recovered. As we did not have access to the specimens, it was not possible to check the shell colour patterns of the Neritina meleagris from Colombia (obtained from GenBank) that was included in the same clade as Neritina virginea. Thus, we suspect that an error may have occurred at the time of submission of the sequences to GenBank, since, in the study of Quintero-Galvis and Castro (2013), N. virginea and N. meleagris appeared in very distinct branches of the phylogenetic tree.
Despite the geographical distance, all N. virginea sequences from Brazil and the Caribbean were very similar, with all haplotypes grouped within a few mutational steps (Fig. 2). This result reinforces the validity of N. virginea and confirms its presence in the Brazilian Province. As also observed in the phylogenetic analysis, the only sequence assigned to N. meleagris from the Caribbean Province is positioned within one of the most frequent N. virginea haplotypes for this region (Fig. 2). With respect to our N. meleagris sequences, although this species is found in sympatry with its congener in the Brazilian Province, it is separated from the N. virginea haplogroups by at least 36 mutational steps.

Fig. 3 (in part) M radula of Neritina virginea (SEM), with rachidian tooth enlarged in the upper left quadrant; N radula of Neritina meleagris (SEM), with rachidian tooth enlarged in the upper left quadrant. Abbreviations: l1 first lateral tooth, l4 fourth lateral tooth, m marginal teeth, r rachidian tooth. The specimens with the number "1" are from Camocim beach (Ceará State, NE Brazil) and those with numbers "2", "3", and "4" are from Barra Grande beach (Piauí State, NE Brazil). The numbered specimens of N. virginea (1, 2, 3, and 4) and N. meleagris (1, 2, 3, and 4) are the same specimens used in the phylogenetic analysis of Figure 1. Scale bars: 1.0 mm (A-L); 100 μm (M, N).
Figure 3 shows the colour patterns, opercula, and radulae of the N. meleagris and N. virginea specimens collected from the Barra Grande and Camocim beaches. Our molecular results are consistent with the morphological descriptions previously proposed for each species (Russell 1941; Matthews-Cascon et al. 1990; Eichhorst 2016). Russell (1941) described, for both species, a colour pattern consisting of dark zigzag lines and lighter spots. However, this author emphasized that while N. virginea has a leading edge outlined in heavy black, N. meleagris instead has a leading edge outlined with white, white and black, or white and red, resembling imbricating scales. The imbricating scales pattern was emphasized in the original description of N. meleagris (Lamarck 1822), whereas Matthews-Cascon et al. (1990) and Eichhorst (2016) highlighted the differences in the leading edges of the colour pattern for each species. Although we did not examine the type specimens, individuals of N. virginea from the Linnean Collection at the Natural History Museum, London (see http://linnean-online.org/17152/), and of N. meleagris from the type locality (Dominican Republic) (see http://data.biodiversitydata.nl/naturalis/specimen/ZMA.MOLL.313038), had the same leading edge patterns as described earlier. All analysed specimens of N. meleagris had the leading edge outlined with white or white and black, while N. virginea specimens had the leading edge outlined in black (Fig. 3A-H). Despite the great variation in their shell colour patterns, a more detailed observation of the leading edges of the N. virginea and N. meleagris shells allows the separation of the two species, even in the field. Warmke and Abbott (1962) also emphasized the use of leading edges to separate the two species. Williams (2017) argued that the colours and patterns of gastropod shells could be genetically determined, influenced directly by environmental factors, or a combination of both. Specifically, the patterns of the leading edges (outlined with white or white and black in N. meleagris and outlined in black in N. virginea) appear to be under genetic control rather than influenced directly by environmental factors, since the patterns for each species are consistent regardless of the location studied (e.g. Russell 1941; Eichhorst 2016; present study). This observation is reinforced by the clades obtained in our phylogenetic analysis, corroborating the diagnostic colour patterns previously described (Fig. 1).
Besides the shell colour patterns, N. virginea and N. meleagris differ from each other in subtle ways. The inner lips of the shells of the two species are denticulated. However, in N. virginea there are several small denticles interspersed with two larger teeth, while in N. meleagris the teeth are larger, more prominent in the central region, and less numerous than in N. virginea (Fig. 3I, K). Russell (1941), Matthews-Cascon et al. (1990), and Eichhorst (2016) also highlighted these differences regarding the number of teeth on the inner lip. Both species have a calcareous and smooth operculum, with a bifurcated apophysis. Comparing the opercula, N. virginea has a darker (bluish-black) and more elongated operculum, with the apophysis elements thinner and more separated from each other. On the other hand, the operculum of N. meleagris presents a lighter coloration (yellowish-black) and a semi-circular shape, with the apophysis elements much stouter and closer to each other (Fig. 3J, L). In the present study, Neritina virginea and N. meleagris have a very similar radular morphology: a rhipidoglossate radula, with one rachidian tooth, five pairs of lateral teeth, and many denticulated marginal teeth arranged in transverse rows (Fig. 3M, N; see also Suppl. material 2). The most striking difference between these radulae is the rectangular rachidian tooth, which has three cusps in N. meleagris (both male and female) but is cuspless in N. virginea. The first lateral tooth of N. virginea is more slender than that of N. meleagris. Previous studies have shown that the radular tooth pattern of neritids is very stable, the most variable character being the number of cusps on the fifth lateral tooth, which is likely correlated with age (Baker 1923; Huang 1997; Haynes 2001). This characteristic makes it difficult to define intra- and interspecific differences. Further studies are needed to better define the differences between the radulae of the two species.
Our molecular data show that N. virginea and N. meleagris are two distinct species, thus confirming the N. meleagris record for the Brazilian coast. In summary, our results, along with the already well-established record of Neritina zebra (Matthews-Cascon et al. 1990; Rios 2009; Barroso et al. 2012, 2016; Eichhorst 2016), demonstrate that there are three species of the genus Neritina registered for the Brazilian Province to date. We emphasize the importance of the separation of N. virginea and N. meleagris in future studies, especially those conducted in the Brazilian Province, since these species may cohabit. In the field, these species can be identified with a detailed observation of the leading edge patterns of their shells, assisting ecological studies. Further research is needed in other areas along the Brazilian Province to determine the geographic distribution of N. virginea and N. meleagris, highlighting the locations where they co-occur.
Figure 1.
Figure 1. Molecular phylogenetic hypothesis (Bayesian tree) of some species of Neritidae of the Western Atlantic. The Bayesian tree was based on partial mitochondrial COI and 16S sequences. The Neritina meleagris and Neritina virginea clades (ingroup) are highlighted. The other taxa were used as outgroup. Numbers on and below the main branches represent the posterior Bayesian probabilities (BP) (>0.90) and bootstrap values for maximum likelihood (ML) (>70%), respectively. Specimens with the number "1" are from Camocim beach (Ceará State, NE Brazil) and those with numbers "2", "3", and "4" are from Barra Grande beach (Piauí State, NE Brazil). The numbered specimens of N. virginea (1, 2, 3, and 4) and N. meleagris (1, 2, 3, and 4) are the same specimens shown in Figure 3.
Figure 2.
Figure 2. Statistical parsimony network analysis (TCS algorithm) based on 64 partial mitochondrial COI sequences (347 bp). This analysis included specimens of Neritina meleagris and Neritina virginea from the Caribbean and Brazilian Provinces. The size of each circle is proportional to the frequency of the haplotype, and the colours inside the circles designate the geographical locations to which the samples belong. Black circles correspond to hypothetical haplotypes. The number of mutational steps is indicated by dashes on branches. We highlight the 36 mutational steps that separate the two species' haplotypes.
Figure 3.
Figure 3. Colour patterns of shells, opercula, and radulae of the Neritina virginea and Neritina meleagris specimens analysed. The red arrows highlight the differences between the leading edges of the colour patterns of both species: N. virginea has the leading edges outlined in heavy black, while N. meleagris has the leading edge outlined in white or black and white. A Neritina virginea_1 B Neritina virginea_2 C Neritina virginea_3 [...] M radula of Neritina virginea (SEM), with the rachidian tooth enlarged in the upper left quadrant N radula of Neritina meleagris (SEM), with the rachidian tooth enlarged in the upper left quadrant. Abbreviations: l1 first lateral tooth, l4 fourth lateral tooth, m marginal teeth, r rachidian tooth. The specimens with the number "1" are from Camocim beach (Ceará State, NE Brazil) and those with numbers "2", "3", and "4" are from Barra Grande beach (Piauí State, NE Brazil). The numbered specimens of N. virginea (1, 2, 3, and 4) and N. meleagris (1, 2, 3, and 4) are the same specimens used in the phylogenetic analysis of Figure 1. Scale bars: 1.0 mm (A-L); 100 μm (M, N).
Table 1.
List of species included in the phylogenetic, species delimitation, and statistical parsimony network analyses. The voucher numbers of species collected in NE Brazil and the accession numbers of the sequences obtained in the present study and from GenBank are indicated. The numbers in parentheses next to the GenBank accession numbers correspond to each of the specimens analysed in the present study (see Figs 1, 3).
Table 2.
Species delimitation results from the Bayesian concatenated and Neighbor-Joining trees. These analyses were performed with the Species Delimitation plugin for Geneious.
A Review of Image Compression for Medical Images Using Differential Pulse Code Modulation (DPCM)
Image processing modifies pictures to improve them, extract information, and change their structure (composition, image editing, image compression, etc.). Images can be processed by optical, photographic, and electronic means, but image processing using digital computers is the most common method because digital methods are fast, flexible, and precise. Image compression reduces the redundancy in image data to optimize transmission and storage. This paper outlines the different types of image compression and, lastly, examines Differential Pulse Code Modulation (DPCM).
1. Introduction
The objective of image compression is to reduce the redundancy of an image. Lossless compression methods apply when the data are critical and loss of information is not acceptable. Medical image compression is therefore based on lossless methods. Medical imaging is used for the diagnosis of diseases and for surgical planning; the images need long-term storage for profiling patients' data as well as efficient transmission for long-distance diagnosis 1. It is essential to make medical image compression lossless to avoid the loss of critical medical information. In the field of online diagnosis or real-time applications such as telemedicine, there is demand for hardware that can handle lossless compression and accelerate the computation process. There have been many studies on lossless medical compression algorithms. DPCM has an advantage over other lossless compression schemes due to its simple structure, which is easy to implement 2.
2. Problem Formulation:
The objective of this proposed work is to implement a robust technique for image compression using Differential Pulse Code Modulation (DPCM) that can compress the data as much as possible. In this multimedia age, efficient compression is a genuinely difficult task. Early research in image compression introduced many techniques such as JPEG, JPEG-2000, and JPEG-LS.
3. Image Processing Techniques:
Image processing is the analysis and manipulation of a digitized image, especially in order to improve its quality and its storage and transmission over networks or the cloud. Our aim is to improve compression techniques to meet these requirements. Compression is useful because it reduces the consumption of expensive resources, such as hard disk space or transmission bandwidth. On the downside, compressed data must be decompressed before use, and this extra processing may be detrimental to some applications. Data compression can be defined as reducing the amount of storage space required to store a given amount of data 3.
Data compression comes with many advantages. It saves storage space, bandwidth, cost, and the time required to transmit data from one place to another. Compression can be lossy or lossless. With lossless compression and decompression, the original and decompressed files are identical bit for bit. On the other hand, compression efficiency can be improved by discarding most of the redundant data without, however, losing much quality.
Types of Image Compression:
There are two types of image compression: lossless compression (reversible) and lossy compression (irreversible). Run-length encoding (RLE) and the lossless JPEG compression algorithm are examples of lossless compression.
In lossy compression, data are discarded during compression and cannot be recovered. Lossy compression achieves much greater compression than lossless techniques 5. Wavelet and higher-level JPEG are examples of lossy compression. JPEG 2000 is a progressive lossless-to-lossy compression.
Lossless Image Compression:
When hearing that image data are reduced, one could expect that the image quality will automatically be reduced as well. A loss of information is, however, totally avoided in lossless compression, where image data are reduced while image information is fully preserved 6. Lossless compression uses predictive encoding, in which the gray level of each pixel is used to predict the gray value of its right neighbour; only the small deviation from this prediction is stored. This changes the statistics of the image signal drastically. Statistical encoding is another important approach to lossless data reduction, and it can be especially successful if the gray-level statistics of the image have already been changed by predictive coding. The overall result is redundancy reduction, that is, a reduction of the repetition of the same bit patterns in the data. Lossless compression is therefore also called reversible compression.
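To make the claim about changed signal statistics concrete, the short sketch below (in Python, with a hand-made toy pixel row rather than real image data) compares the Shannon entropy of the raw gray levels with that of left-neighbour prediction residuals; because the residuals cluster near zero, the subsequent statistical encoder has far less variety to encode.

```python
import numpy as np

def entropy_bits(values) -> float:
    """Shannon entropy (bits per symbol) of a discrete-valued array."""
    _, counts = np.unique(np.asarray(values), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Toy row of a smooth image: neighbouring pixels are similar (hypothetical data).
row = np.array([100, 101, 103, 104, 104, 106, 109, 110, 111, 113], dtype=np.int16)
residuals = np.diff(row, prepend=row[0])  # predict each pixel from its left neighbour

print(f"raw entropy:      {entropy_bits(row):.2f} bits/pixel")       # ~3.1 bits
print(f"residual entropy: {entropy_bits(residuals):.2f} bits/pixel")  # ~1.8 bits
```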
Lossy Image Compression:
Lossy data compression has, of course, a strong negative connotation, and it is sometimes doubted, quite emotionally, whether it is at all applicable in medical imaging.
In transform encoding, a mathematical transformation similar to the Fourier transform is performed for each image, separating image information with gradual spatial variation of brightness (regions of essentially constant brightness) from information with faster variation of brightness at the edges of the image.
In image data reduction, this second step is called quantization. Since this quantization step cannot be reversed when decompressing the data, the overall compression is 'lossy' or 'irreversible'.
4. Proposed Methodology:
The proposed technique uses Differential Pulse Code Modulation (DPCM) for image compression in an efficient manner. The method comprises two systems: a compression system and a decompression system. Although many techniques exist to implement the transformation stage, here we use Differential Pulse Code Modulation.
For encoding, many techniques are available, but we use Huffman encoding because it is efficient and easy to implement. During reconstruction of the image, both the transformation and the encoding must be applied in reverse. A minimal sketch of this pipeline is given below.
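The sketch below shows row-wise DPCM prediction residuals followed by a Huffman code-length computation. The toy image, the left-neighbour predictor, and the heap-based Huffman routine are illustrative choices consistent with the description above, not the authors' exact implementation.

```python
import heapq
from collections import Counter
import numpy as np

def dpcm_encode(image: np.ndarray) -> np.ndarray:
    """Row-wise DPCM: predict each pixel from its left neighbour, store residuals."""
    img = image.astype(np.int16)
    res = img.copy()
    res[:, 1:] = img[:, 1:] - img[:, :-1]
    return res

def dpcm_decode(res: np.ndarray) -> np.ndarray:
    """Invert DPCM exactly (lossless) by cumulative summation along each row."""
    return np.cumsum(res, axis=1).astype(np.uint8)

def huffman_code_lengths(symbols) -> dict:
    """Code length per symbol from a standard Huffman tree built with a heap."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): 1}
    heap = [(n, i, {s: 0}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)  # tie-breaker so tuples never compare the dicts
    while len(heap) > 1:
        n1, _, d1 = heapq.heappop(heap)
        n2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (n1 + n2, tie, merged))
        tie += 1
    return heap[0][2]

# Hypothetical 8x8 gradient image (smooth, so residuals are mostly constant).
img = np.arange(64, dtype=np.uint8).reshape(8, 8) + 50
res = dpcm_encode(img)
assert np.array_equal(dpcm_decode(res), img)  # lossless round trip
lengths = huffman_code_lengths(res.ravel().tolist())
bits = sum(lengths[s] for s in res.ravel().tolist())
print(f"{bits} bits after DPCM+Huffman vs {img.size * 8} bits raw")
```

On this toy gradient the residual alphabet collapses to a handful of symbols, so the Huffman stage assigns a one-bit code to the dominant residual and the total falls well below the 8 bits per pixel of the raw image.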
5. Conclusion
In this paper, we analysed previous work on image compression using various transformation and encoding techniques, and we find that Differential Pulse Code Modulation (DPCM) combined with Huffman encoding offers better performance and is easy to implement.
Correlating Local Volumetric Tissue Strains with Global Lung Mechanics Measurements
The mechanics of breathing is a fascinating and vital process. The lung has complexities and subtle heterogeneities in structure across length scales that influence mechanics and function. This study establishes an experimental pipeline for capturing alveolar deformations during a respiratory cycle using synchrotron radiation micro-computed tomography (SR-micro-CT). Rodent lungs were mechanically ventilated and imaged at various time points during the respiratory cycle. Pressure-Volume (P-V) characteristics were recorded to capture any changes in overall lung mechanical behaviour during the experiment. A sequence of tomograms was collected from the lungs within the intact thoracic cavity. Digital volume correlation (DVC) was used to compute the three-dimensional strain field at the alveolar level from the time sequence of reconstructed tomograms. Regional differences in ventilation were highlighted during the respiratory cycle, relating the local strains within the lung tissue to the global ventilation measurements. Strains locally reached approximately 150% compared to the averaged regional deformations of approximately 80-100%. Redistribution of air within the lungs was observed during cycling. Regions which were relatively poorly ventilated (low deformations compared to their neighbouring regions) deformed more uniformly at later stages of the experiment (consistent with their neighbouring regions). Such heterogeneous phenomena are common in everyday breathing. In pathological lungs, some of these non-uniformities in deformation behaviour can become exaggerated, leading to poor function or further damage. The technique presented can help characterize the multiscale biomechanical nature of a given pathology to improve patient management strategies, considering both the local and global lung mechanics.
Introduction
Interest in lung health has grown significantly recently, namely due to the COVID-19 pandemic. However, lung health commands growing attention globally due to concerns over the long-term consequences of pollution, the incidence of lung disease, and the aging population. Lung mechanics research has delivered multiscale modelling methods to study both health and pathophysiology [1]. Studies have characterized the architecture and function of the upper airways [2][3][4], the tree-like airway structure and its mechanics [5][6][7][8][9][10] with its tight coupling to the circulatory system [11], and the smallest air-containing units of the lung: the alveoli [12][13][14][15][16].
The tightly coupled respiratory and cardiovascular systems form a delicate yet highly efficient mass transfer unit. Pathogens, particulates, and other threats to the airways can alter the fine balance of mechanisms in play within this system. This can lead to an altered mechanical behaviour due to inflammation, swelling of the tissue (fluid imbalance), and altered material behaviour or tissue remodelling, to name a few. It is not feasible to clinically
Lung Sample Preparation
The lung vasculature was washed out with saline. The lungs were inflated with up to 0.5-5 mL depending on the size of the animal [35,36]. This was done to maintain at least their functional residual capacity and up to 50% of their total lung volume to mitigate excessive airway closure during transportation. Inflation occurred via a metal cannula inserted into the trachea. The lung was then closed off to the atmosphere using an air-tight luer stopcock to minimize lung collapse and airway degradation during transport. The cannula was held in place with sutures and tissue glue prior to being packed for shipping.
The test samples were delivered to Diamond Light Source Ltd. on the day of the testing and were dispatched and tested within 6 h of being culled. A small number of samples were presented (n = 4) due to supply chain limitations at the time. The pilot study aimed to assess the in situ mechanical test device (Controlled inflation Tester, CiT) developed for remote control at Diamond Light Source Ltd., Diamond-Manchester Branchline I13-2, and the feasibility of in situ measurements within the thorax building on previous work on excised lungs [14]. Sample numbers available during the time of the experiment were too low to perform an analysis with strong statistical significance. Therefore, this paper focuses on critiquing the rigor and capabilities of the method framework.
The cadaveric samples were carefully inserted upright into a 2-mm thin-walled polymer (PVC) cylindrical tube (70 mm diameter × 200 mm length) and the upper limbs were taped to ensure minimal movements during scanning. The head was supported to maintain an open trachea using a thread hooked under the teeth and suspended from the top of the containers. Further support was provided by tissue paper inserted around the body (outside the view of the beam) in the containers to ensure stability of the sample during rotation.
Lung Mechanics Measurements
Mechanical ventilation was performed on each animal to assess the mechanical state of the lung prior to imaging and to ensure that the system was airtight. To facilitate lung compliance/stiffness measurements as well as to enable streamlined measurements during synchrotron radiation micro-CT (SR-microCT), a Controlled inflation Tester (CiT) was developed.
Controlled inflation Tester (CiT)
Knowledge of the lung's mechanical state is critical when studying its architecture. A previous iteration of CiT used a syringe pump to deliver airflow, with pressure monitoring used to regulate lung pressure within physiological values during the imaging process [14]. To obtain continuous feedback of the lung behaviour and to ensure that the global mechanical state of the lung was monitored during imaging, a streamlined setup was established for control on the fly at the beamline.
The main component of the system is a 50-mL glass gastight syringe (Hamilton), directly driven by an encapsulated linear actuator (Nanotec L4118L1804-T6X2-A50) and held within a custom 3D-printed housing structure. The actuator (resolution 0.01 mm/step) can be used to control the delivered volume, with correction applied for compressibility of the air. This calculation depends on the pressure, measured with a silicon micromachined amplified pressure sensor (Omega PXM319, Omega, Manchester, UK), and the initial system volume. A photodiode (Osram Opto BPW 21, OSRAM Opto Semiconductors GmbH, Regensburg, Germany) and LED (Cree C503 Series, RS Components Ltd., Corby, UK) were placed on opposing sides of the syringe to provide a reference position for the syringe to automatically return to the initial volume. A 3/2 solenoid valve (SMC VDV valve) was used to allow automatic zeroing of the pressure to atmosphere.
The system can be operated in manual volume control mode, or can run automated Pressure-Volume (P-V) loops by setting the upper and lower limits on P. All hardware was controlled and measured using a National Instruments Data Acquisition card (USB-6211) and the control and acquisition software was written in LabVIEW (National Instruments, Austin, TX, USA).
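The air-compressibility correction described above can be sketched with an isothermal ideal-gas (Boyle's law) estimate, as below. This is a hedged illustration: the atmospheric-pressure constant, the system dead volume, and the example numbers are assumptions for demonstration, not the published CiT parameters or its exact algorithm.

```python
# Isothermal ideal-gas sketch of the compressibility correction.
P_ATM_CMH2O = 1033.0  # atmospheric pressure expressed in cmH2O (approximate)

def volume_delivered_to_lung(dv_syringe_ml: float, p0_cmH2O: float,
                             p1_cmH2O: float, v_system_ml: float) -> float:
    """Syringe displacement minus the gas 'absorbed' by the fixed system volume
    (tubing, connectors) as gauge pressure rises from p0 to p1, with the stored
    gas expressed as a volume at the final pressure (Boyle's law, isothermal)."""
    dv_stored = v_system_ml * (p1_cmH2O - p0_cmH2O) / (P_ATM_CMH2O + p1_cmH2O)
    return dv_syringe_ml - dv_stored

# e.g. pushing 5 mL while gauge pressure rises 0 -> 20 cmH2O with a hypothetical
# 30 mL of dead volume: roughly 4.43 mL actually reaches the lung.
print(f"{volume_delivered_to_lung(5.0, 0.0, 20.0, 30.0):.3f} mL")
```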
The hardware uses a stepper-motor, syringe, and pressure transducer to mechanically load the lung in a similar fashion to a syringe pump (Figure 1). Speeds of infusion/withdrawal ranged from 50 to 200 mL/min during testing and setup. Additional functions can be performed, including the use of solenoid valves to switch to atmospheric pressure (mitigating unwanted formation of vacuums in the system during test setup or zeroing); automatic compliance corrections for the connections between the lung and the device; variable ventilation controls, e.g., speed, volume, and pressure of inflation; and start and stop criteria either predefined or based on cycle characteristics computed on the fly. The control functionality is programmed in LabVIEW, with a streamlined interface recording test data (see Figure 2).

The nature of mechanical ventilation was a simple, steady, linear ramp infusion and withdrawal using CiT. Mechanical ventilation was performed purely to acquire a measure of lung compliance/stiffness and to mechanically deform the lung prior to each tomography image. Lung compliance/stiffness was obtained through cyclic loading under pressure control. The samples underwent at least 10 cycles of continuous ventilation until lung compliance stabilized (residual error < 1%). During mechanical ventilation, airways which may have collapsed reopen, and the tissue (after a period of inactivity) must also adapt to movement once again. There is considerable change to the lung compliance during this time, which requires reconditioning. Therefore, a control mode within CiT was developed to map the stability of a given P-V loop. If the P-V loop begins to stabilize, the slope becomes more similar from one cycle to the next, and therefore the error between cycles reduces towards negligible magnitudes. There are other sources of error that may arise when setting up such lung experiments, e.g., leakage. If there is a leak, it would be detected, as the P-V loop would not stabilize between cycles and the residual error would remain high.
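The stabilization criterion and leak check described above amount to comparing a per-cycle compliance estimate across consecutive cycles. A minimal sketch, assuming a simple least-squares chord compliance and the roughly 1% relative tolerance mentioned in the text (the data below are hypothetical):

```python
import numpy as np

def chord_compliance(pressure_cmH2O: np.ndarray, volume_ml: np.ndarray) -> float:
    """Compliance of one cycle as the slope of a least-squares V-vs-P fit (mL/cmH2O)."""
    slope, _intercept = np.polyfit(pressure_cmH2O, volume_ml, 1)
    return float(slope)

def cycling_stabilized(compliances: list, tol: float = 0.01) -> bool:
    """Stop criterion analogous to the one described: relative change in
    compliance between consecutive cycles below 1%. A loop that never satisfies
    this (residual error staying high) would flag a leak."""
    if len(compliances) < 2:
        return False
    prev, last = compliances[-2], compliances[-1]
    return abs(last - prev) / abs(prev) < tol

# Hypothetical per-cycle compliances converging after reconditioning:
history = [0.62, 0.71, 0.75, 0.758, 0.760]
print(cycling_stabilized(history))  # True: last change ~0.26% < 1%
```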
Cycling prior to imaging ensured that specimens had the same base mechanical performance. Once ready to begin imaging, 2-3 cycles were completed and the cycling stopped at a designated pressure. After 30 s, the tomogram was acquired. This delay enabled any reorganization of air within the lung and partial relaxation of the tissue to occur. This delay was optimized to mitigate motion blur during the collection of each tomogram. The target and actual end pressures during image acquisition were recorded. The true mechanical loading condition during an interrupted test is important and must be recorded in full. Therefore, the CiT continued to record pressure changes within the lung during the imaging cycle to give the context of the mechanical state to the resultant tomogram. This process was repeated, with each sample being ventilated prior to each scan.
Image Acquisition
Tomography was performed at the Diamond-Manchester Imaging Branchline I13-2 [37,38] of the Diamond Light Source synchrotron (Oxfordshire, UK). A partially-coherent, near-parallel, polychromatic "pink" beam (circa 8-30 keV) was generated by an undulator in an electron storage ring of 3.0 GeV energy and 300 mA current. For data collection, the undulator gap was set to 5.0 mm. The beam was reflected from the platinum stripe of a grazing-incidence focusing mirror and high-pass filtered with 1.3-mm pyrolytic graphite and 3.2-mm aluminium, resulting in a beam of weighted-mean photon energy of approximately 27 keV. Samples were aligned for data collection under low-dose conditions (approximately 2 min per sample) by temporarily setting the undulator gap to 7 mm. Slits were used to crop the beam just outside the field of view; this limited both the sample exposure and the intensity of noise arising from scintillator defects. Test samples, within their containers, were connected with a remotely controlled airline to CiT and mounted on perpendicular Newport MFA-PPD (Newport Corp., Irvine, CA, USA) linear stages atop an Aerotech ABRT-260 (Aerotech Inc., Pittsburg, PA, USA) rotation stage. Various propagation distances were trialled, and approximately 70 mm was chosen to give an adequate level of phase contrast; 1201 projections were acquired at equally spaced angles over 180° of continuous rotation ("fly scan"), with an extra projection (not used for reconstructions) collected at 180° to check for possible sample deformation, bulk movements, and beam damage relative to the first (0°) projection. Projections were collected by a pco.edge 5.5 Camera Link (PCO AG, Kelheim, Germany) detector (sCMOS sensor of 2560 × 2160 pixels) mounted on a visible light microscope of variable magnification. Magnification was controlled via rotation of a turret incorporating various scintillator-coupled objective lenses. A 1.25× objective (used for the main study) was coupled to a CdWO4 scintillator, mounted ahead of a 2× lens providing 2.5× total magnification, providing a field of view of 6.7 mm × 5.6 mm and an effective pixel size of 2.6 µm. A 2× objective (used for a second measurement), coupled to a CdWO4 scintillator providing 4× total magnification, achieved a field of view of 4.2 mm × 3.5 mm and an effective pixel size of 1.6 µm. A previous work [14] showed that scan times under 30 s were required to ensure stability during imaging (in excised samples). There is a balance between speed and signal-noise for such highly deformable samples. Multiple projection and exposure settings were assessed. Deformations were deemed acceptable for scan times up to 100 s with the new ventilation control system (CiT) at low lung volumes. At higher lung volumes, the lung has greater recoil. Therefore, a balance of sampling (number of projections) and scan time was established. The magnifications were chosen to reliably capture the same sub-volume of a highly deforming lung through a respiratory cycle, with sufficient spatial resolution to discern the airway walls. An exposure time of 30 ms was chosen. This led to scanning times of circa 36 s per sample, sufficient to capture the main features without motion blur. Prior to each scan, each specimen was cycled between −5 and 30 cmH2O using CiT and then stopped at the desired pressure for that scan.
This process of ventilating with CiT, stopping at a given pressure and imaging, was repeated to acquire multiple images to capture the deformed geometry at various points along its P-V curve.
Image Analysis: Reconstruction and Filtering
Data were reconstructed via filtered back projection with the open source, modular pipeline Savu [39,40]. Projection images were subject to flat-and dark-field correction, followed by a correction for optical distortion [41] (Figure 2) and another correction to suppress ring artefacts [42]. Phase retrieval of inline phase contrast scans was performed with Paganin filtering to restore quantitative detail to tomograms and to facilitate straightforward threshold-based segmentation of tissue from air. The degree of Paganin filtering is proportional to the relative contribution of X-ray absorption (β) and phase shift (δ) to projections. Under-filtering confers marginal benefit for segmentations, but over-filtering leads to feature loss and blurring. A range of filtering degrees was screened, and δ/β = 1000 gave the clearest results ( Figure 2). These settings showed no observable loss relating to the volume/mass calculations (<0.01%) of the sample within the image when segmenting the Paganin-filtered plus distortion-corrected image versus the solely distortion-corrected image. Digital volume correlation (DVC) works by tracking displacements, deformations, and distortions of groupings of voxels forming larger features (>50 µm). Any texture differences shown within the tissue walls are averaged out in the analysis and would otherwise introduce noise; therefore, filtering was considered acceptable for all images. The influence of filtering on the DVC analysis is quantified as part of the zero-strain test.
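For orientation, single-material Paganin phase retrieval of one flat-field-corrected projection can be written as a Fourier-domain low-pass filter parameterized by δ/β, as sketched below. This is the standard textbook form rather than Savu's exact implementation, and the pixel size, propagation distance, and wavelength in the example only approximate the beam parameters quoted above.

```python
import numpy as np

def paganin_phase_retrieval(projection: np.ndarray, pixel_m: float, dist_m: float,
                            wavelength_m: float, delta_beta: float = 1000.0) -> np.ndarray:
    """Single-material Paganin retrieval of one flat-field-corrected projection:
    Fourier-domain low-pass with kernel 1 / (1 + pi*lambda*z*(delta/beta)*|u|^2),
    u in cycles per metre, followed by -log to linearize towards thickness."""
    ny, nx = projection.shape
    u = np.fft.fftfreq(nx, d=pixel_m)
    v = np.fft.fftfreq(ny, d=pixel_m)
    uu, vv = np.meshgrid(u, v)  # shape (ny, nx), matching the projection
    kernel = 1.0 / (1.0 + np.pi * wavelength_m * dist_m * delta_beta * (uu**2 + vv**2))
    smoothed = np.real(np.fft.ifft2(np.fft.fft2(projection) * kernel))
    return -np.log(np.clip(smoothed, 1e-9, None))

# Roughly matching the setup above: 2.6 um pixels, ~70 mm propagation,
# ~27 keV photons (lambda ~ 4.6e-11 m), delta/beta = 1000; dummy projection data.
proj = np.random.default_rng(0).uniform(0.5, 1.0, (256, 256))
retrieved = paganin_phase_retrieval(proj, 2.6e-6, 0.07, 4.6e-11)
```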
Image Analysis: Digital Volume Correlation (DVC)
DVC analysis was performed using DaVis v10.05 (LaVision, Göttingen, Germany). Images were downsized to 16-bit *.raww images, with the background segmented and removed. A sequence of images was collected for each sample and processed in DaVis. Full-field 3D strains of each lung sample were computed with sub-volume sizes screened down to 32 voxels. Each strain stage relates to a point on the P-V loading curve. Small movements of the sample can occur during the entire test duration (approximately 30 min), causing rigid body motion of the sample from the first volume image to the last. The DaVis software can remove any rigid body motion detected prior to computing the strains. Zero-strain tests were performed, computing strains from repeat scans acquired at the same loading condition, in addition to the main study. The zero-strain test aimed to quantify the degree of uncertainty in the strain measurement. The lung sat on the stage under its own weight, with its own elasticity and within a compliant structure (sample container). Therefore, the zero-strain test provided a measure of confidence in the strain measurements produced during the loading cycles (the main experiment). Much research has been performed on uncertainty analysis with DVC [43]. A previous study on highly deformable lung samples demonstrated that <5% strains were observed throughout the volume [14].
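The core DVC operation, tracking a subvolume from the reference tomogram into the deformed one, can be illustrated with a brute-force normalized cross-correlation search, as in the sketch below. DaVis uses far more sophisticated multi-pass, sub-voxel algorithms, so this toy version only conveys the principle; the subvolume and search sizes are illustrative.

```python
import numpy as np

def track_subvolume(ref: np.ndarray, deformed: np.ndarray, center: tuple,
                    half: int = 16, search: int = 8):
    """Track one cubic subvolume (side 2*half voxels) of `ref` into `deformed`
    by exhaustive normalized cross-correlation over integer shifts of +/-search
    voxels. Returns the best (dz, dy, dx) shift and its correlation coefficient.
    `center` must lie at least half+search voxels from every volume edge."""
    z, y, x = center
    tpl = ref[z - half:z + half, y - half:y + half, x - half:x + half].astype(float)
    tpl = (tpl - tpl.mean()) / (tpl.std() + 1e-12)
    best_shift, best_cc = (0, 0, 0), -np.inf
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                win = deformed[z + dz - half:z + dz + half,
                               y + dy - half:y + dy + half,
                               x + dx - half:x + dx + half].astype(float)
                win = (win - win.mean()) / (win.std() + 1e-12)
                cc = float((tpl * win).mean())
                if cc > best_cc:
                    best_cc, best_shift = cc, (dz, dy, dx)
    return best_shift, best_cc

# Repeating this over a grid of centers yields a displacement field whose
# gradients give the strain maps; a zero-strain test is the same call on two
# repeat scans, where the shifts (and derived strains) should be near zero.
```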
Imaging Optimisation
The first stage of the experiment involved optimizing imaging parameters. Sample images were taken in different sub-volumes of the rodent lungs (Figure 3), with key features and imaging artefacts highlighted. When ventilating lungs for imaging (i.e., unfixed samples), timing is important to enable air to settle and reorganize within the structure, allowing the tissues to relax to a certain extent and stopping significant motion during imaging. Increasing the scan speed can offset any risk of motion; however, it can also lead to poor signal-noise. Whilst optimizing such timings, images were taken at different stages of motion to see whether lung motion could be mitigated and where motion blur artefact in the images could be minimized. Figure 3c-e specifically shows the lung whilst deforming. Here, there is intentional movement during the test to exaggerate the motion blur in the image. This firstly helped calibrate visual perceptions for the finer degrees of motion blur (much like focusing a camera). A secondary outcome, however, was to potentially illustrate airway deformation patterns observed during a single fast scan (<25 s). Much work is needed to extract any intelligible and quantifiable information from these images. However, the degree of motion towards the peripheries in Figure 3e indicates the potential for better ventilation in regions of increased motion blur and/or less deformation/ventilation in central regions, which appear more "in focus". This was a secondary observation made during imaging optimization, in contrast to the primary study performing DVC to characterize local tissue strains. The main point of this exercise was to determine appropriate image acquisition and sample dwell times for the interrupted test and to best capture the lung geometry at various states of inflation. The images presented in Figure 3 led to the final protocol/timings outlined in Section 2.
Zero-Strain Test and the Reliability of the DVC Results
Further analysis of image postprocessing techniques and the resultant impact on DVC outputs is given in Figure 4. Once imaging parameters were set, a sample was mounted and ventilated until compliance converged on a given value (i.e., the airways had all reopened, and all air had reorganized itself within the lung). The P-V curve in Figure 4 illustrates the repeated cycling performed prior to each test. The classical sigmoidal form of the curve is observed. P-V measurements were monitored prior to each scan to assess any stiffening or damage sustained by the sample either due to degradation (given that these are cadaveric samples) or due to beam damage (radiation/heating). This is the first test for reliability for any subsequent strain measurements. The second test is on the repeatability of imaging the same region under the same state of deformation, where strain should be negligible: the zero-strain test. The given images are taken of a sub-volume within the thorax of a highly viscoelastic and deformable structure; it is envisaged that some motion is unavoidable and that the exact beam path may differ from scan to scan. To quantify the expected error, a zero-strain test was performed at a relatively high level of inflation. A higher level of inflation was chosen since this is where the lung is very likely to want to contract and return to a less deformed state (therefore highlighting a worst-case scenario for unwanted deformations from scan to scan). The images were taken within a minute of each other, and the resultant DVC measurement is shown in Figure 4. Within the bulk tissue, there is a relatively low level of strain computed <3%. Towards the peripheries, strains approach 5%. This is where larger areas of the image are formed of only soft tissue with few discernible features (i.e., no alveoli). Tissue/air interfaces seen in the alveoli, which are readily distinguishable and tracked by the DVC algorithm, are the focus of this study. It is understandable that a higher range of strains may be computed in the relatively feature-free solid tissue or background, since these are the noisier parts of the image with fewer features to track. Edges and boundaries are kept in the reported images as a reference to the lung surface. However, it is noted that the boundaries, where large movement occurs, may also be more susceptible to error. This error (<5%) is however very small compared to the magnitude of tissue deformation during a breathing cycle: approximately 100%.
The third test of reliability of the DVC results is in the use of image postprocessing prior to performing DVC computations. Figure 2 highlights the clarity of images when distortion corrections and Paganin filters were used on the raw reconstructed data. Figure 4 shows two strain plots: the top shows the distortion-corrected images used to compute the strain field, and the bottom shows the Paganin-filtered, distortion-corrected images used to compute the strain field. The two sets of images with their strain plots overlaid are shown in Figure 4. The contrast of tissue and air is observably clearer in the Paganin-filtered images, and the impact on the resultant strain field is minimal. The magnitudes and distribution of strain fields are comparable for both the distortion-corrected images and those with additional Paganin filtering. The results in this paper will focus on the filtered images as there are fewer sharp localizations in the strain. Such localizations are potentially caused by noise present within the tissue boundaries, mentioned earlier, which is reduced by filtering. The final consideration for reliability of the data lies in the use of the same sample to acquire and compute multiple deformation steps within a breathing cycle. Degradative factors relating to the biology or the influence of the beam, and how to quantify such potential factors, are considered below.
Repeat scans were taken of each sample at different stages of deformation within a given P-V cycle. Earlier, Figure 4 highlighted the potential errors that may arise in strain measurement towards the boundaries, particularly where large deformations are expected. Here, these errors were minimized (given that the zero-strain test resulted in strains <3% for most of the region of interest). Smaller increments in strain (from image to image) may mitigate errors associated with large deformation experiments. This may not always be possible with highly deformable test samples such as lung tissue and its nonlinear characteristic behaviour. Figure 5 illustrates the drop in correlation coefficient where large jumps in deformation occurred from image to image. The sample is a cadaveric sample, and over time the lung will degrade in mechanical properties. Figure 5 highlights the repeated cycling and number of exposures that this sample observed. After 20 exposures (>6 h after the sample was collected), the compliance started to drop. The degree of control over the lung volume also degraded towards the end of the experiment, and as a result, small adjustments to the mounting of the sample were needed to minimize motion and to maintain clear airflow. The DVC correlation coefficient, highlighted in Figure 5, for the latter scans dropped from >95% for the data presented in Figure 6 to having regions significantly below this threshold for data shown in Figure 7 (again, towards the boundary for large deformations). Therefore, this paper will discuss the key results in the context of the reliable data collected early in the experiment, where compliance was stable and repeatable (Figure 6), and note where caution may be needed when interpreting the data as compliance started to decrease (Figure 7) due to degradation or beam damage.
This reinforces the need for the continuous monitoring of mechanics during imaging used here, which provides context for the resultant imaged microstructure.
Local vs. Global Mechanics Highlighted by DVC and P-V
Figures 6 and 7 both show two types of strain plot for one slice in the 3D volumes alongside P-V curves for each scan (strain map). During imaging, the ventilation cycling was stopped and held at a given pressure; 30-60 s was the dwell time between ventilation and the scan, to enable the air to redistribute and the tissue to settle into position at the target stopping pressure. During the dwell time, the pressure did drop, as expected, due to a potential increase in airway volume as airways stretch and move to accommodate air pressure. This true end pressure is noted as the pressure during the given scan (highlighted in each plot).
The two strain plots in Figure 6 show the detailed strain and the averaged regional strain (with deformation vectors averaged over a larger area). Before each image was taken, a new P-V loop was acquired to ensure consistent reporting of the mechanical integrity and to reach the next desired stopping pressure. The averaged strain plot helps highlight the general trend of the airways stretching just infield from the boundary, pushing outwards in a tensile fashion. Each neighbouring airway is connected by shared membranes, meaning that there is significant interdependence in mechanical behaviour. Regions that are well ventilated impinge on the space of those inflating less, causing tensile strain differentials across the lung volume. The plots show that the strains peak locally around 25% and up to the 60% mark when inflating from 0 cmH2O to 10 cmH2O. Note that the raw output data highlight more local deformations, which are higher/lower than the averaged plots. This is indicative of the local heterogeneous behaviour observed in this heterogeneous airway structure. Figure 7 sees this general trend of a largely deforming airway region inset from the boundary continue. The top image, taken whilst still maintaining consistent compliance with previous scans, peaks its strains around the 80% mark for the 15 cmH2O measurement. However, after this, the compliance drops slightly. The general trends of major deformation locations remained consistent for the second image in Figure 7, with strains reaching around the 100% mark. However, by the apparently more deformed final image, the lung had clearly lost stability and did not maintain a good volume of air; strains drop just below 100%, and the true end pressure dropped more rapidly prior to the scan. This is where the test ended for this sample.
Discussion
Focusing on the strain data from Figure 6 (and part of Figure 7) established as trustworthy, regional differences in how a lung deforms during respiration have been visualized. DVC was performed to characterize local stretch in lung tissue, acquired at several inflation points during a respiratory cycle. It is well known that the local heterogeneities within the lung structure, composition, and airway resistance contribute to the complex asymmetric deformations during breathing. The DVC data show clearly regions inset from the boundary expanding more significantly than more central regions (with respect to the airway source, i.e., trachea, towards the "left" of each strain plot). Given the contrast of air and tissue being tracked to compute the strains, more refined strain maps were not considered here. Finer DVC computations may be more susceptible to noise, given that features smaller than the walls (a few microns wide) are not discernible in the image for tracking. Further study at higher magnifications instead is recommended, as shown in Figure 8. There is always the trade-off of visualizing a larger volume (for the global overview) vs. improved resolution (local detail). However, it was shown in Figure 5 that approximately 20 exposures did not see significant loss in lung mechanics; therefore, there is potential to run multiple magnifications sequentially to map the lung deformations across length scales within a given sample.
The higher-magnification images in Figure 8 enable strain fields approaching the wall dimensions to be produced. These too can be averaged to demonstrate the same global observation of greater stretch towards the lung boundary. Further contrast enhancement of the alveolar walls may be achieved with perfusion of a contrast agent into the microvasculature, and this would enable intra-tissue strains to be potentially evaluated. This experiment demonstrated the ability to reliably track phase contrast between the air-tissue barrier within the intact thorax. Multiple strain states (i.e., degrees of ventilation) are also computed here to enable a range of strains up to 100% to be reliably mapped, compared to previous work with injured lungs showing a single instantaneous strain state in excised lungs [14]. The intact thorax provides a degree of protection from premature beam damage, enabling more strain states to be imaged in the same region (compared to an excised sample). The imaging session with continuous P-V monitoring provided indicators of where reliability in the resulting computed strains may be lost (i.e., when the specimen starts to degrade).

Further extensions of the DVC analysis can include correlating local deformations with total volume changes in the specimens and assessing how closely these relate to the global lung P-V measurements. This could help assess degrees of compressibility/expansion within the tissue and capture absolute distributions of volume across the whole lung. Given that sub-volumes of lung were imaged here, it is not straightforward to perform such a calculation. However, work is ongoing to study smaller lung volumes and larger fields of view optics, where the total lung can be imaged across all states of inflation. Then, a direct comparison of the volume measurements can be made, in addition to links with global compliance and comparative distributions of ventilation volumes from region to region, presented here.
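The extension proposed above, relating local deformations to total volume change, follows from the deformation gradient: the determinant J = det(I + grad u) of each subvolume gives its local volume ratio, and summing (J − 1) times the subvolume volume over the grid approximates the total volume change for comparison against the global P-V measurement. A sketch, assuming the DVC displacement components are resampled onto a regular grid:

```python
import numpy as np

def local_volume_ratio(ux: np.ndarray, uy: np.ndarray, uz: np.ndarray,
                       spacing: float = 1.0) -> np.ndarray:
    """Local volume ratio J = det(F), with F = I + grad(u), from displacement
    components sampled on a regular 3D grid (same length unit as `spacing`)."""
    grads = [np.gradient(u, spacing) for u in (ux, uy, uz)]  # grads[i][j] = du_i/dx_j
    F = np.zeros(ux.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            F[..., i, j] = (i == j) + grads[i][j]
    return np.linalg.det(F)  # J > 1: local expansion; J < 1: compression

# Summing (J - 1) * subvolume_volume over the grid approximates the total
# volume change, to be compared against the global P-V (syringe) measurement.
```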
One compromise of imaging through the thorax is the reduction in signal-noise, which can lead to a loss in image clarity. The postprocessing for distortion corrections and Paganin filtering is shown to draw out the lung architecture clearly in Figure 2. The zero-strain test showed some softening of strain maxima for the Paganin-filtered images compared to the purely distortion-corrected images. This again helped mitigate issues relating to imaging sub-volumes within a thick specimen (approximately 70 mm diameter), where poor signal-noise may be expected, particularly for very short exposures (30 ms). Furthermore, when using phase contrast imaging, there is the risk of generating shadows around features, which can create sharp boundaries of dark and light within the greyscale image. It may seem preferential to create this shadow of dark around the boundary to help with resolving that boundary. However, the detail of the boundary location can be lost in this region, and it may also lead to apparently broken boundaries between the tissue and air. In any postprocessing, such as segmentation or DVC, such artefacts can cause errors and require considerable manual processing to mitigate. There is therefore always a trade-off of resolving the feature by increasing phase contrast whilst minimizing artefact. Paganin filtering enables the contrast to be rebalanced at each boundary to minimize errors in detecting the boundary potentially caused by shadows and light bleeding across the boundary. The filtering assisted with denoising in regions that were featureless (and so contributed little to the DVC computation) and enabled tissue and air to be readily discernible.
The development of this framework to study lung deformations using DVC is extremely powerful. Although the use of such imaging techniques may not be new [12][13][14][15][17][18][19][20][21][22][23], the ability to register reliably the in situ mechanics alongside the images has streamlined its potential use in lung mechanics research. Work is continuously progressing to improve the method, to enable faster imaging to mitigate motion artefact in unfixed samples, for instance. When imaging speeds increase, continuous tomography imaging in highly deformable tissues like the lung will be possible. In Figure 3, the current limits of imaging speed with the current hardware are demonstrated, showing different degrees of motion blur during continuous imaging and in situ loading. Recent works on stiffer viscoelastic solid biomaterials demonstrated the feasibility for continuous synchronous mechanical loading and imaging [44]. With lung tissue, the deformation magnitudes are much greater than the biomaterials in Reference [44], over a shorter time period; therefore, considerable motion blur is observed (as demonstrated in Figure 3). When imaging frequency increases >1-5 kHz (compared to the 100 Hz used here), then continuous loading and imaging in such highly deformable tissues is possible. This would negate the need for interrupted loading cycles (i.e., partial relaxation) and provide a more representative view of the lung deformations present in breathing. An intermediate step to achieve this goal of continuous monitoring is to link surface mapping techniques such as digital image correlation with volume methods. Our team has extensive experience with these surface strain measurement techniques [45][46][47][48][49][50][51][52], and recently, groups have published studies exploiting the technique on lungs during mechanical ventilation [53]. Coupling and correlating imaging modalities can lead to more complete overviews of the mechanics and function without the need to compromise on the in situ loading regime. This article however demonstrated the first steps to achieving such results within an intact thorax.
The potential to study pathological lungs to derive an understanding of the local mechanics, in the context of an overly stiff lung for instance, is extremely useful. As previously discussed in [14], when considering patient management strategies, it is a challenge to mitigate further damage or collapse of airways. However, the detailed strain maps shown here, coupled with global measures, will enable detailed validation of computational simulations of patient management strategies. Relating global pressure loading (P-V behaviour, or ventilator strategy, for instance) to its impact at the alveolar scale will enable the refinement of multiscale simulations in lung mechanics and assist in the development of protective lung management strategies. The next phase of this work will study healthy and pathological lungs in a larger-scale study to enable more quantitative parameters to be derived for modelling purposes, now that the experimental technique has been established here.
Conclusions
A methodology for in situ mechanical ventilation of rodent lungs with SR-micro-CT was presented. Multiple facets of the experimental technique, and the key considerations required when establishing such experiments, were assessed. The study showed the local variability in strain fields observed within the lung tissue during a breathing cycle. The distinct patterns of deformation highlighted the contrast in the degree of tensile strain between peripheral and central regions of the tissue. Local heterogeneous structure and its resultant mechanics were captured. The technique advanced on previous methods in terms of real-time control and monitoring during imaging, providing both improved confidence and context to the measurements.
Institutional Review Board Statement: Ethical review and approval were waived for this study because no live animals or human subjects were tested, and the materials used in this research were waste materials.
Informed Consent Statement: Not applicable.
Data Availability Statement: Data available via www.aroraresearch.com.
|
2021-01-23T06:16:24.683Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "e4bcc3e60f3bad56fb414261956c8f439fcb8980",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/14/2/439/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5c8af5bf77316781245310359de5c104971d80ec",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
222006744
|
pes2o/s2orc
|
v3-fos-license
|
Polariton-assisted donor-acceptor role reversal in resonant energy transfer between organic dyes strongly coupled to electromagnetic modes of a tuneable microcavity
Resonant interaction between excitonic transitions of molecules and a localized electromagnetic field allows the formation of hybrid light-matter polaritonic states. This hybridization of the light and the matter states has been shown to be able to significantly alter the intrinsic properties of molecular ensembles placed inside an optical cavity. Here, we have achieved strong coupling between the excitonic transitions in typical oligonucleotide-based molecular beacons labelled with a pair of organic dye molecules, demonstrating an efficient donor-to-acceptor resonance energy transfer, and the tuneable open-access cavity mode. The photoluminescence of this hybrid system under non-resonant laser excitation and the dependence of the relative population of light-matter hybrid states on cavity detuning have been characterized. Furthermore, by analysing the relaxation pathways between energy states in this system, we have demonstrated that predominant strong coupling of the cavity photon to the exciton transition in the donor dye molecule can lead to such a large energy shift that energy transfer from the acceptor exciton reservoir to the mainly-donor lower polaritonic state can be achieved, thus yielding the chromophores' donor-acceptor role reversal or "carnival effect". Our experimental data confirm the theoretically predicted possibility for confined electromagnetic fields to control and mediate polariton-assisted remote energy transfer, thus paving the way to new approaches to remote-controlled chemistry, energy harvesting, energy transfer and sensing.
Introduction
Strong light-matter coupling is a quantum electrodynamics phenomenon that takes place when the rate of resonant energy exchange (i.e., coupling strength) between the exciton transition in matter and the resonant localized electromagnetic field is higher than the competing decay and decoherence processes. Light-matter coupling then leads to the formation of two new "hybrid" light-matter (polaritonic) states with different energies, instead of the two original molecular and electromagnetic field energy states. Once the strong coupling regime is reached, the coupled system exhibits new properties possessed by neither the molecules nor the cavity [1]. Hence, by controlling the coupling strength, it is possible to modulate (or even control) various properties of the system, including the eigenenergy, excited-state lifetime, efficiency and effective distance of energy transfer, conductivity, etc. [2]. This paves the way to a wide variety of breakthrough practical applications, such as modification of chemical reactivity [3], enhanced conductivity [4], development of low-threshold sources of coherent emission [5], and even polariton simulators and logic [6,7].
Recently it has been demonstrated that strong coupling can modulate both distance and efficiency of Förster resonance energy transfer (FRET) [8,9]. FRET is a process of nonradiative energy transfer from one fluorophore (donor) to another one (acceptor). The FRET effect only occurs when several conditions are satisfied: (i) the donor emission spectrum should overlap with the acceptor absorption spectrum; (ii) the donor and acceptor fluorophores should be in a favourable mutual orientation, and, (iii) since the FRET efficiency is inversely proportional to the sixth power of the distance between the fluorophores, the distance between the donor and the acceptor should not exceed the Förster limit (10 nm).
When these conditions are satisfied, the FRET effect results in a decreased donor fluorescence emission accompanied by a simultaneously increased acceptor fluorescence emission.
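As a concrete illustration of condition (iii), the standard distance dependence E = 1/[1 + (r/R0)^6] can be evaluated directly. In the sketch below, the Förster radius is an assumed, commonly quoted order-of-magnitude value for a FAM-TAMRA-like pair, not a measured parameter of this work.

```python
import numpy as np

def fret_efficiency(r_nm, r0_nm=5.0):
    """Standard Foerster distance dependence, E = 1 / (1 + (r/R0)^6).

    r0_nm: Foerster radius; ~5 nm is a commonly quoted value for
    FAM-TAMRA pairs, used here purely as an illustrative assumption.
    """
    return 1.0 / (1.0 + (np.asarray(r_nm, dtype=float) / r0_nm) ** 6)

# At the ~2 nm donor-acceptor separation set by the DNA double helix,
# transfer is nearly complete; at the ~10 nm Foerster limit it is marginal.
for r in (2.0, 5.0, 10.0):
    print(f"r = {r:4.1f} nm  ->  E = {fret_efficiency(r):.3f}")
# r = 2 nm -> E ~ 0.996; r = R0 -> E = 0.5; r = 10 nm -> E ~ 0.015
```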
Under the strong coupling regime, both donor and acceptor excitonic states can be coupled to the same microcavity optical mode, which may act as a mediator. This mediation can make it possible not only to increase the effective distance of the energy transfer to values ten times larger than the Förster limit (to more than 100 nm) [9], but also to ensure an up to sevenfold increase in the rate of energy transfer [8]. This increase in the energy transfer rate leads to a significant decrease in donor fluorescence in the presence of the acceptor. As a result, the energy transfer efficiency, which is characterized by the ratio between the intensities of the donor fluorescence in the presence and absence of the acceptor, will also be significantly increased. It has been reported recently that under the strong coupling regime the energy transfer efficiency may be increased from 0.55 to 0.90 [8].
The possibility of increasing and controlling the FRET efficiency is promising for the development of many photonic applications, specifically for biomedical research. In this regard, among the most powerful photonic nanotools are oligonucleotide-based molecular beacons, which are used in biosensing and specific gene identification [10][11][12][13][14][15], RNA imaging [14,[16][17][18], revealing nucleic acid mutations [19], monitoring gene expression [20] and protein-protein interactions [21], nanomedicine [14,22], cell-surface glycosylation imaging [23], etc. All these applications are based on the same principle: a specifically designed molecular beacon is a closed hairpin oligonucleotide, with the donor and acceptor dye molecules conjugated to its ends and located in close vicinity to each other; hence, a strong donor-to-acceptor FRET occurs (Figure S1 in the SI). Due to this strong FRET effect, the donor fluorescence is completely quenched and, in this respect, the beacon is invisible. When the beacon oligonucleotide binds to its complementary oligonucleotide target, its hairpin structure opens, the distance between the donor and acceptor increases, and this binding event is reported by the appearance of the originally quenched donor fluorescence signal. The possibility of controlling the FRET efficiency within the molecular beacon, making the originally invisible molecular beacons visible "on demand", may extend their applications.
One promising way to control the FRET efficiency is to use light-matter interaction in various regimes [8,24].
Polariton-assisted energy transfer between spatially separated molecules has been extensively studied in various configurations [8,9,25]. In Ref. [25], hybrid polaritonic states have been demonstrated to be an efficient energy transfer pathway between two spatially separated J-aggregates with an initially negligible direct energy transfer via dipole-dipole coupling. However, for many practical applications that we have mentioned above, it is even more intriguing to have a way to alter the energy relaxation in a mixed medium where the donor and acceptor molecules are located in close vicinity to each other and the FRET effect is mediated by direct dipole-dipole coupling between them. In Ref. [26], hybridization of the light and the matter states in a microcavity filled with a blend of two BODIPY fluorescent dyes with similar properties has been investigated. Here, the electromagnetic microcavity modes of light have been found to be coupled, to the same extent, with both donor and acceptor exciton transitions. It has been shown that, in this system, direct dipole-dipole coupling is more efficient than the energy transfer via strong coupling. Interestingly, strong coupling to only one of the two excitonic states of the system has also been shown to be promising if control over the relaxation pathways in FRET systems is to be obtained. In Ref. [27], the authors have theoretically demonstrated that, whereas exclusive strong coupling of the cavity photon to the donor states can enhance the energy transfer to the acceptors, the reverse is not true. On the other hand, it has been shown that sufficiently strong coupling exclusively to the acceptor can modify the energy levels of the system in such a way that transfer from the acceptor to the donor states mediated by polariton states can occur, leading to the chromophore role reversal or "carnival effect" [27]. It is interesting to investigate the possibility of combining these effects by means of strong coupling of exclusively the donor state to the cavity photon with a large coupling strength. This should lead to the formation of a polariton state with a relatively large fraction of the donor exciton and a small fraction of the acceptor exciton, with an energy lower than that of the original acceptor state. However, such a modification will require very strong coupling to ensure significant alteration of the energy levels.
To date, most FRET studies using the strong coupling regime have employed only simple Fabry-Perot microcavities, which have relatively large mode volumes and, hence, rather moderate light-matter coupling strengths; this has significantly limited the experimental observation of the theoretically described effects [27]. Recently, we have engineered a tuneable microcavity with a lateral mode localization characterized by a drastically decreased mode volume, which allows obtaining a considerably larger coupling strength [28], and we have employed this microcavity in the present study. In order to ensure sufficiently strong coupling of the microcavity photon modes exclusively to the molecular beacon's donor state, we used the 6-carboxyfluorescein (FAM) dye, which has a large dipole moment and a high quantum yield, as the donor, and the rhodamine derivative carboxytetramethylrhodamine (TAMRA), with a much lower dipole moment and quantum yield, as the molecular beacon's acceptor (see the Sample preparation section and the Supporting Information (SI) for details). The distance between these donor and acceptor dye molecules conjugated with the opposite termini of the molecular beacon was determined by the diameter of the DNA double helix (about 2 nm), which was short enough to ensure efficient direct dipole-dipole coupling between them (see Figure S1 in the SI). This system made it possible to investigate polariton-assisted energy transfer in a tuneable open-access microcavity containing oligonucleotide-based molecular beacons with a donor-acceptor pair of closely located FAM and TAMRA organic dyes exhibiting efficient FRET outside the microcavity.
In this study, we have measured the photoluminescence (PL) properties of the molecular beacon solution in a microcavity and have analysed the dependence of the polaritonic state population on the detuning of the optical microcavity mode. We have used the Jaynes-Cummings model to calculate the eigenfunctions of the strongly coupled three-component system and have estimated the mixing of the exciton and photon fractions in the hybrid states. We have also estimated the possibility of changing the relaxation pathways by varying the degree of exciton-photon mixing in the polaritonic states.
Furthermore, we have explored the particular situation when the donor is much more strongly coupled with the optical mode than the acceptor. To the best of our knowledge, we are the first to experimentally demonstrate the possibility of reversing the donor and acceptor roles ("carnival effect") within a donor-acceptor pair of organic dyes, a possibility that has been theoretically investigated by the group of Joel Yuen-Zhou [29].
Results and Discussion
In order to investigate the feasibility of controlling the resonant energy transfer in a donor-acceptor pair of closely located organic dyes, we have employed a tuneable microcavity with a relatively small mode volume previously developed in our group [30,31].
Briefly, the tuneable microcavity unit was composed of plane and convex mirrors that form an unstable λ/2 Fabry-Perot microcavity (Figure 1a; see the SI for details). The upper mirror was made convex in order to satisfy the plane-parallelism condition at one point, thus minimizing the mode volume. The plane bottom mirror was mounted on top of a Z-piezo positioner, which provided fine tuning of the microcavity length in a range of up to 10 µm with nanometre precision. The plane-convex design of the tuneable microcavity is characterized by rather high quality factors (up to several hundred), whereas the mode volumes can be as low as tens of cubic wavelengths, thus combining the advantages of both optical and plasmonic cavities [31]. In this study, the quality factor of the microcavity mode was about 35 and the mode volume was about 15 cubic wavelengths for all the detunings used. Previously, we have demonstrated the advantages of the developed tuneable microcavity, such as a controllable distance between the mirrors with nanometre accuracy and a small mode volume, which result in much higher Rabi splitting energies compared to standard optical microcavities [28]. In particular, we have demonstrated strong coupling of an ensemble of Rhodamine 6G molecules with a Rabi splitting as high as 225 meV at room temperature, which had previously been shown only for surface plasmon-polaritons. A drawback of the tuneable setup developed is that the effect of strong coupling cannot simply be observed in the transmission spectra, because the concentration of the molecules inside the cavity is relatively low and the number of photons from the white LED used for the transmission measurements considerably exceeds the number of generated excitons. However, the microcavity we developed, combining the advantages of an optical microcavity (low energy dissipation) and a plasmonic cavity (small mode volume), significantly increases the number of materials suitable for operation in the strong coupling regime, thus paving the way to plenty of new practical applications. In this study, we employed this set-up to achieve a high Rabi splitting energy of hybrid states formed by the exciton transitions in donor-acceptor pairs of organic dyes exhibiting the FRET effect and the localized resonant electromagnetic field. FAM is a well-known organic dye with a quantum yield as high as 97% [33], with its main exciton transition characterized by a relatively large transition dipole moment ranging from 7 to 12 D [33] and a main emission peak at about 2.36 eV. The TAMRA dye has a much lower quantum yield of about 22% [34], as well as lower values of the transition dipole moment [35]. The PL emission maximum of the TAMRA dye solution is at about 2.13 eV. However, it is noteworthy that the transition dipole moments and PL quantum yields of the dyes may change upon their conjugation with the oligonucleotide due to the possible appearance of new intermolecular interactions in the conjugated samples. Figure 1b shows that the PL spectrum of the FAM-TAMRA donor-acceptor pair excited at 450 nm could be obtained by linear superposition of the PL spectra of these dyes measured separately, which indicates the absence of direct ground-state interaction between these dye molecules in the molecular beacon. Despite the low quantum yield of TAMRA, its PL emission was stronger than that from FAM in the case of the donor-acceptor pair operating in the FRET regime.
The efficiency of the resonance energy transfer from FAM to TAMRA molecules was estimated to be about 80% (see the SI for details).
We further analysed the PL spectra of the oligonucleotide-conjugated dyes of the molecular beacon, both individually and in the form of a donor-acceptor pair operating in the FRET regime, placed into the tuneable microcavity at different cavity detunings (Figure 2). The cavity mode tuning was performed by changing the distance between the microcavity mirrors from 735 to 945 nm in 15-nm steps. The corresponding PL emission spectra are shown in Figures 2a-2c. It is worth mentioning that the specific properties of our microcavity allowed us to detect the PL emission from molecules weakly coupled to the transverse modes of both the lowest and higher orders of the cavity (see the SI for details). This PL is associated with an enhanced emission peak that follows the cavity mode and is slightly blue-shifted relative to the cavity photon spectral position corresponding to the lowest-order transverse mode of the cavity.
Qualitatively, this blue shift can be understood considering the difference in signal collection efficiency between the transmission and PL measurements (see the SI for details). The resulting emission corresponding to this peak is satisfactorily fitted in the weak coupling approximation. Figure 2d presents the experimentally measured dependence of the spectral position of the emission maximum associated with the weakly coupled molecules on the cavity mode detuning extracted from Figure 2c and the corresponding model calculations. It can be seen that the calculated dependence is in good agreement with the experimental data, the difference not exceeding the full-width-at-half-maximum (FWHM) of the cavity mode. It is also important to note that the large shift of the emission peak from the cavity mode in the region of low mode energies was due to the poor spectral overlap of the dye PL spectra with the lowest-order transverse cavity mode. Similar PL emissions from weakly coupled molecules were observed for the FAM and TAMRA dyes separately (Figures 2a, 2b). It is noteworthy that, due to the non-resonant excitation through the lower mirror of the microcavity, the power of excitation inside the cavity depended on the cavity detuning.
Although the intensities of the measured PL spectra presented in Figure 2 are not calibrated against the excitation power, the necessary corrections have been made in the analysis presented below.
The molecular beacons labelled with the TAMRA dye alone and placed into the microcavity exhibited only the emission from weakly coupled molecules (Figure 2a), which can be explained by the higher losses and lower transition dipole moments of the rhodamine derivative, and the resultant low coupling strength [35]. However, it is noteworthy that the change in the emission intensity from the weakly coupled states of TAMRA was much larger than that for the molecular beacon labelled with the FAM dye alone. Indeed, the Purcell PL intensity enhancement for emitters with a lower quantum yield is known to be stronger than that for emitters with a higher quantum yield, because the Purcell effect changes only the radiative relaxation rate [36].
The PL of molecular beacons labelled with the FAM dye alone and placed into the microcavity is shown in Figure 2b. For the donor-acceptor pair, at positive detunings we observed a PL peak determined by emission from the lower polariton branch (LPB), as in the case of the cavity containing molecular beacons labelled with the donor dye alone; this peak was red-shifted relative to the emission spectrum of the uncoupled dye molecules. Similarly to the previous cases, we were unable to detect any emission attributable to the upper polariton branch (UPB), owing to the typically fast energy relaxation from the upper polariton to the donor exciton reservoir. Thus, we omit the UPB from most of the discussion.
Nevertheless, the presence of strong coupling was clearly evidenced by anticrossing of the LPB at the cavity detunings where the cavity photon mode and donor excitons would otherwise be degenerate. Considering the excitonic constituents as independent harmonic oscillators, we assumed that our cavity could couple together three oscillators: FAM, TAMRA, and the cavity mode. Therefore, we characterized the observed dispersion by three polariton branches.
In the case of a negatively detuned cavity, we observed an emission peak that could be attributed to the middle polariton branch (MPB). However, due to the pronounced emission from weakly coupled states and large broadening of the emission spectra, this part of the emission spectrum can hardly be used to quantitatively analyse the spectral position of the MPB.
The dependence of the polariton branches on the cavity photon energy presented in Figure 3 confirms that strong coupling occurred in this case. Experimental data were derived from the peak energies in the PL spectra for different distances between the cavity mirrors shown in Figure 2c. In order to obtain a satisfactory fit, we used the Jaynes-Cummings model (see the SI for details). In order to determine the Hopfield coefficients, which describe the donor exciton, acceptor exciton, and photon mixing for the given coupling strengths, we further calculated the eigenfunctions of the Jaynes-Cummings Hamiltonian and represented each of them as a superposition of the initial pure photon and exciton states (see the SI for details). Figure 4 shows the Hopfield coefficients for each polariton branch and their dependences on the cavity detuning. In the analysed PL spectral region, the polariton branches displayed quite different behaviours. It can be seen from Figure 4 that the upper and the lower polaritons mainly consist of the donor exciton and photon fractions, whose ratio is reversed for both branches upon cavity tuning. On the other hand, for the MPB, the contribution from the acceptor exciton strongly dominates the other components. However, this contribution decreases with an increase in the cavity mode energy in the same way as the relative contribution of the acceptor exciton increases significantly in the LPB. Such a redistribution of fractions in the polariton branches is in accordance with the exciton-photon coupling strengths, which differ significantly for the donor and acceptor excitons, as we have mentioned above.
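Since, in the single-excitation sector, this fitting reduces to diagonalizing a 3 × 3 matrix, a minimal numpy sketch is given below. The coupling strengths are those quoted in the SI fit (g_D = 435 meV, g_A = 41 meV), while the bare exciton energies are illustrative values taken near the emission maxima quoted earlier; the energies actually used in the fit were extracted from the experimental data.

```python
import numpy as np

# Bare energies (eV): donor and acceptor exciton transitions, taken near
# the emission maxima quoted in the text (illustrative assumption).
E_D, E_A = 2.36, 2.13
g_D, g_A = 0.435, 0.041     # coupling strengths from the SI fit (eV)

def polariton_branches(E_cav):
    """Diagonalize the single-excitation three-oscillator Hamiltonian.

    Returns eigenenergies (sorted: LPB, MPB, UPB) and Hopfield fractions
    |c|^2 for each branch in the (photon, donor, acceptor) basis.
    """
    H = np.array([[E_cav, g_D, g_A],
                  [g_D,  E_D,  0.0],
                  [g_A,  0.0,  E_A]])
    vals, vecs = np.linalg.eigh(H)        # eigh sorts eigenvalues ascending
    hopfield = np.abs(vecs) ** 2          # columns correspond to branches
    return vals, hopfield

for E_cav in (2.0, 2.36, 2.7):            # negative, zero, positive detuning
    vals, hop = polariton_branches(E_cav)
    print(f"E_cav = {E_cav:.2f} eV -> LPB/MPB/UPB = "
          + ", ".join(f"{v:.3f}" for v in vals))
    print("  LPB fractions (photon, donor, acceptor):",
          np.round(hop[:, 0], 3))
```

Sweeping E_cav reproduces the qualitative behaviour described above: the LPB photon and donor fractions exchange roles with detuning, while the acceptor fraction of the LPB grows as the cavity energy increases.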
In order to quantitatively correlate the detuning dependence of the polariton population with the PL intensity measured experimentally, one needs to introduce corrections arising from variations of both the photon fraction and the excitation intensity [25,26]. The finite element method was used to calculate the dependence of the excitation field intensity inside the cavity on the detuning (see the SI for details). In order to make the necessary corrections, we divided the measured PL intensity of the LPB by the integral intensity of nonresonant excitation at 450 nm over the volume of the microcavity in the region of cavity detuning. Finally, the relative polariton population (Figure 5) was obtained using the following equation [25]:

$$N_{\mathrm{LPB}} \propto \frac{I_{\mathrm{LPB}}}{|\beta_{\mathrm{LPB}}|^2}, \quad (1)$$

where $I_{\mathrm{LPB}}$ is the experimentally observed intensity of PL from the LPB corrected for the excitation intensity, and $|\beta_{\mathrm{LPB}}|^2$ is the Hopfield coefficient for the LPB defining the fraction of the cavity photon. The intensity of emission from the polariton state depends linearly on the cavity photon fraction.
As can be seen from Figure 5, the relative population of the LPB varies markedly with detuning. Now, we will analyse the population and depopulation mechanisms for the LPB using an approach described in detail in Ref. [25]. The simplified consideration is based on the assumptions of fast relaxation from the UPB to the donor excitonic reservoir (which typically occurs on the femtosecond timescale [37]) and efficient FRET of most of the energy to the acceptor reservoir. Therefore, the number of states in the acceptor excitonic reservoir can be considered constant. In principle, there are three different mechanisms determining the LPB population: scattering with vibrations from the excitonic reservoirs of the donor and of the acceptor, and direct radiative pumping. The efficiencies of the first two mechanisms strongly depend on the corresponding exciton fraction in the LPB and should change with the detuning. The radiative pumping mechanism is direct absorption of the photon emitted by the weakly coupled exciton transitions, which is accounted for by the photonic fraction in the LPB.
However, it is almost negligible for cavities with a short cavity photon lifetime that contain an ensemble of molecules with a low optical density. The depopulation of the LPB occurs via radiative and non-radiative relaxations, which depend on the photon and exciton lifetimes, respectively. Thus, the mean polariton population $N_{\mathrm{LPB}}$ in the steady state can be described by the following equation:

$$N_{\mathrm{LPB}} = \frac{a_A |\alpha_A|^2\, n(E_A - E_{\mathrm{LPB}}) + a_D |\alpha_D|^2\, n(E_D - E_{\mathrm{LPB}}) + a_R |\beta|^2}{\gamma_r |\beta|^2 + \gamma_{nr}\left(|\alpha_A|^2 + |\alpha_D|^2\right)}, \quad (2)$$

where $a_A$, $a_D$ and $a_R$ are the proportionality constants for the terms corresponding to the LPB population through vibration scattering from the acceptor and donor reservoirs and direct radiative pumping, respectively; $\gamma_r$ and $\gamma_{nr}$ are the proportionality constants for the terms describing the depopulation via radiative ($\gamma_r$) and non-radiative ($\gamma_{nr}$) relaxations; and $|\alpha_A|^2$, $|\alpha_D|^2$ and $|\beta|^2$ are the acceptor exciton, donor exciton and photon fractions of the LPB. The first two terms describe the processes accompanied by the emission of molecular vibrations, which depend on the Bose-Einstein distribution $n(E_{exc} - E_{\mathrm{LPB}}) = \left[\exp\!\left((E_{exc} - E_{\mathrm{LPB}})/k_B T\right) - 1\right]^{-1}$, where $E_{exc}$ is the bare exciton energy, $E_{\mathrm{LPB}}$ is the energy of the LPB, $T$ is the temperature in kelvins, and $k_B$ is the Boltzmann constant. For the approximation of the population of the LPB in the steady state (Figure 5), we used the following equation, which can easily be obtained from Equation (2) under the reasonable assumptions described below:

$$N_{\mathrm{LPB}} = \frac{a_A |\alpha_A|^2\, n(E_A - E_{\mathrm{LPB}}) + a_D |\alpha_D|^2\, n(E_D - E_{\mathrm{LPB}})}{\gamma_r |\beta|^2}. \quad (3)$$

To derive Equation (3), we first assumed that the radiative pumping of the LPB is negligible compared to the vibrational scattering, owing to the low optical density of the medium inside the cavity and the low cavity Q-factor. Second, the non-radiative relaxation of the LPB to the exciton reservoirs strongly depends on the local vibrational environment and is assumed to be negligible compared to the radiative decay mechanism [25,38], which is determined by the cavity photon lifetime (less than 10 fs). Thus, radiative decay through the photonic fraction becomes the prevailing depopulation mechanism, so that the denominator of Equation (2) reduces to $\gamma_r |\beta|^2$. Finally, our experimental conditions should be taken into consideration. In order to obtain the best fit, we minimized the standard deviation by varying the ratio between the parameters $a_A$ and $a_D$. The best fit obtained with the use of this model and the experimentally observed relative population of the LPB are shown in Figure 5. It is important to note that the best fit was obtained with $a_D$ tending to zero. This corresponds to the low efficiency of the LPB population caused by scattering from the donor exciton reservoir, which may have been due to the relatively low rate of this process compared to the depopulation of the donor excitonic reservoir through FRET to the bare acceptor states. This mechanism was previously shown to be dominant over polariton-assisted energy transfer in mixed donor-acceptor ensembles [26]. Thus, the energy relaxation pathways in our system can be described as follows (Figure 6). The non-radiatively pumped population of the UPB rapidly decays to the donor exciton reservoir [37]. Then, due to the short distance between the donor and acceptor dye molecules in the molecular beacon, direct dipole-dipole FRET occurs with an efficiency close to unity.
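A minimal numerical sketch of the Equation (3)-style model follows. The rate constants and bare energies are illustrative placeholders (the actual fit varied the ratio a_A/a_D against the measured populations), and the Bose-Einstein factor is written exactly as defined above.

```python
import numpy as np

KB_T = 0.0257                      # eV at room temperature
E_D, E_A = 2.36, 2.13              # bare exciton energies (illustrative)
g_D, g_A = 0.435, 0.041            # coupling strengths from the SI fit (eV)
gamma_r, a_A, a_D = 1.0, 1.0, 0.0  # fit constants; a_D -> 0 gave the best fit

def bose(delta_e):
    """Bose-Einstein occupation of a vibrational quantum of energy delta_e."""
    return 1.0 / (np.exp(delta_e / KB_T) - 1.0)

def lpb_population(E_cav):
    """Steady-state relative LPB population: vibration-assisted scattering
    from the acceptor and donor reservoirs, balanced by radiative decay
    through the photon fraction (the Eq. (3) approximation)."""
    H = np.array([[E_cav, g_D, g_A],
                  [g_D,  E_D,  0.0],
                  [g_A,  0.0,  E_A]])
    vals, vecs = np.linalg.eigh(H)
    E_lpb = vals[0]                              # lowest branch
    beta2, aD2, aA2 = np.abs(vecs[:, 0]) ** 2    # photon, donor, acceptor
    pump = a_A * aA2 * bose(E_A - E_lpb) + a_D * aD2 * bose(E_D - E_lpb)
    return pump / (gamma_r * beta2)

for E_cav in np.linspace(2.0, 2.6, 4):
    print(f"E_cav = {E_cav:.2f} eV -> N_LPB (arb.) = {lpb_population(E_cav):.3e}")
```

The exponential sensitivity of the Bose-Einstein factor to the gap between the acceptor reservoir and the LPB is what makes the fitted population depend so strongly on detuning.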
The FRET efficiency for the molecular beacon placed into the microcavity was found to be increased compared to that for the molecular beacon outside the cavity (about 80%), because we did not observe any emission from the bare donor states at negative detuning, in contrast to the donor-only case. This may have been due to the decreased rate of radiative relaxation of the bare states at negative detunings, leading to the increase in FRET efficiency. Most interestingly, once the energy had been transferred to the acceptor excitonic reservoir, it started to populate the LPB, which is mostly a mixture of the donor and cavity photon fractions due to the much higher coupling strength between the donor and the cavity photon.
Thus, vibration scattering from the acceptor reservoir was shown to be the main population mechanism of the lower polariton state, whose donor exciton fraction exceeds the acceptor one. It was demonstrated previously that a small absolute value of a specific exciton fraction in a polariton branch still allows energy transfer with the corresponding excitonic reservoir [25]. In our experiments, the population of the LPB depended on the relative variation of the small fraction of the acceptor exciton in the polariton state, despite the considerably higher absolute value of the Hopfield coefficient corresponding to the donor exciton. Finally, we can state that we have engineered a strongly coupled system with donor-acceptor role reversal or the "carnival effect" [27]. Indeed, we have developed a system with dominant coupling between the donor and the cavity mode, leading to the formation of a donor-like polariton state with the lowest energy in the system. This allowed energy transfer first from the donor exciton reservoir to the acceptor exciton reservoir via standard FRET, and then from the acceptor reservoir to the donor-like lower polariton state.
Conclusions
We have investigated strong coupling between the optical modes of a tuneable microcavity and the excitonic transitions of two closely located organic dye molecule labels of an oligonucleotide-based molecular beacon. The anticrossing dependence of the emission spectra of the donor-acceptor dye pair operating in the FRET regime on the detuning of the microcavity mode has been demonstrated by varying the distance between the cavity mirrors.
We have estimated the dependence of the polaritonic state population on the detuning and photon-exciton mixing, which has been calculated by fitting the experimental results with the three-level Jaynes-Cummings model and rate equations. In addition to the efficient FRET between the bare exciton states, significant alteration of the relaxation pathways by changing the photon and exciton mixing in the lower polariton state has been demonstrated.
We have confirmed that, in our system of molecular beacons labelled with organic dyes and located inside a tuneable open-access microcavity, the resonance energy transfer via direct FRET remains the dominant process despite the strong coupling of the dye excitons to the cavity mode, as has been previously demonstrated [26]. However, we have shown that the PL from the lower polariton state, which mainly consists of donor exciton and photon fractions, is populated predominantly through vibration-assisted scattering from the acceptor exciton reservoir, constituting a reversal of the donor and acceptor roles.
Supporting Information
The Supporting Information is available from the Wiley Online Library or from the author. It includes: detailed descriptions of the properties of the molecular beacon samples, tuneable microcavity setup, and PL/Transmission collection system; calculations of the pumping intensity dependence on the cavity detuning; and description of the Jaynes-Cummings model.
S1 Materials
6-Carboxyfluorescein (FAM) was selected as the donor molecule and carboxytetramethylrhodamine (TAMRA) as the acceptor molecule. The most common way to fix the donor-acceptor distance required for efficient FRET is to use oligonucleotide-based molecular beacons, since in this case the distance between the donor and the acceptor is of the order of the diameter of the DNA double helix, which is 2 nm. In this study, the donor and the acceptor were conjugated with the self-complementary oligonucleotide 5'-TGG AGC GTG GGG ACG GCA AGC AGC GAA CTC AGT ACA ACA TGC CGT CCC CAC GCT CCA-3'. Donor-only and acceptor-only labelled hairpins were also obtained and studied as controls (Figure S1). The 57-nucleotide sequence with an 18-base-pair stem was selected to ensure hairpin stability and a small distance between the donor and the acceptor. The absorption and PL spectra of the samples were characterized (Figure S2); different excitation wavelengths were used depending on the absorption spectrum of each compound. In order to estimate the FRET efficiency via dipole-dipole interaction, solutions containing beacons labelled with FAM, TAMRA, and the FRET pair were alternately placed on the lower mirror of the microcavity without the upper one and excited non-resonantly with a 450-nm laser. In this way, we ensured the same experimental conditions as in the experiments with the medium placed inside the cavity. The measured PL spectra are shown in Figure S3. These spectra show that the FAM-only labelled hairpin exhibits high fluorescence intensity with a peak at 525 nm (probably owing to the high quantum yield of FAM, 97%), while its TAMRA-only counterpart shows low intensity. These results correspond to the higher excitation efficiency of FAM at 450 nm (~20% of its maximum at 495 nm) compared to TAMRA (~2% of its maximum at 546 nm). The hairpin labelled with both the donor and the acceptor shows a significant decrease in donor intensity and a simultaneous increase in acceptor intensity, a result that indicates energy transfer from the donor to the acceptor. From these data, the FRET efficiency can be estimated using the following expression [s1,s2]:

$$E = 1 - \frac{F_{DA}}{F_{D}},$$
where $E$ is the efficiency of FRET, and $F_{DA}$ and $F_{D}$ are the donor fluorescence intensities in the presence and absence of the acceptor, respectively. These spectra lead to an estimated FRET efficiency of 80%.
S2 Experimental setup
The experimental setup is shown in Figure S4. In order to enter the strong coupling regime, a tunable microcavity, originally introduced in [s3], was used. Briefly, our versatile tunable microcavity cell (VTMC) [s4] is composed of plane and convex mirrors that form an unstable λ/2 Fabry-Perot microcavity. One mirror is made convex in order to satisfy the plane-parallelism condition and minimize the mode volume. The plane mirror is mounted on top of a Z-piezo-positioner to provide fine tuning of the microcavity length in a range of up to 10 µm with nanometre precision, while the landing procedure is carried out by the high-precision differential micrometer DRV3 (Thorlabs), which is indirectly connected to the convex mirror. The alignment of the plane-parallelism point with a sample is made by moving the convex mirror in the lateral direction by means of the XY precision positioner. A sample is deposited directly onto the plane mirror, which consists of a standard (18x18 mm) glass coverslip with a ca. 35-nm layer of aluminium metallization (Al) on its upper side. The VTMC is mounted onto an inverted confocal microspectrometer consisting of an Ntegra base (NT-MDT) with a 100X/0.80 MPLAPON lens (Olympus) on a Z-piezo-positioner, an XY scanning piezo-stage and a homemade confocal unit. The fluorescence spectra of each sample were excited by a 2.1 W 450-nm laser (L450P1600MM (Thorlabs) with an LDS5-EC (Thorlabs) power supply), while for the transmission spectra the MCWHF2 white LED (Thorlabs) with a homemade optical condenser was used. It should be noted that in the experiments the laser power was far from saturation. The registration system includes an Andor Shamrock 750 monochromator equipped with an Andor DU971P-BV CCD (Andor Technology Ltd) and two 488-nm RazorEdge® ultrasteep long-pass edge filters (Semrock).
S3 Accounting for changes in pumping intensity
For correct calculation of the lower polariton branch population, it was necessary to account for changes in the pumping intensity during tuning of the cavity length. When the exciting field is in resonance with one of the cavity eigenmodes, a significant rise in the field intensity inside the cavity appears. To account for this effect, we used a numerical model [s5] implemented for calculating the spectral and spatial properties of the microcavity electromagnetic modes with the finite element method [s6]. It should be noted that higher transverse modes of the microcavity were also taken into account. This was necessary because, during the excitation process, the light of the pumping laser was focused by an objective lens with a rather high numerical aperture (NA = 0.95), so that the pumping radiation excited the higher transverse modes (Figure S5a). In the transmission experiments, the illuminating light had an approximately planar wavefront (Figure S5b), so higher transverse modes do not appear in the transmission spectra (Figure S6a). Using the developed model, the spectral distribution of the electromagnetic energy was calculated for the experimental set of cavity lengths. Then, for each spectrum in this set, the point corresponding to the excitation laser frequency was picked. These values were used as the pumping intensity at each cavity length (Figure S6b).
Figure S6. Panel (a) shows the calculated spectrum of electromagnetic energy inside the microcavity with (red) and without (blue) mode selection. The grey area represents the corresponding experimental transmission spectrum. Panel (b) shows the dependence of the pumping intensity on the cavity mode frequency.
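The calculation above used a finite-element model including higher transverse modes. As a much cruder illustration of why the pump intensity depends on cavity length, a 1-D Fabry-Perot estimate can be sketched; the mirror reflectivities below are assumed values, and mirror phase shifts, absorption and transverse mode structure are all ignored.

```python
import numpy as np

LAM_PUMP = 450e-9            # pump wavelength (m)
R1 = R2 = 0.90               # mirror intensity reflectivities -- assumed
T1 = 1.0 - R1                # lossless input mirror assumed

def intracavity_enhancement(length):
    """Crude 1-D Fabry-Perot estimate of the intracavity pump intensity,
    relative to the incident intensity, at a fixed pump wavelength."""
    delta = 4.0 * np.pi * length / LAM_PUMP          # round-trip phase
    r = np.sqrt(R1 * R2)
    return T1 / np.abs(1.0 - r * np.exp(1j * delta)) ** 2

for L in np.arange(735e-9, 946e-9, 15e-9):           # the experimental sweep
    print(f"L = {L*1e9:5.0f} nm -> enhancement = {intracavity_enhancement(L):6.2f}")
```

Even this toy model shows order-of-magnitude swings of the intracavity pump intensity across the 735-945 nm sweep, which is why the correction of Section S3 is needed.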
S4 Calculation of the coupling strengths and Hopfield coefficients
The spectral properties of the experimental system based on the donor-acceptor pair placed inside the microcavity were modelled using the Jaynes-Cummings Hamiltonian, which describes the interaction between a cavity mode and the dipole moments of the emitters. For our hybrid system, this Hamiltonian is as follows:

$$\hat{H}_{JC} = \hbar\omega_{cav}\,\hat{a}^{\dagger}\hat{a} + \hbar\omega_{D}\,\hat{\sigma}_{D}^{\dagger}\hat{\sigma}_{D} + \hbar\omega_{A}\,\hat{\sigma}_{A}^{\dagger}\hat{\sigma}_{A} + \hbar g_{D}\left(\hat{a}^{\dagger}\hat{\sigma}_{D} + \hat{a}\,\hat{\sigma}_{D}^{\dagger}\right) + \hbar g_{A}\left(\hat{a}^{\dagger}\hat{\sigma}_{A} + \hat{a}\,\hat{\sigma}_{A}^{\dagger}\right), \quad (S1)$$

where $\hat{a}$ ($\hat{a}^{\dagger}$) is the annihilation (creation) operator of the cavity photon, $\hat{\sigma}_{D(A)}$ are the lowering operators of the donor (acceptor) exciton transitions, $\omega_{cav}$, $\omega_{D}$ and $\omega_{A}$ are the corresponding transition frequencies, and $g_{D}$ and $g_{A}$ are the coupling strengths. It should be noted here that expression (S1) does not contain a term describing the direct interaction between the dipole moments of the chromophores. This is because this process is much slower than the energy exchange between the emitters and the cavity mode for the coupling strengths encountered in the current research [s7,s8]. In the single-excitation basis (photon, donor exciton, acceptor exciton), the Hamiltonian can also be written in matrix representation:

$$H = \begin{pmatrix} E_{cav} & \hbar g_{D} & \hbar g_{A} \\ \hbar g_{D} & E_{D} & 0 \\ \hbar g_{A} & 0 & E_{A} \end{pmatrix}.$$

We diagonalized this matrix using QuTiP [s9], which let us obtain the eigenstates and eigenvalues of this Hamiltonian. The values of the energies of the microcavity electromagnetic mode and of the donor and acceptor excitons were extracted from the experimental data. To find the coupling strengths of our hybrid system, we used the differential evolution method [s10], a stochastic method for finding extrema of functions of several variables. In order to obtain the correct experimental coupling strengths, we minimized the absolute value of the difference between the Hamiltonian eigenvalues and the energies corresponding to the experimental spectra maxima. As a result of this procedure, we obtained the magnitudes of $g_{\mathrm{FAM}}$ and $g_{\mathrm{TAMRA}}$, which are equal to 435 meV and 41 meV, respectively.
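As a hedged, self-contained illustration of this fitting step (using SciPy's differential evolution rather than the authors' exact script), the sketch below recovers known couplings from synthetic LPB peak positions; the synthetic data are stand-ins for the experimental spectra maxima, which are not reproduced here.

```python
import numpy as np
from scipy.optimize import differential_evolution

E_D, E_A = 2.36, 2.13                      # bare exciton energies (illustrative)

def lpb_energy(E_cav, g_D, g_A):
    """Lowest eigenvalue of the 3x3 single-excitation Hamiltonian."""
    H = np.array([[E_cav, g_D, g_A],
                  [g_D,  E_D,  0.0],
                  [g_A,  0.0,  E_A]])
    return np.linalg.eigvalsh(H)[0]

# Synthetic "measured" LPB peak positions, generated with the SI coupling
# values plus a little noise -- NOT the experimental data of the paper.
rng = np.random.default_rng(0)
e_cav = np.linspace(2.0, 2.6, 9)
e_meas = np.array([lpb_energy(e, 0.435, 0.041) for e in e_cav])
e_meas += rng.normal(scale=2e-3, size=e_meas.size)

def cost(params):
    """Sum of absolute deviations between model eigenvalues and 'data'."""
    g_D, g_A = params
    model = np.array([lpb_energy(e, g_D, g_A) for e in e_cav])
    return np.sum(np.abs(model - e_meas))

result = differential_evolution(cost, bounds=[(0.0, 0.8), (0.0, 0.2)], seed=1)
print("fitted g_D, g_A (eV):", np.round(result.x, 3))   # ~ (0.435, 0.041)
```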
In order to find the Hopfield coefficients, we obtained the eigenfunctions of the Hamiltonian with the coupling strengths corresponding to our experimental data. Every eigenfunction was represented as a superposition of the pure photon and exciton states. The coefficients of this decomposition define the Hopfield fractions of the polaritonic states.
|
2020-09-09T01:00:48.838Z
|
2020-09-07T00:00:00.000
|
{
"year": 2020,
"sha1": "f9188dac83b7ffd6f2dba20818e374791b5b8f43",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "f9188dac83b7ffd6f2dba20818e374791b5b8f43",
"s2fieldsofstudy": [
"Physics",
"Chemistry"
],
"extfieldsofstudy": [
"Physics"
]
}
|
11157508
|
pes2o/s2orc
|
v3-fos-license
|
Contrasting population genetic structure among freshwater-resident and anadromous lampreys: the role of demographic history, differential dispersal and anthropogenic barriers to movement
The tendency of many species to abandon migration remains a poorly understood aspect of evolutionary biology that may play an important role in promoting species radiation by both allopatric and sympatric mechanisms. Anadromy inherently offers an opportunity for the colonization of freshwater environments, and the shift from an anadromous to a wholly freshwater life history has occurred in many families of fishes. Freshwater-resident forms have arisen repeatedly among lampreys (within the Petromyzontidae and Mordaciidae), and there has been much debate as to whether anadromous lampreys, and their derived freshwater-resident analogues, constitute distinct species or are divergent ecotypes of polymorphic species. Samples of 543 European river lamprey Lampetra fluviatilis (mostly from anadromous populations) and freshwater European brook lamprey Lampetra planeri from across 18 sites, primarily in the British Isles, were investigated for 13 polymorphic microsatellite DNA loci, and 108 samples from six of these sites were sequenced for 829 bp of mitochondrial DNA (mtDNA). We found contrasting patterns of population structure for mtDNA and microsatellite DNA markers, such that low diversity and little structure were seen for all populations for mtDNA (consistent with a recent founder expansion event), while fine-scale structuring was evident for nuclear markers. Strong differentiation for microsatellite DNA loci was seen among freshwater-resident L. planeri populations and between L. fluviatilis and L. planeri in most cases, but little structure was evident among anadromous L. fluviatilis populations. We conclude that postglacial colonization founded multiple freshwater-resident populations with strong habitat fidelity and limited dispersal tendencies that became highly differentiated, a pattern that was likely intensified by anthropogenic barriers.
Introduction
Although the abandonment of migration remains a poorly understood aspect of evolutionary biology, there is evidence to suggest that this phenomenon might act as an initiator for adaptive radiation (Bell & Andrews 1997;Winker 2000;Räsänen & Hendry 2008;Langerhans & Riesch 2013). Differences in life history traits between resident and migrant individuals can be thought of as adaptive behaviours that act to increase growth, survival rate, fecundity and egg quality. This is reflected in the fitness outcomes of both life history strategies, with residency favoured when the cost of migration exceeds the benefits of doing so, particularly in terms of growth potential and mortality risk before reproduction (Fryxell & Sinclair 1988;Bell & Andrews 1997;Dingle 2006;Brönmark et al. 2008;Shaw & Couzin 2013).
Anadromy, which involves reproduction in freshwater and the majority of growth in the marine environment, is a distinctive migratory trait that is recognized in 18 fish families and 120 species (McDowall 1997;Chapman et al. 2012). Anadromy inherently offers an opportunity to colonize previously unexploited freshwater environments, and the shift from an anadromous to a wholly freshwater life history has occurred repeatedly in many taxa of fishes (e.g. Petromyzontiformes, Salmonidae, Gasterosteidae ;Potter 1980;Taylor et al. 1996;Lucas & Baras 2001). Glacial cycles may have supported the evolution of wholly freshwater forms by either blocking migration routes and preventing anadromy or, upon deglaciation, making available new habitat and food resources that are inaccessible through freshwater but easily reached by anadromous fish (Bell & Andrews 1997;Lee & Bell 1999).
The extent to which anadromy is obligatory varies among species. Many populations of anadromous fishes contain a component that does not migrate to sea and instead remains in freshwater where they mature and spawn. In some cases, they may subsequently move little, but in other cases migrate between distinct freshwater habitats (potamodromy), often reproducing with their anadromous conspecifics (Lucas & Baras 2001;McDowall 2001). 'Partial migration' is the term coined for this resident-migratory dimorphism within populations (Chapman et al. 2011), and it is widespread in mammals, invertebrates, birds (Lundberg 1988;Jahn et al. 2010) and fishes (Olsson & Greenberg 2004;Brodersen et al. 2008;Kerr et al. 2009;Chapman et al. 2012).
Incipient speciation in these systems may be promoted through both allopatric and sympatric mechanisms (Chapman et al. 2011). Reduced gene flow between migrants and freshwater-residents breeding in allopatry could promote differentiation by genetic drift or local adaptation. Conversely, population differentiation is limited by the large-scale dispersal capacity of migrants, resulting in a greater chance of panmixia (Hoarau et al. 2002;Coltman et al. 2007). Migratory populations that exhibit philopatry, or habitat fidelity, however, can maintain discrete genetic differences between populations within species. For example, anadromous Atlantic salmon (Salmo salar) undergo extended oceanic migrations, yet exhibit significant local adaptation and substantial reproductive isolation between populations owing to precise philopatry and a high homing fidelity to their natal river or tributary (Taylor 1991).
In contrast to anadromous salmonids, anadromous lampreys (Petromyzontiformes) generally show very low interpopulation differentiation across geographically distant river systems (Almada et al. 2008;Goodman et al. 2008) and have been shown to use pheromones released by stream dwelling larvae as partial cues to find suitable spawning habitats (Fine et al. 2004). An evolutionary trend among lampreys is the occurrence in most genera of 'paired species' (Zanandrea 1959), whereby larvae are morphologically indistinguishable, while the adults of two putative species adopt either a nonparasitic freshwater-resident or a parasitic life history which can be either potamodromous or anadromous.
Nonparasitism has arisen repeatedly among lampreys (Docker 2009) and even within species (Espanhol et al. 2007), suggesting that feeding type is plastic and nonparasitic lineages may be polyphyletic (Docker 2009;Renaud et al. 2009). Nonetheless, there has been much controversy about the taxonomic status of many paired lamprey species (Zanandrea 1959;Hardisty 1986a;Schreiber & Engelhorn 1998;Youson & Sower 2001;Gill et al. 2003;Renaud et al. 2009;Docker et al. 2012). Although various studies have found little genetic differentiation between lamprey paired species (Docker et al. 1999;Yamazaki et al. 2006;Espanhol et al. 2007;Blank et al. 2008;Lang et al. 2009), Mateus et al. (2013b) found significant differentiation between sympatric European river and brook lamprey populations in Portugal based on nuclear genomic data, and Taylor et al. (2012) report differentiation between anadromous and freshwater-resident parasitic lampreys in British Columbia based on eight microsatellite DNA loci.
Here, we explore the population genetics of the anadromous European river lamprey (Lampetra fluviatilis L. 1758) and its nonparasitic freshwater-resident derivative the European brook lamprey (Lampetra planeri Bloch 1784), together with several L. fluviatilis populations that comprise potamodromous individuals that migrate within freshwater only (i.e. freshwater-residents; Maitland et al. 1994;Inger et al. 2010). We use a combination of mtDNA and microsatellite nuclear DNA markers to test the hypothesis that the postglacial expansion of anadromous L. fluviatilis during the Holocene prompted the establishment of multiple freshwater-resident L. planeri populations that subsequently became genetically differentiated. We also investigated the possibility that anthropogenic barriers are isolating lamprey populations and provide a robust quantitative assessment of this. In some freshwater fishes, the fragmentation of habitats by dams can promote genetic differentiation between the upstream and downstream populations resulting from the reduction of gene flow, often compounded by founder effects and subsequent genetic drift (Yamamoto et al. 2004;Palkovacs et al. 2008). Population divergence and dispersal at local to catchment scales were examined enabling inference about population connectivity and evolutionary viability, which may indicate important applications in conservation management (Latta 2008) and enhance our understanding of the systematics of these ancient fish.
Sampling and DNA isolation
Tissue samples were collected across a total of 18 sites (Fig. 1, Table S1, Supporting information). Unlike, for example, in some Baltic regions (Sjöberg 2011), there is no evidence or likelihood of historical stocking or translocation of lampreys at any of these sites. MtDNA loci were examined in n = 108 individuals from six sites including two paired sites (i.e. where Lampetra planeri and Lampetra fluviatilis were obtained from the same river; Table S1, Supporting information, Table 1, Fig. 1). For microsatellite loci, 543 samples were collected from 18 sites, including seven paired sites (PS; Table S1, Supporting information, Fig. 1). One of these paired sites also included a freshwater-resident L. fluviatilis population (PS7; Loch Lomond, Scotland, Table S1, Supporting information). Three additional sites for L. fluviatilis were also included in the analysis (sites 9, 17, 18, Table S1, Supporting information, Fig. 1); one of which is a freshwater-resident population of L. fluviatilis in the River Bann (site 17). In Loch Lomond (PS7 in Table S1, Supporting information; Scotland), all three 'ecotypes' (i.e. L. planeri, L. fluviatilis and freshwater-resident L. fluviatilis) are truly sympatric; however, in all other paired sites, L. planeri samples were obtained upstream (within the same river) of anadromous L. fluviatilis populations, which were usually separated by migration barriers (Table S2, Supporting information). It should also be noted that the location from which the River Swale L. planeri samples were obtained is a spawning site for both L. planeri and sometimes L. fluviatilis.
Samples were obtained by hand-netting, electro-fishing and the use of static double-funnel traps to capture spawning and upstream-migrating lampreys (Table S1, Supporting information). Both L. fluviatilis and L. planeri were sampled where they were found to be locally abundant prior to the spawning period and so were, in most cases, captured in the vicinity of their spawning grounds. L. planeri were normally captured in the upstream reaches of rivers where they were abundant, and in all cases, except at the Endrick Water, Loch Lomond, were sampled upstream of the L. fluviatilis spawning areas. Only adult and juvenile lampreys, unambiguously identifiable to species, were included in this study. Adult anadromous and freshwater-resident L. fluviatilis (e.g. Loch Lomond, Morris 1989; and the R. Bann, Goodwin et al. 2006), as well as nonparasitic L. planeri, can be separated using standard lamprey taxonomic characteristics (Renaud 2011). Individuals were identified and measured under anaesthesia (MS-222, 0.1 g/L) using a field key (Gardiner 2003), and fin clips taken from the second dorsal fin were stored in 20% DMSO-saturated NaCl solution (Amos & Hoelzel 1991). Total genomic DNA was extracted from samples using a proteinase K digestion procedure followed by the standard phenol-chloroform method and stored at −20°C.
Figure 1 caption (fragment): see Table S1, Supporting information for detail. Inset is a detailed map of part of the Ouse subcatchment of the Humber catchment, showing sampling locations. Only sampled rivers are shown.
Amplification and sequencing of mitochondrial DNA
The PCR primers ATPfor and ATPrev (Espanhol et al. 2007) were used to amplify 838 bp of the mitochondrial gene ATPase subunits 6 and 8. This locus was chosen to facilitate comparison with previous data from Espanhol et al. (2007) and Mateus et al. (2011). Each 20 µL reaction contained 1.2 µL (final conc. 1.5 mM) MgCl2, 2 µL dNTPs (2.0 mM), 0.2 µL of each primer (10 mM), 4 µL of Colorless GoTaq® Reaction Buffer (Promega), 0.1 µL GoTaq DNA polymerase (Promega) and 1 µL of template DNA. Cycle conditions were as follows: initial denaturation at 94°C for 3 min; followed by 30 cycles of denaturation at 94°C for 1 min, annealing at 57.1°C for 1 min and extension at 72°C for 2 min; followed by a final extension at 72°C for 2 min. The resulting PCR products were purified using the Qiagen PCR Purification kit and sequenced using an ABI PRISM 3730 DNA Analyser (DBS Genomics, Durham University).
Microsatellite loci were multiplex amplified using a Qiagen Multiplex kit. Thermal cycler conditions were as follows: initial denaturation at 95°C for 15 min; followed by 35 cycles of denaturation at 94°C for 30 s, annealing at 60°C for 90 s and extension at 72°C for 60 s; followed by a final extension at 60°C for 30 min. PCR products were genotyped on a 3730 ABI DNA Analyser (DBS Genomics, Durham, UK) and visualized with Geneious vR6 (Biomatters). Microsatellite loci were tested for null alleles, large allele dropout and scoring errors due to stutter peaks using MICROCHECKER 2.2.3 (van Oosterhout et al. 2004). The program ARLEQUIN 3.5 (Excoffier & Lischer 2010) was then used to test deviation from Hardy-Weinberg equilibrium. Tests for linkage disequilibrium were carried out for each pair of loci using an exact test based on a Markov chain method as implemented in GENEPOP 4.2 (Raymond & Rousset 1995;Rousset 2008). The program Lositan (Antao et al. 2008) was used to test for outliers indicating positive or balancing selection (using a forced neutral mean F ST, a confidence interval of 0.99 and a false discovery rate of 0.1), and no loci with evidence for selection were found.
Table 1 note: MtDNA analysis was performed on only a subset of the 543 lampreys and 18 sites used for the microsatellite analysis. The 'Site No.' column corresponds to the site numbers in Fig. 1 and Table S1 (Supporting information).
Genetic diversity and structure
MtDNA sequences were aligned manually using GENEIOUS vR6 (Biomatters). The program DNASP 10.4.9 (Rozas et al. 2003) was then used to calculate mitochondrial DNA polymorphism, estimated as haplotypic diversity (Nei & Tajima 1981) and nucleotide diversity (Nei 1987). To determine the level of genetic differentiation between pairs of populations, F-statistics (Weir & Cockerham 1984) were calculated for mtDNA and microsatellite DNA loci using ARLEQUIN version 3.5. Significance was tested using 1000 permutations. ARLEQUIN was also used to calculate Fu's F, Tajima's D and mismatch distributions. We estimated the putative time of population expansion from the mismatch distribution using the statistic tau (τ; Rogers & Harpending 1992). The substitution rate was estimated after Ho et al. (2007), who suggest an average of ~50% per site per million years for the control region, based on recent evolutionary time frames, although of course this varies among species. The substitution rate for the control region can be ten times faster than that of the rest of the mitochondrial genome (McMillan & Palumbi 1997). Therefore, 5% per site per million years was used as a rough estimate for ATPase. Mutation rates of 1% and 10% per million years were also used to illustrate the effect that the rate of divergence has on the expansion times. The relationship between haplotypes was investigated using a median-joining network (MJN) constructed with the program NETWORK 3.1.1.1 (Bandelt et al. 1999); epsilon values of 0, 10, 20 and 30 were tested. For microsatellite DNA data, allelic richness for each locus and population and F IS (inbreeding coefficient) were calculated using the program FSTAT 2.9.3 (Goudet 1995). STRUCTURE 2.0 was used to assign individuals by genotype to a putative number of populations (K; Pritchard et al. 2000). ΔK, a measure of the second-order rate of change in the likelihood of K (Evanno et al. 2005), was calculated using STRUCTURE Harvester (Earl & vonHoldt 2012) to assess the highest hierarchical level of structure. Four independent runs for each K value were performed at 2 000 000 Markov chain Monte Carlo (MCMC) repetitions and 500 000 burn-in, using no prior population information and assuming correlated allele frequencies and admixture. STRUCTURE was also used with a location prior (LOCPRIOR) to clarify population structure within the Loch Lomond system (Hubisz et al. 2009). Burn-in and run lengths were the same as for runs without prior population information. Due to the large number of putative population subdivisions, subsamples were compared by region to increase resolution, in addition to an analysis involving all regions. Full-sibling pairs within a sampling site (for the five localities where there are populations of both putative species: Wear, Dee, Derwent, Nidd & Ure) were identified using the maximum-likelihood method in COLONY version 2.0.1.1, with male and female polygamy permitted and a medium run length (Jones & Wang 2010).
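The ΔK statistic of Evanno et al. (2005) named above is straightforward to compute from replicate STRUCTURE log-likelihoods: ΔK = mean(|L(K+1) − 2L(K) + L(K−1)|) / sd(L(K)). A small numpy sketch with placeholder values (not the study's STRUCTURE output) follows.

```python
import numpy as np

def delta_k(log_likelihoods):
    """Evanno et al. (2005) Delta-K from STRUCTURE output.

    log_likelihoods: array of shape (n_K, n_runs) holding ln P(X|K) for
    K = 1..n_K across replicate runs. Returns Delta-K for K = 2..n_K-1
    (the statistic is undefined at the end points).
    """
    L = np.asarray(log_likelihoods, dtype=float)
    # |L''(K)| = |L(K+1) - 2 L(K) + L(K-1)|, averaged over replicate runs
    second = np.abs(L[2:] - 2.0 * L[1:-1] + L[:-2]).mean(axis=1)
    sd = L[1:-1].std(axis=1, ddof=1)
    return second / sd

# Placeholder likelihoods (4 runs per K, K = 1..5), purely for illustration.
ll = np.array([[-9000, -9010, -9005, -9002],
               [-8500, -8495, -8505, -8498],
               [-8480, -8478, -8482, -8479],
               [-8475, -8476, -8474, -8477],
               [-8473, -8474, -8472, -8475]])
print(np.round(delta_k(ll), 1))   # a sharp peak marks the best-supported K
```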
Patterns of microsatellite differentiation were subsequently examined using a factorial correspondence analysis (FCA) implemented in GENETIX 4.05.2 (Belkhir et al. 1996-2004), which gives a visual representation of individual genotype clustering. A test for a positive association between genetic [F_ST/(1 − F_ST)] and geographic distances [isolation by distance (IBD)] based on microsatellite DNA loci was carried out using a Mantel test (10 000 permutations) in GENEPOP v4.2. Geographic distances were calculated between sample sites using linear referencing tools in Quantum GIS (Lisboa). A Mantel test was also carried out to test for association between genetic distances and the number of physical barriers (defined as any anthropogenic feature larger than 0.5 m height at base river level which reaches the full width of the river) between sample sites. The 0.5 m value was subjective, based on the fact that many structures of this height or greater generate discrete water level differences (upstream-downstream) at base flows, on published and unpublished data on the impact of different height potential barriers on lamprey movement (L. fluviatilis, Lucas et al. 2009; L. planeri, M. C. Lucas personal observation), and on our ability to identify potential barriers in field surveys and databases. Only river systems for which information on barriers was available were utilized in the Mantel tests (including the Dee, Wear and all rivers within the Ouse subcatchment, excluding the Swale due to the low sample size attained for L. planeri).
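The Mantel procedure itself is simple to sketch; the numpy version below illustrates the permutation logic used to relate distance matrices, but it is not GENEPOP's code, and the input matrices are placeholders:

```python
# Minimal permutation Mantel test between two square distance matrices
# (e.g. Fst/(1-Fst) versus river distance or barrier counts).
import numpy as np

def mantel(x, y, n_perm=10_000, rng=None):
    rng = rng or np.random.default_rng(0)
    iu = np.triu_indices_from(x, k=1)        # upper triangle, no diagonal
    obs = np.corrcoef(x[iu], y[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(x.shape[0])
        x_perm = x[np.ix_(perm, perm)]       # permute rows and columns together
        if np.corrcoef(x_perm[iu], y[iu])[0, 1] >= obs:
            count += 1
    # one-tailed P value for a positive association
    return obs, (count + 1) / (n_perm + 1)
```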
MIGRATE-N (v 3.2.6) was used to estimate levels of historical gene flow between populations (Beerli & Felsenstein 2001; Beerli 2006; Beerli & Palczewski 2010). Pairwise comparisons were carried out between putative species (i.e. L. fluviatilis and L. planeri) at six locations (Wear, Dee, Lomond, Nidd, Ure, Derwent), of which the latter three are all tributaries in the same river catchment, where samples from both species were available. To implement Bayesian inference in MIGRATE-N, the Brownian motion approximation was selected with an MCMC search of 100 000 burn-in steps followed by 5 000 000 steps with parameters recorded every 100 steps; an exponential prior on theta (min: 0, mean: 30, max: 60); and an exponential prior on migration (min: 0, mean: 650, max: 1300). MIGRATE-N was run with parameter values starting from F_ST-based estimates, and the distribution of parameter values was compared across runs to ensure overlap of 95% CIs. BAYESASS 1.3 (Wilson & Rannala 2003) was used to estimate the magnitude and directionality of contemporary gene flow between L. fluviatilis and L. planeri. Pairwise comparisons were carried out for the same six locations that were used in the MIGRATE-N analysis. In contrast to MIGRATE-N, BAYESASS estimates all pairwise migration rates rather than a user-defined migration matrix and provides unidirectional estimates of migration for each population pair. BAYESASS does not assume a migration-drift equilibrium, an assumption that is frequently violated in natural populations (Whitlock & McCauley 1999). A total of 10 000 000 MCMC iterations were run, of which 1 000 000 were for the burn-in. All other options were left at their default settings. Five to 10 runs with a different starting point were performed for each population pair and results are given as means. The program TRACER version 1.5 (Rambaut & Drummond 2007) was used to qualitatively assess MCMC convergence.
MtDNA
ATPase subunits 6 and 8 were sequenced and haplotypes determined for 108 lampreys (Lampetra fluviatilis and Lampetra planeri) from six sampling sites (Table 1). Over all populations, haplotype and nucleotide diversity were low, with freshwater-resident populations of both L. planeri and L. fluviatilis generally exhibiting lower haplotype and nucleotide diversity than the anadromous L. fluviatilis populations. Both Tajima's D and Fu's F were negative and highly significant (Table 1), consistent with a population expansion (e.g. after a bottleneck) or a selective sweep. Using the value of tau, which was 0.673 (Fig. S1, Supporting information), an expansion time of 16 236 (10 182-26 952; 95% CI) years ago was calculated using the mutation rate of 5% per million years. Using mutation rates of 1% and 10%, expansion times would be 81 182 and 8118 years ago, respectively. Sixteen haplotypes were observed, with private haplotypes found only in the L. planeri population from the River Nidd and no species-specific lineages (see median-joining network in Fig. 2a). F_ST values between sites ranged from 0.01955 to 0.94093, with only F_ST values associated with the Nidd (L. planeri) being statistically significant (P < 0.0001; Table 2).
A network showing the European haplotype distribution, incorporating data from Espanhol et al. (2007) and Mateus et al. (2011), revealed 46 haplotypes, with Portuguese populations being visibly further removed from the majority of other samples (Fig. 2b). Identified lineages were concordant with those reported by Mateus et al. (2011) and, as observed by Espanhol et al. (2007), not species specific. Clades I, II and III were considered to be composed of adult L. planeri (Mateus et al. 2011; now regarded as three cryptic species, L. alavariensis, L. auremensis and L. lusitanica, Mateus et al. 2013a) and larvae of unknown specific status, while clade IV comprises L. planeri, anadromous and freshwater-resident L. fluviatilis adults and larvae.
Microsatellite analysis
A total of 543 lampreys were genotyped at thirteen loci. All loci were in Hardy-Weinberg equilibrium and not impacted by null alleles for most populations, and there were no consistent issues for any given population (Table S4, Supporting information). In COLONY, tests for the proportion of putative full-siblings (as an indicator of close kin) in populations of either species showed this to be rare, 0% in some cases for both species, and no higher than 0.74%. One randomly chosen individual of each full-sibling pair was excluded, and the analysis was repeated. There were no differences that affected inference in the results with full-siblings included or excluded, so all individuals were included in the analysis. STRUCTURE analyses consistently identified L. planeri populations as being separate from L. fluviatilis populations (anadromous and freshwater-resident) and from each other (Fig. 4). The only exception was the small sample of L. planeri on the Swale compared to the L. fluviatilis population downstream on the same river (Fig. S2a, Supporting information). Figure 4a shows the most likely population structure among 12 sampling locations in England and Wales (excluding the Scottish Loch Lomond system) incorporating both species, where K = 6 showed the highest LnP(D) (Fig. S3, Supporting information). Lampetra fluviatilis samples appear as a single mixed population. Representing a higher hierarchical level, ΔK = 2 primarily supports separation of L. fluviatilis and L. planeri (Figs S3 and S4, Supporting information). Figure S4c (Supporting information) shows a comparison across all populations where K = 9 [the highest LnP(D) outcome]. There were several peaks for ΔK at K = 2, 5 and 8 (Fig. S4d, Supporting information), but the maximum LnP(D) result (K = 9) was most informative, distinguishing all L. planeri and L. fluviatilis freshwater-resident populations (with the exception of the L. planeri population in the Swale).
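The ΔK statistic referred to above (Evanno et al. 2005) is computed directly from LnP(D) values across replicate STRUCTURE runs; STRUCTURE Harvester automates this. A minimal sketch with hypothetical likelihood values:

```python
# Evanno et al. (2005) delta-K from STRUCTURE log-likelihoods.
import numpy as np

def delta_k(lnp):
    # lnp: array of shape (n_K, n_runs), rows ordered K = 1..n_K
    mean = lnp.mean(axis=1)
    sd = lnp.std(axis=1, ddof=1)
    # second-order rate of change |L(K+1) - 2 L(K) + L(K-1)|, defined for
    # interior K only, scaled by the s.d. across runs at that K
    second = np.abs(mean[2:] - 2 * mean[1:-1] + mean[:-2])
    return second / sd[1:-1]

lnp = np.array([[-4800, -4795],   # K = 1 (hypothetical LnP(D) values)
                [-4700, -4705],   # K = 2
                [-4690, -4688],   # K = 3
                [-4685, -4689]])  # K = 4
print(delta_k(lnp))               # delta-K for K = 2 and K = 3
```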
When only L. planeri populations were compared, the highest likelihood result identified all populations as distinct (Fig. 4b). In this case, ΔK was 4 (Figs S3 and S4, Supporting information); however, this linked samples from the Nidd with the Dee, and Loch Lomond with the Derwent, in each case populations on opposite sides of the British Isles (see Fig. 1; Fig. S4b, Supporting information). When only anadromous L. fluviatilis populations were compared, the outcome was K = 1 (not shown). The Loch Lomond system (which contains anadromous L. fluviatilis, freshwater-resident L. fluviatilis and L. planeri populations) was compared to an anadromous L. fluviatilis population (Nidd) and another freshwater-resident L. fluviatilis population (Bann). STRUCTURE identified three populations with highest likelihood, while ΔK was 2 (Fig. 4c; Fig. S3, Supporting information). Using prior location information for Loch Lomond, five populations were identified. However, ΔK = 2, showing differentiation at a higher hierarchical level between the freshwater-resident L. fluviatilis population in Loch Lomond and the other populations (Fig. S2b,c, Supporting information). Location priors did not provide any useful additional inference for other analyses. The FCA plots support essentially the same clusters as identified in STRUCTURE, showing L. fluviatilis as being dominated by one large grouping, with the freshwater-resident populations differentiated (Fig. S5a, Supporting information), and L. planeri populations as all being separated from each other (Fig. S5b, Supporting information). Mantel tests for correlation between genetic and geographic distance showed a significant negative trend for L. planeri populations (R² = 0.2963; P < 0.05; Fig. 5a) and a weak but significant positive linear relationship for all L. fluviatilis populations (R² = 0.0841; P < 0.05). However, when freshwater-resident L. fluviatilis populations were excluded (Bann and Loch Lomond), the positive relationship was much stronger (R² = 0.40, P < 0.0001; Fig. 5b). Mantel tests examining correlations between genetic distance and the number of barriers along migration/dispersal routes for L. fluviatilis and L. planeri (populations included as described in the methods) showed a highly significant positive correlation (R² = 0.8256, P < 0.0001; Fig. 5c).
Migration rate estimates between species (using MIGRATE-N) ranged from 3.73 to 10.43 migrants/generation from L. fluviatilis to L. planeri, and from 4.18 to 16.28 from L. planeri to L. fluviatilis (Table S6, Supporting information). The six pairwise comparisons all suggested asymmetric gene flow greater in the direction from L. planeri to L. fluviatilis (which apart from Loch Lomond was always in the downstream direction), but 95% confidence intervals were large and overlapping. The BAYESASS analysis indicated low-level contemporary gene flow between the putative species, and some comparisons also suggest the downstream direction from L. planeri to L. fluviatilis (especially in the Derwent; Table S7a,b, Supporting information). It also indicated ongoing gene flow between the three forms in the Loch Lomond system (Table S7b, Supporting information).
Population history
This study was based in a geographic region that has undergone profound cyclical changes over the course of the Pleistocene (2.58 Ma-11 700 years ago), with suitable riverine habitat available only during interglacial periods (Hays et al. 1976). For our study sites in the UK, mtDNA failed to show any differentiation between the two putative Lampetra species or among populations, which is consistent with data for some other northern European populations (Espanhol et al. 2007). While this may suggest ongoing gene flow or the incomplete sorting of ancestral polymorphisms, it is also consistent with recent founder events establishing these populations. The network analyses and neutrality tests support this, indicating small founder populations and subsequent expansion. Conversely, for populations in southern Europe where the climate has been more stable over time, there are far higher nucleotide diversities and significant mtDNA phylogeographic structuring (Pereira et al. 2010; Mateus et al. 2011). Espanhol et al. (2007) suggested that Lampetra planeri in Europe may be polyphyletic and have originated within at least two evolutionary lineages, possibly the result of independent divergence events from Lampetra fluviatilis with the repeated loss of anadromy. Pereira et al. (2010) have since found several Portuguese populations of L. planeri which are isolated among themselves and also from the anadromous lamprey population. These populations had only private haplotypes, suggesting that a significant amount of time had passed to establish independent evolutionary histories. (Figure caption fragment, Table S5: numbers on axes are marked with a square to represent L. planeri and a circle to represent freshwater-resident L. fluviatilis.)
The fact that genetically distinct non-migratory Lampetra populations are found in many Portuguese rivers (Pereira et al. 2010; Mateus et al. 2011, 2013a) suggests lamprey were once more abundant and widespread in Iberia. The higher levels of divergence shown in our mtDNA median-joining network that included Portuguese lampreys (Fig. 2), compared to other populations examined across Europe, also suggest that sufficient time may have passed to establish a complex of incipient freshwater-resident species, although further nuclear DNA data would help resolve this question. Similar processes generating multiple origins have been suggested, for example, in the marine to freshwater transitions of three-spine sticklebacks (Gasterosteus aculeatus; Hohenlohe et al. 2010).
Our study estimates the expansion time of L. planeri and L. fluviatilis populations in the British Isles and northern Europe as 16 236 (10 182-26 952) years ago using tau and a mutation rate of 5% per million years, which roughly coincides with the last glacial maximum (19 000-26 000 years ago; Clark et al. 2009). The Pleistocene climatic fluctuations impacted much of Europe (Hays et al. 1976; Webb & Bartlein 1992) and significantly influenced the distribution and genetic diversity of plants and animals (Hofreiter & Stewart 2009). In addition to cycles of habitat loss and release as glaciers extended and receded, the 'refugium theory' proposes that temperate species survived the glacial maxima in southern refugia and colonized northern latitudes during interglacial periods (Taberlet et al. 1998; Hewitt 2000). The results shown here, coupled with data from the Iberian Peninsula, suggest that southern latitudes served as an important refugium for Lampetra during the Pleistocene glaciations, intermittently acting as a point of dispersal for postglacial expansion (Espanhol et al. 2007; Mateus et al. 2012, 2013b). Therefore, there may have been a tendency during interglacial periods, while anadromous Lampetra were expanding northwards, for populations at lower latitudes to abandon anadromy and eventually become restricted to freshwater. This is consistent with the findings of a recent study utilizing restriction site associated DNA sequencing (RAD-seq) that identified strong genetic differentiation between sympatric L. fluviatilis and L. planeri in the Iberian Peninsula, with numerous fixed and diagnostic single nucleotide polymorphisms (SNPs) between the two putative species, some associated with genes related to osmoregulation (Mateus et al. 2013b). A study using RAD sequencing to compare Pacific lamprey (Entosphenus tridentatus) geographic populations also found evidence consistent with local adaptation (Hess et al. 2013). Our median-joining network in Fig. 2 shows that for the available samples, only clade IV shares a haplotype with the lineage representing the northern expansion, suggesting a possible link between these lineages (with clade IV providing the ancestor of the anadromous group that founded the postglacial population in northern Europe). With expansion into previously unoccupied territory, it is expected that genetic diversity should decrease from the south to the north (Hewitt 2000), consistent with our findings.
Population structure
In contrast to mtDNA, we found considerable structure at microsatellite DNA loci between L. fluviatilis and L. planeri populations, especially among populations of L. planeri, but much less among anadromous L. fluviatilis populations. Anadromous lampreys (Lethenteron spp.) in Japan (Yamazaki et al. 2011) and Petromyzon marinus in North America (Bryan et al. 2005) and Europe (Almada et al. 2008) exhibit similar levels of panmixia, with little or no genetic structure, despite their widespread distribution. Spice et al. (2012) found that Pacific lamprey along the west coast of North America showed low but significant differentiation among locations. However, instead of being philopatric like many other anadromous fish species (McDowall 2001), differentiation was suggested to be due to greater restrictions to dispersal at sea compared to other anadromous lamprey species. The lack of population structure found in our study was, therefore, consistent with the general lack of natal homing seen for other anadromous lamprey species. The absence of a clear genetic signal for species-level differences between anadromous and freshwater-resident populations is consistent with findings for other paired lamprey species (Espanhol et al. 2007; Hubert et al. 2008; Docker 2009; April et al. 2011; Mateus et al. 2011; Boguski et al. 2012; Docker et al. 2012). Greater differentiation among populations within L. planeri, than between L. planeri and L. fluviatilis, suggests the unexpected pattern of greater gene flow between the putative species than within L. planeri (while the greatest gene flow occurs among populations of L. fluviatilis). Gene flow between the putative species may be possible owing to a combination of interspecific nest association (Huggins & Thompson 1970; Lasne et al. 2010) and sneaker male behaviour (Malmqvist 1983; Hume et al. 2013). As larvae of both species tend to move downstream through voluntary and involuntary drift behaviour (Hardisty & Potter 1971a; Moser et al. 2015), the distribution and overlap of spawning adults of the two species ultimately depends on a combination of the degree of downstream drift of L. planeri from upstream tributaries where they predominate, towards L. fluviatilis-dominated zones, and the subsequent upstream movements of freshwater-resident L. planeri and anadromous or freshwater-resident L. fluviatilis (Hardisty & Potter 1971b; Malmqvist 1980). Both assignment (BAYESASS) and coalescent (MIGRATE-N) methods suggested directionality in genetic migration, favouring the direction of L. planeri to L. fluviatilis, although the confidence limits were broad. Asymmetric gene flow occurring in these types of freshwater systems can significantly influence the distribution of genetic variation, with downstream populations typically exhibiting higher genetic diversity than headwater populations (Caldera & Bolnick 2008; Morrissey & de Kerckhove 2009; Julian et al. 2012). Yamazaki et al. (2011) found gene flow to exist at multitemporal scales between 'potentially sympatric' lamprey populations and suggested ongoing gene flow was the result of imperfect size-assortative mating and the plastic determination of life histories. The observed increase in genetic diversity as one moves downstream towards the lower reaches of the river could result from historical patterns of colonization, with contemporary dispersal reflecting movement bias, fragmented habitat or the presence of dispersal barriers (Morrissey & de Kerckhove 2009; Dehais et al. 2010). Asymmetric gene flow would be expected if L. planeri populations remain primarily resident further up the catchments, with occasional migrants moving further downstream to where they may encounter spawning L. fluviatilis.
Connectivity and anthropogenic factors
Mantel tests for isolation by distance revealed a positive correlation between geographic and genetic distance for anadromous L. fluviatilis, and a counterintuitive negative correlation among L. planeri populations (Fig. 5). However, while the correlation for L. fluviatilis was significant (especially when freshwater-resident L. fluviatilis were omitted), and consistent with expectations (implying that long-range dispersal is less common), the correlation with L. planeri was weak and showed a broad range of values for a given distance (see Fig. 5a). The L. planeri correlation may, therefore, simply reflect a stochastic pattern or ancestral relationships.
The number of anthropogenic barriers between populations was found to be significantly positively correlated with genetic distance, and such barriers have been shown to limit the upstream migration of L. fluviatilis (Lucas et al. 2009). Anthropogenic barriers could therefore be amplifying (beyond natural processes) the isolation of L. planeri populations by inhibiting the upstream movement of anadromous L. fluviatilis and preventing gene flow from being mediated in this manner between populations. Meldgaard et al. (2003) also detected a statistically significant increase of F_ST with the number of weirs between grayling (Thymallus thymallus) populations in a Danish river system. Similar decreases of genetic diversity from downstream towards upstream populations have been observed in other fish species in relation to anthropogenic barriers (Yamamoto et al. 2004; Caldera & Bolnick 2008; Raeymaekers et al. 2009). Yamazaki et al. (2011) found freshwater-resident nonparasitic lamprey populations in the upper regions of dammed rivers to be genetically divergent from seasonally sympatric, anadromous, parasitic populations. This pattern is consistent with a scenario where barriers amplify the asymmetry of gene flow from upstream towards downstream sites by allowing some passive downstream drift, while obstructing active upstream migration. Spice et al. (2012) also found that larvae from an anadromous population of E. tridentatus at a spawning site upstream of nine dams (which only a small number of adults successfully pass each year) exhibited higher genetic differentiation (i.e. higher F_ST values) than most other population comparisons.
When a freshwater-resident lamprey population is physically isolated from anadromous parasitic populations (which may mediate gene flow between freshwater-resident populations), acceleration in genetic divergence may result in the subsequent establishment of allopatric speciation (Yamazaki & Goto 2000). It is probable, however, that freshwater-resident L. planeri populations would have become, and tended to remain, isolated without the added anthropogenic hurdles, as there is a degree of population separation that is due to the natural extent of upstream migration in anadromous L. fluviatilis. As previous studies have shown, this is usually limited to higher order channels, and individuals do not generally penetrate the smaller streams even where access is unhindered by barriers (Hardisty & Potter 1971c;Hardisty 1986b).
The system in Loch Lomond offers evidence of the potential for gene flow between morphologically differentiated ecotypes, indicating that where they are found sympatrically, gene flow between L. fluviatilis and L. planeri can occur. This scenario is also supported by the lack of evidence for differentiation between the geographically proximate L. fluviatilis and L. planeri populations on the River Swale, although the sample size for the latter population was small (Fig. S2, Supporting information). Similarly, Docker et al. (2012) found no genetic differentiation between silver (Ichthyomyzon unicuspis) and northern brook (I. fossor) lampreys occurring sympatrically (also using microsatellite loci), but did find differentiation among parapatric populations. Yamazaki et al. (2011) also found a lack of differentiation between sympatric populations of Arctic lamprey (Lethenteron camtschaticum) and its nonparasitic derivatives in the Ohno River, Japan.
The BAYESASS analysis suggests that contemporary gene flow is occurring between all three populations in Loch Lomond, consistent with a tendency for interbreeding when there are no environmental barriers to limit connectivity. The divergence of the freshwater-resident L. fluviatilis population would then suggest a period of differentiation in isolation. Therefore, in Loch Lomond, the anadromous strategy is also paralleled by a population component with potamodromous behaviour, with some fish apparently showing migration mostly between the loch and spawning streams. While all three Loch Lomond populations were significantly differentiated from each other (Fig. 3, Table S5, Supporting information), there were also data indicating contemporary gene flow among them, and the anadromous L. fluviatilis and sympatric L. planeri populations both showed evidence of connectivity with the wider L. fluviatilis populations.
Conclusions
Alternative life history strategies are common among fishes inhabiting postglacial lakes, often resulting from adaptation to different foraging strategies or environments (Robinson & Parsons 2002). This is one of the best supported mechanisms for speciation in sympatry, for example among cichlid species in Holocene lakes (Barluenga et al. 2006). The divergence of multiple independent populations is a common trend in the evolution of diversity for diadromous fish (Schluter & Nagel 1995; Waters & Wallis 2001), and a number of studies have shown the influence of glacial movement within the Holocene on the phylogeographical structure of freshwater fishes (Harris & Taylor 2010; Boguski et al. 2012). However, in our study, the geographic scale is small for the extent of differentiation observed. It is apparent that at an initial stage, there was a postglacial expansion of anadromous Lampetra fluviatilis from southern refugia and the subsequent establishment of multiple freshwater-resident Lampetra planeri populations. These may have been relatively small founder groups that retained some degree of reproductive isolation, likely intensified, although perhaps not entirely determined, by the anthropogenic introduction of barriers. Moreover, it was ascertained that there is gene flow between L. fluviatilis and L. planeri on both long-term and contemporary timescales, and that the pattern of gene flow is apparently asymmetric. This has significant implications for the management of L. planeri populations, and the extent to which this is underpinned by natural processes will have important evolutionary implications with respect to the mechanisms that generate diversity. Our data emphasize the importance of founder events in the evolution of diversity among populations and as a frequent component of the speciation process (Templeton 2008). These data also strongly support a scenario of multitemporal and multispatial radiation. In contrast to the higher levels of Lampetra divergence present in the Iberian Peninsula, the northern European populations appear to have been established relatively recently, and the process of differentiation is still ongoing. There may be a natural tendency towards speciation in freshwater-resident populations that remain environmentally stable over time, but instead a dynamic process at higher latitudes, which experience a cycle of habitat loss and release.
Institute CDT). JH was funded by the University of Glasgow and a Scottish Natural Heritage award.
Supporting information
Additional supporting information may be found in the online version of this article.
Table S1
Numbers collected and location of origin for all genetic samples of Lampetra planeri and Lampetra fluviatilis and by whom they were collected. (Supporting figure caption fragment: (a) where Swale Lp is compared to another Lampetra planeri population and Lampetra fluviatilis from the same river; (b) ΔK = 2 when prior location information is used to analyse the Loch Lomond system, which shows the freshwater-resident Lomond population to be differentiated; and (c) ΔK = 5 (LnP(D) = −4681) when a location prior is used.)
Ordered groupoid quotients and congruences on inverse semigroups
We introduce a preorder on an inverse semigroup $S$ associated to any normal inverse subsemigroup $N$, that lies between the natural partial order and Green's ${\mathscr J}$-relation. The corresponding equivalence relation $\simeq_N$ is not necessarily a congruence on $S$, but the quotient set does inherit a natural ordered groupoid structure. We show that this construction permits the factorisation of any inverse semigroup homomorphism into a composition of a quotient map and a star-injective functor, and that this decomposition implies a classification of congruences on $S$. We give an application to the congruence and certain normal inverse subsemigroups associated to an inverse monoid presentation.
INTRODUCTION
Let S be an inverse semigroup with semilattice of idempotents E(S). Recall that the natural partial order on S is defined by s ≤ t ⇐⇒ there exists e ∈ E(S) with s = et.
The natural partial order may be characterized in a number of alternative ways, including:
• there exists f ∈ E(S) with s = tf,
• s = ss⁻¹t,
• s = ts⁻¹s
(see [6, Proposition 5.2.1]). In this paper, we shall generalize the natural partial order by introducing a preorder ≤_N on S for any normal inverse subsemigroup N: the natural partial order then corresponds to the minimal normal inverse subsemigroup E(S), and at the other extreme, the preorder associated to S itself is the J-preorder. Symmetrizing the preorder ≤_N yields an equivalence relation ≃_N (the identity when N = E(S) and the J-relation when N = S). However, this relation need not be a congruence, and so the set of equivalence classes S/≃_N need not be an inverse semigroup.
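The listed characterizations are straightforward to verify concretely in a symmetric inverse monoid. The following sketch (ours, not from the paper) checks them on all restrictions of a map in I₃, representing partial injections as dicts and composing products left to right:

```python
# Toy check of the equivalent characterizations of the natural partial order
# in the symmetric inverse monoid I_3. Convention: (st)(x) = t(s(x)).
def product(s, t):
    return {a: t[s[a]] for a in s if s[a] in t}

def inverse(s):
    return {v: k for k, v in s.items()}

def restrictions(t):
    # all restrictions of t to subsets of its domain
    keys = list(t)
    for mask in range(1 << len(keys)):
        yield {k: t[k] for i, k in enumerate(keys) if mask >> i & 1}

t = {0: 1, 1: 2, 2: 0}
for s in restrictions(t):                      # each s satisfies s <= t
    e = product(s, inverse(s))                 # ss^{-1}: identity on dom(s)
    assert product(e, t) == s                  # s = et with e idempotent
    assert product(product(s, inverse(s)), t) == s   # s = ss^{-1} t
    assert product(t, product(inverse(s), s)) == s   # s = t s^{-1} s
print("all characterizations agree on restrictions of t")
```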
However, we may investigate ≃_N further by exploiting the relationship between inverse semigroups and ordered groupoids. An ordered groupoid is a small category in which every morphism is invertible, equipped with a partial order on morphisms. (The definition is recalled in detail in section 1.) An inverse semigroup can be considered as an ordered groupoid in which the identities form a semilattice, and from any such ordered groupoid a corresponding inverse semigroup can be constructed. In the study of inverse semigroups, it is often fruitful to extend the point of view to ordered groupoids, and this is a major theme of [8]. We show that the quotient set S/≃_N always inherits a natural ordered groupoid structure. Moreover, to any homomorphism φ : S → Σ of inverse semigroups, we associate its kernel K = {s ∈ S : sφ ∈ E(Σ)}, and φ then factorises as S → S/≃_K → Σ with the map S/≃_K → Σ a star-injective functor from the ordered groupoid S/≃_K to Σ (considered as an ordered groupoid). Now any congruence ρ on an inverse semigroup determines a normal inverse subsemigroup K, its kernel, which consists of all elements of S that are ρ-equivalent to idempotents. We compare the structures of the ordered groupoid S/≃_K and the quotient inverse semigroup S/ρ, and we show that if ≃_K is a congruence, then it is the minimal congruence with kernel K. We show how to classify congruences by the factorisation S → S/≃_K → S/ρ and look at certain congruences and their kernels associated with an inverse monoid presentation, and the relationships between them.
ORDERED GROUPOIDS AND INVERSE SEMIGROUPS
A groupoid G is a small category in which every morphism is invertible. We consider a groupoid as an algebraic structure following [5]: the elements are the morphisms, and composition is an associative partial binary operation. The set of identities in G is denoted E(G), and an element g ∈ G has domain gg⁻¹ and range g⁻¹g.
An ordered groupoid (G, ≤) is a groupoid G with a partial order ≤ satisfying the following axioms:
OG1 if g ≤ h then g⁻¹ ≤ h⁻¹,
OG2 if g₁ ≤ g₂ and h₁ ≤ h₂, and if the compositions g₁h₁ and g₂h₂ are defined, then g₁h₁ ≤ g₂h₂,
OG3 if g ∈ G and x is an identity of G with x ≤ gd, there exists a unique element (x|g), called the restriction of g to x, such that (x|g)(x|g)⁻¹ = x and (x|g) ≤ g.
As a consequence of [OG3] we also have:
OG3* if g ∈ G and y is an identity of G with y ≤ gr, there exists a unique element (g|y), called the corestriction of g to y, such that (g|y)⁻¹(g|y) = y and (g|y) ≤ g, since the corestriction of g to y may be defined as (y|g⁻¹)⁻¹.
Let G be an ordered groupoid and let a, b ∈ G. If a⁻¹a and bb⁻¹ have a greatest lower bound e ∈ E(G), then we may define the pseudoproduct of a and b in G as a ⊗ b = (a|e)(e|b), where the right-hand side is now a composition defined in G. As Lawson shows in Lemma 4.1.6 of [8], this is a partially defined associative operation on G.
If E(G) is a meet semilattice then G is called an inductive groupoid. The pseudoproduct is then everywhere defined and (G, ⊗) is an inverse semigroup. On the other hand, given an inverse semigroup S with semilattice of idempotents E(S), then S is a poset under the natural partial order, and the restriction of its multiplication to the partial composition s · t = st ∈ S, defined when s⁻¹s = tt⁻¹, gives S the structure of an ordered groupoid, with set of identities E(S). These constructions give an isomorphism between the categories of inverse semigroups and inductive groupoids: this is the Ehresmann-Schein-Nambooripad Theorem [8, Theorem 4.1.8]. We call a product st ∈ S with s⁻¹s = tt⁻¹ a trace product. Any product in S can be expressed as a trace product, at the expense of changing the factors, since st = (stt⁻¹) · (s⁻¹st).
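The identity st = (stt⁻¹)·(s⁻¹st) can be checked directly, using only that idempotents commute; first the domain-range condition that makes the right-hand side a trace product, then the product itself:

```latex
(stt^{-1})^{-1}(stt^{-1}) = tt^{-1}s^{-1}s\,tt^{-1} = (s^{-1}s)(tt^{-1})
 = s^{-1}s\,t\,t^{-1}s^{-1}s = (s^{-1}st)(s^{-1}st)^{-1},
\qquad
(stt^{-1})(s^{-1}st) = s\,(tt^{-1})(s^{-1}s)\,t = s\,(s^{-1}s)(tt^{-1})\,t = st .
```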
Let e ∈ E(G). Then the star of e in G is the set star_G(e) = {g ∈ G : gg⁻¹ = e}. A functor φ : G → H is said to be star-injective if, for each e ∈ E(G), the restriction φ : star_G(e) → star_H(eφ) is injective. A star-injective functor is also called an immersion. If G is inductive, then star_G(e) is just the Green R-class of e in the inverse semigroup (G, ⊗).
NORMAL INVERSE SUBSEMIGROUPS AND QUOTIENTS
An inverse subsemigroup N of an inverse semigroup S is normal [13] if it is full, that is, if E(N) = E(S), and if, for all s ∈ S and n ∈ N, we have s⁻¹ns ∈ N. A normal inverse subsemigroup N of S determines a relation ≤_N on S, defined using the natural partial order on S, as follows:

(2.1) s ≤_N t ⇐⇒ there exist a, b ∈ N such that a · s · b ≤ t.

Note the requirement that trace products occur here. We define the relation ≃_N by symmetrizing ≤_N:

(2.2) s ≃_N t ⇐⇒ there exist a, b, c, d ∈ N such that a · s · b ≤ t and c · t · d ≤ s,

and so s ≤_S t. (c) Since N is full, ss⁻¹, s⁻¹s ∈ N and s = ss⁻¹ · s · s⁻¹s ≤ t. (d) Suppose that a, b ∈ N with a · s · b ≤ e. Therefore a · s · b = f ≤ e with f ∈ E(S), and then s = a⁻¹a · s · bb⁻¹ = a⁻¹fb⁻¹ ∈ N. Conversely, if n ∈ N then nn⁻¹ · n · n⁻¹ = nn⁻¹ and so n ≤_N nn⁻¹. (e) We have a, b ∈ N with a · s · b ≤ s² and so s ≤ a⁻¹s²b⁻¹. Therefore s = ss⁻¹a⁻¹s²b⁻¹ and it follows that … (g) It is clear that ≤_N is reflexive. Suppose that s, t, u ∈ S and that s ≤_N t ≤_N u. There exist a, b, p, q ∈ N such that a · s · b ≤ t and p · t · q ≤ u. Then (pa)s(bq) ≤ u, and (pa)s(bq) is the trace product (pa) · s · (bq), since (pa)⁻¹(pa) = a⁻¹(p⁻¹p)a = a⁻¹a = ss⁻¹ (because p⁻¹p = tt⁻¹ and aa⁻¹ ≤ tt⁻¹). Similarly s⁻¹s = (bq)(bq)⁻¹. Therefore (pa) · s · (bq) ≤ u and s ≤_N u.
Corollary 2.2. The normal inverse subsemigroup N is determined by the preorder ≤_N, and we obtain an order-preserving embedding of the poset of normal inverse subsemigroups of S into the poset of preorders on S that contain the natural partial order.
Proof. Part (d) of Lemma 2.1 shows that N = {s ∈ S : there exists e ∈ E(S) with s ≤_N e}.

Remark 2.3. Not every preorder containing the natural partial order arises from a normal inverse subsemigroup. Consider the symmetric inverse monoid I_n and define a preorder ≼ by α ≼ β ⇐⇒ d(α) ⊆ d(β), where d(γ) is the domain of γ ∈ I_n. Then α ≼ id for all α ∈ I_n, and so the normal inverse subsemigroup associated to ≼ is I_n itself, but ≼ is not the J-preorder on I_n and so is not equal to ≤_{I_n}.
We denote the ≃_N-class of s ∈ S by [s]_N.

Proof. (a) If n ∈ N then n⁻¹ · n · n⁻¹ = n⁻¹ and n · n⁻¹ · n = n, and hence n ≃_N n⁻¹. Similarly nn⁻¹ · n · n⁻¹ = nn⁻¹ and nn⁻¹ · nn⁻¹ · n = n, whence n ≃_N nn⁻¹. (b) Suppose that s ∈ S and that for some n ∈ N we have s ≃_N n. By part (a) we may assume that n ∈ E(S): then there exist p, q ∈ N such that p · s · q ≤ n. Hence for some e ∈ E(S) we have p · s · q = e, and so s = p⁻¹ · e · q⁻¹ ∈ N. (c) This follows from parts (a) and (b).
In this example, the poset of J-classes is just a three-element chain and so is a semilattice.
Here the J-classes do not form a semilattice, and so ≃_T is not a congruence, and the quotient T/≃_T is not an inverse semigroup: it is the poset shown in the original article (diagram omitted). Following the notation in [1], we shall denote the quotient S/≃_N by S//N and let π : S → S//N be the quotient map. Our next result sets out the ordered groupoid structure on S//N: it is a special case of [1, Theorem 3.14], but the description is simpler for quotients of inverse semigroups and seems worth stating in detail.
Theorem 2.6. For any inverse semigroup S and normal inverse subsemigroup N, the quotient set S//N is an ordered groupoid, with the following structure: (a) the identities are the classes [e]_N where e ∈ E(S), and a class [s]_N has inverse [s⁻¹]_N, domain [ss⁻¹]_N and range [s⁻¹s]_N; …

Proof. It follows from part (d) of Proposition 2.4 that the domain and range of [s]_N and its inverse [s]⁻¹_N are well-defined. Suppose that s, t ∈ S and s⁻¹s ≃_N tt⁻¹, but that we choose another element z ∈ N with zz⁻¹ ≤ s⁻¹s and z⁻¹z = tt⁻¹. Then … and so for fixed s, t the ≃_N-class of the element sat does not depend on the choice of a. We denote this class by [s][t]. Suppose that s ≃_N s₁ and that we choose a₁ ∈ N to form s₁a₁t; then … Similarly, [s][t] does not depend on the choice of the element t within its ≃_N-class, and the product … Now the relation s⁻¹s ≃_N tt⁻¹ furnishes not only a ∈ N with aa⁻¹ ≤ s⁻¹s and a⁻¹a = tt⁻¹ but also p ∈ N with pp⁻¹ ≤ s⁻¹s and p⁻¹p = tt⁻¹. Consider saps⁻¹ ∈ N: we have … These products give a groupoid structure on S//N. Now by Lemma 2.1, ≤_N induces the given partial order on the ≃_N-classes, and it remains to show that this partial order makes S//N into an ordered groupoid; in checking the axioms we may as well assume that s₁ ≤ s and t₁ ≤ t.
To show that κ is a functor, suppose that … Therefore (sφ)(tφ) is a trace product defined in the inductive groupoid (Σ, ·) and κ is a functor.
To show that κ is star-injective, suppose that for some u, v ∈ S we have [uu⁻¹]_K = [vv⁻¹]_K and uφ = vφ. We claim that u ≃_K v. By symmetry, it is sufficient to show that u ≤_K v. Now by part (e) of Proposition 2.4 there exist … and so u⁻¹b⁻¹v ∈ K and u ≤_K v as required.
Corollary 2.8. The factorization of φ : S → Σ is unique, in the sense that if φ also factorizes as S → S//N → Σ with the second map ν a star-injective functor, then N = K (and hence ν = κ).

Proof. If n ∈ N then by part (a) of Proposition 2.4, we have n ≃_N nn⁻¹ and so nφ ∈ E(Σ). Hence N ⊆ K. Now if k ∈ K then kφ ∈ E(Σ), and since ν is star-injective, [k]_N is an identity in S//N and so, for some e ∈ E(S), we have k ≃_N e. Then there exist a, b ∈ N such that a · k · b ≤ e, and so a · k · b = f ∈ E(S). But a⁻¹a = kk⁻¹ and bb⁻¹ = k⁻¹k, so that … Hence K ⊆ N and so N = K.
We note that this factorisation of an inverse semigroup homomorphism requires the use of an intermediate ordered groupoid. We shall apply it to the study of congruences in section 3. For the further study of inverse semigroups, it is clearly of interest to know when we can form a quotient inverse semigroup S//N. Since an inverse semigroup is equivalent to an ordered groupoid in which the poset of identities is a semilattice, we have the following.

Example 2.10. In the symmetric inverse monoid I_n, let N be the subset of non-permutations together with the identity map id. Then N is a normal inverse subsemigroup. Since only id ∈ N can form trace products with permutations, ≃_N restricts to the identity on the symmetric group S_n ⊂ I_n. Moreover, for any id ≠ ν ∈ N and σ ∈ S_n we have ν⁻¹ · ν · σ|_{r(ν)} ≤ σ, so that ν ≤_N σ. On the elements of N, the relation ≃_N is equal to the D (and J) relation, and so for non-identity α, β ∈ N we have α ≃_N β ⇐⇒ |d(α)| = |d(β)|. It follows that I_n//S_n consists of the group S_n as a set of n! maximal elements, and a chain e_{n−1} > e_{n−2} > · · · > e₁ > e₀ of identities corresponding to the cardinalities of non-identity elements of N.
Example 2.11. Polycyclic and gauge monoids. Let A = {a₁, a₂, . . . , aₙ} with n ≥ 1. The polycyclic monoid P_n (introduced in [12]) is the inverse hull of A*: its underlying set is (A* × A*) ∪ {0}, and the multiplication of non-zero elements is given by …, with all other products equal to 0.

The semilattice of idempotents is {(w, w) : w ∈ A*} ∪ {0}, and the natural partial order between non-zero elements is given by (u, v) ≤ (s, t) if and only if u = ps, v = pt for some p ∈ A*.
Full inverse subsemigroups of P_n have the form Q ∪ {0}, where Q is a left congruence on A*: see [9, Theorem 3.3], with a change to left congruences required by our differing conventions. Meakin and Sapir [11] established the first correspondence of this kind, showing that the lattice of congruences on A* is isomorphic to the lattice of positively self-conjugate submonoids of P_n, where an inverse submonoid R is positively self-conjugate if (w, 1)R(1, w) ⊆ R for every w ∈ A*.

Lemma 2.12. N = Q ∪ {0} is a normal inverse subsemigroup of P_n if and only if Q is a right cancellative two-sided congruence on A*.

Proof. By [9, Theorem 3.3] Q must be a left congruence on A*. Suppose that (q₁, q₂) ∈ Q and that (h₁, h₂), (k₁, k₂) ≤ (w₁, w₂) ∈ P_n. Then hᵢ = uwᵢ and kᵢ = vwᵢ for i = 1, 2 and some u, v ∈ A*, where q₁ = uw₁ and q₂ = vw₁. Therefore N is normal if and only if, for all w₁, w₂ ∈ A*, we have that (uw₁, vw₁) ∈ Q implies that (uw₂, vw₂) ∈ Q, and this is equivalent to Q being a right cancellative two-sided congruence on A*.
The gauge inverse monoid G_n is defined by G_n = {(u, v) ∈ A* × A* : |u| = |v|} ∪ {0}. It was introduced in [7], and by Lemma 2.12 it is a normal inverse submonoid of P_n. By [7, Lemma 3.4] Green's relations D and J coincide in G_n, and clearly (s, t) and (u, v) are D-related in G_n if and only if |s| = |t| = |u| = |v|. Thus the non-zero J-classes are indexed by the non-negative integers and the J-order on them is trivial. The J-class of 0 is minimal and E(P_n)/J_{G_n} is a semilattice, and so P_n//G_n is an inverse semigroup. To identify it, we note that (u, v) ≤_{G_n} (s, t) if and only if there exist h, k ∈ A* such that |h| = |u|, |k| = |v| and (h, k) ≤ (s, t). It follows that h = ps and k = pt for some p ∈ A*, and so (u, v) ≤_{G_n} (s, t) ⇐⇒ there exists p ∈ A* such that |u| − |s| = |v| − |t| = |p| ≥ 0.
The relation ≃_{G_n} is then given by (u, v) ≃_{G_n} (s, t) ⇐⇒ |u| = |s| and |v| = |t|, and the ≃_{G_n}-classes of the non-zero elements in P_n are thus parametrized by pairs of non-negative integers. Now in P_n//G_n we have …, and so P_n//G_n is isomorphic to the Brandt semigroup on the set of non-negative integers.
CONGRUENCES AND KERNELS
Let ρ be a relation on an inverse semigroup S. Following [13], the trace tr(ρ) of ρ is its restriction to E(S), and the kernel ker ρ is the set ker ρ = {s ∈ S : s ρ e for some e ∈ E(S)}.
Proof. This follows from Lemma 2.1(c), Proposition 2.4(a) and (d).
Recall from [13] that a congruence ρ on the semilattice of idempotents E(S) of S is normal if, for all s ∈ S, e ρ f implies that s⁻¹es ρ s⁻¹fs. Then [13, Definition 4.2] a congruence pair (K, ν) on S consists of a normal inverse subsemigroup K of S and a normal congruence ν on E(S) such that
(3.1) if e ∈ E(S) and s ∈ S satisfy se ∈ K and s⁻¹s ν e, then s ∈ K;
(3.2) if u ∈ K then uu⁻¹ ν u⁻¹u.
For any congruence ρ, its kernel and trace form a congruence pair. Conversely, given a congruence pair (K, ν), the relation ρ_(K,ν) defined by

(3.3) s ρ_(K,ν) t ⇐⇒ st⁻¹ ∈ K and s⁻¹s ν t⁻¹t

is a congruence with kernel K and trace ν. This correspondence is the basis of the characterization of congruences in [13, Theorem 4.4]. The lattice of all congruences on both regular and inverse semigroups was earlier studied by Reilly and Scheiblich [15].
If ρ is a congruence on S, let ρ(s) be the class of s ∈ S and let ρ* : S → S/ρ be the quotient map, s ↦ ρ(s). Now S/ρ is an inverse semigroup and so is an inductive groupoid with its trace product. If K = ker ρ then ≃_K is a relation on S, and as in [1] we have the quotient map π : S → S//K, s ↦ [s]_K, where S//K is an ordered groupoid. Applying Corollary 2.7 to the homomorphism S → S/ρ we obtain:

Proposition 3.2. If K is the kernel of the congruence ρ on an inverse semigroup S then s ≃_K t implies that s ρ t, and the induced mapping κ : S//K → S/ρ carrying [s]_K ↦ ρ(s) is a surjective star-injective functor.
The converse of proposition 3.2 is the following. Then (N, ν) is a congruence pair, and the associated congruence ρ (N,ν) on S is that determined by the composition Proof. It is clear that ν is a normal congruence on E(S). Suppose that e ∈ E(S), s ∈ S, se ∈ N and (s −1 s)φ = eφ. Then and (se)φ ∈ E(Q) since se ∈ N . Hence sφ = sπν ∈ E(Q). Since ν is star-injective, then sπ ∈ E(S/ /N ) and so s ∈ N by part (c) of Proposition 2.4. Now if u ∈ N then uφ = uπψ ∈ E(Q) and so uφ = u −1 φ = (uu −1 )φ = (u −1 u)φ and therefore uu −1 ν u −1 u. This confirms that (N, ν) is a congruence pair.
Howie [6, Exercise 5.11.16] defines a full inverse subsemigroup N of an inverse semigroup S to have the kernel property if, whenever s, t ∈ S with st ∈ N and n ∈ N, then snt ∈ N. A full inverse subsemigroup with the kernel property is called normal in [4]. It is easy to see that an inverse subsemigroup with the kernel property is normal in the sense of [13] (the sense used in this paper), and that the kernel of any congruence has the kernel property. Moreover, an inverse subsemigroup with the kernel property is the kernel of its syntactic congruence, and so: … Hence, if ≃_N is a congruence, N must have the kernel property. Now if ρ is a congruence with kernel N we have, for s, t ∈ S, … and so ≃_N is minimal.
3.1. Idempotent separating congruences. A congruence ρ on an inverse semigroup S is idempotent separating if its trace is the identity relation on E(S). The classification of congruences by congruence pairs [13, Theorem 4.4] shows that an idempotent separating congruence is entirely determined by its kernel K, a normal inverse subsemigroup of S which, by (3.2), must also satisfy the property that for all a ∈ K, aa⁻¹ = a⁻¹a. (Hence K is a Clifford inverse semigroup.) The congruence ρ is then defined, according to (3.3), by:

(3.4) s ρ t ⇐⇒ st⁻¹ ∈ K and s⁻¹s = t⁻¹t.

Proposition 3.6. If ρ is an idempotent-separating congruence on S with kernel K then the relations ρ and ≃_K are equal, and so κ : S//K → S/ρ is an isomorphism of inverse semigroups.
Proof. If s ρ t then st⁻¹ ∈ K, and since s⁻¹s = t⁻¹t, we see that st⁻¹ is a trace product in (S, ·). Hence s · t⁻¹ · t is also a trace product in S, and st⁻¹t ≤ s. Since st⁻¹ ∈ K, this shows that [t]_K ≤ [s]_K. By symmetry, they are equal (or we can repeat the argument using ts⁻¹ ∈ K and ts⁻¹s ≤ t). So if ρ is idempotent-separating then s ρ t implies that s ≃_K t, and Proposition 3.2 gives the reverse implication.
Remark 3.7. The converse of this result is not true: see Example 3.9 below.
3.2. Closed inverse subsemigroups. For a subset A of an inverse semigroup S, we denote by A ↑ the smallest closed subset of S containing A. If A is an inverse subsemigroup of S, then so is A ↑ .
Let N be a closed inverse subsemigroup of S: so if n ∈ N and n ≤ s then s ∈ N. The relation a ≡_N b ⇐⇒ ab⁻¹ ∈ N is then an equivalence relation on the subset star_S(N) = {s ∈ S : ss⁻¹ ∈ N}, and the equivalence classes are the cosets of N. This notion of coset was introduced by Schein [16]. If N is normal, then star_S(N) = S and ≡_N is an equivalence relation on S, and it is easy to see that it is then also a congruence, with kernel N. If s, t ∈ S and there exists e ∈ E(S) with es = et, then est⁻¹e ∈ E(S) ⊆ N and, since N is closed, st⁻¹ ∈ N. It follows that ≡_N contains the minimal group congruence σ on S, and if σ* : S → S/σ then S/≡_N is isomorphic to the quotient group (S/σ)/Nσ*.
The relation ≃_N is finer than ≡_N: … If T is not a group, then ≡_T is not idempotent separating, and so the converse of Proposition 3.6 is not true in general.
INVERSE MONOID PRESENTATIONS
Let P = ⟨X : R⟩ be a presentation of the inverse monoid M. We assume that R consists of a set of pairs (ℓ, r) with ℓ, r ∈ FIM(X), the free inverse monoid on X. The pairs in R generate a congruence ρ_P on FIM(X), with M isomorphic to the quotient FIM(X)/ρ_P. We let π : FIM(X) → M denote the quotient map.
Let K(P) be the kernel of ρ_P. We note that K(P) is the image in FIM(X) of the idempotent problem (see [3]) of P in (X ∪ X⁻¹)*. By Theorem 3.4, K(P) is a full inverse subsemigroup of FIM(X) with the kernel property, and hence is normal.

Proof. Elements of N(P) are products of conjugates of elements of Q(R) and their inverses, and idempotents in FIM(X). Since each element of Q(R) is mapped by π to an idempotent of M, we therefore have N(P) ⊆ K(P).
Suppose now that u ρ_P v: then there exist u = u₀, u₁, . . . , u_{k−1}, u_k = v such that, for all i with 0 ≤ i ≤ k − 1, there exist pᵢ, qᵢ ∈ FIM(X) such that uᵢ = pᵢℓqᵢ and u_{i+1} = pᵢrqᵢ for some (ℓ, r) ∈ R, or vice versa. Assume the former: then … Hence u_{i+1} ≥ nᵢuᵢ and so, for some n ∈ N, we have v ≥ nu. Hence if for some e ∈ E(FIM(X)) we have e ρ_P v, that is if v ∈ K(P), then v ≥ ne with ne ∈ N and so v ∈ N↑.

Proof. Suppose that M is E-unitary, that v ≤ u in FIM(X) and that v ∈ K(P). Then vπ ∈ E(M) and since vπ ≤ uπ we have uπ ∈ E(M). By Lallement's Lemma [6, Lemma 2.4.3], there exists e ∈ E(FIM(X)) with eπ = uπ, and so u ∈ K(P).

Conversely, suppose that K = K(P) is closed. Then by Proposition 4.1, we have K = N(P)↑. As in section 3.2, the relation u ≡_K v ⇐⇒ uv⁻¹ ∈ K is a congruence on FIM(X) with kernel K, and the quotient FIM(X)/≡_K is isomorphic to the quotient group F(X)/N(P)σ*, where F(X) is the free group on X and σ* is the canonical map FIM(X) → F(X). This quotient group is the maximal group image of M. By Proposition 3. …

(a) Take M = I₂ with τ : 1 ↦ 2, 2 ↦ 1 and ε : 1 ↦ 1, 2 undefined. Let X = {t, e}, and let P be a presentation of I₂ with generating set X, with tπ = τ and eπ = ε. Since I₂ is not E-unitary, K is not closed.
Consider the free inverse monoid M on two commuting generators [10], presented by P = ⟨a, b : ab = ba⟩. Then baba⁻¹b⁻¹b⁻¹ ∈ N, and so u = baba⁻¹b⁻¹ and v = b lie in the same coset of N↑ in FIM(X), but u ≠ v in M. This is verified by mapping M → I₂ by a ↦ (1 ↦ 1, 2 undefined) and b ↦ (2 ↦ 2, 1 undefined).
Then u maps to 0 but v does not. We note that in this case, M is E-unitary [10, Proposition 2.4], and so K(P) = N↑.
Associations between traditional Chinese medicine body constitution and obesity risk among US adults
Background: Traditional Chinese medicine (TCM) body constitution (BC), primarily determined by physiological and clinical characteristics, is an important process for clinical diagnosis and treatment and plays a critical role in precision medicine in TCM. The purpose of this study was to explore whether the distributions of BC types differed by obesity status. Methods: We conducted a study to evaluate BC types in a US population during 2012-2016. A total of 191 White participants from the Personalized Prevention of Colorectal Cancer Trial (PPCCT) completed a self-administered Traditional Chinese Medicine Questionnaire (TCMQ, English version). In this study, we further compared the distribution of major types of TCM BC in the PPCCT to those in Chinese populations, stratified by obesity status. Results: We found the Blood-stasis frequency was higher in US White adults, 22.6% for individuals with BMI <30 and 11.2% for obese individuals, compared to 1.4% and 1.8%, respectively, in Chinese populations. We also found the percentages of Inherited-special and Qi-stagnation were higher in US White adults than those in Chinese populations regardless of obesity status. However, the proportions of Yang-deficiency were higher in Chinese populations than those in our study conducted in US White adults regardless of obesity status. Conclusions: These new findings indicate that the difference in distribution of BC types we observed between US and Chinese populations cannot be explained by differences in the prevalence of obesity. Further studies are needed to confirm our findings and understand the potential mechanisms, including genetic background and/or environmental factors.
Introduction
In recent decades, obesity has surged to epidemic proportions and has been a major contributor to the global burden of chronic noncommunicable diseases and the associated disability. According to the World Health Organization (WHO), in 2016 over 1.9 billion adults aged 18 years and older were overweight and 650 million adults were obese. Obesity is not only a condition of increased body weight but also a complex disorder that affects glucose, lipid, and protein metabolism. Furthermore, it is closely related to metabolic disorders such as insulin resistance (IR), type 2 diabetes mellitus (T2DM), and liver steatosis, leading to significant morbidity, mortality, and societal burden (1).
Body constitution (BC), the foundational concept of traditional Chinese medicine (TCM), classifies individuals into nine types: Balanced constitution, Qi-deficiency constitution, Yang-deficiency constitution, Yin-deficiency constitution, Phlegm-dampness constitution, Damp-heat constitution, Blood-stasis constitution, Qi-stagnation constitution and Inherited-special constitution (2). This classification is based on physical, psychological, and physiological characteristics, as well as susceptibility to illnesses and adaptability to the environment. These constitutions are dynamic and can be influenced by acquired factors including lifestyle and environment (3). BC is an important process for clinical diagnosis and treatment and plays a critical role in precision medicine and precision prevention.
However, TCM plays a very limited role in Western medical practice. Almost all previous studies using TCM BC have been conducted in Chinese populations, except for one study of 400 White college students attending three Beijing universities in China (4).
Very recently, in the first such study conducted in an American population (5), we found that the distribution of TCM BC types in a US White population differed from the distribution among Chinese individuals in China. TCM BC has previously been associated with obesity in China (6), and the US population has a higher prevalence of obesity than Chinese populations (7). Thus, it is possible that the difference in distribution of major types of TCM BC may be due to the higher obesity prevalence in our study population. In this study, we compared the distribution of major types of TCM BC between our study population and Chinese populations according to obesity status.
Study population
The participants in this study were from the Personalized Prevention of Colorectal Cancer Trial (PPCCT, NCT01105169 at ClinicalTrials.gov), which was a double-blind, placebo-controlled, randomized precision-based magnesium trial conducted between March 2011 and January 2016. Participants aged 40 to 85 years with a history of colorectal polyps or at high risk of colorectal cancer (CRC) were enrolled from Vanderbilt University Medical Center. Detailed inclusion and exclusion criteria have been reported (8,9). In the parent study, 250 participants enrolled and 239 completed the study, with 11 participants finishing part of the study before withdrawal. One participant completed questionnaires and provided samples at baseline and at the end of the trial after withdrawal. In total, 240 participants were included (shown in Table 1). The Traditional Chinese Medicine Questionnaire (TCMQ, English version: Health Questionnaire), as a component of the study, was approved by the IRB on May 31, 2012, a year after the trial started. A total of 191 participants (80% of 240) completed the self-administered TCMQ from May 31, 2012 to Jan 30, 2016 (5). The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013). The study was approved by the Vanderbilt Institutional Review Board (IRB) (No. 100106) and informed consent was obtained from all individual participants.
Constitution in TCMQ
TCMQ is an English version of the standardized Constitution in Chinese Medicine Questionnaire (CCMQ), which was developed by Wang et al. in China (10). As reported in our study (5), the construct validity of the TCMQ was confirmed, with scaling success rates that ranged from 75.0% to 100%, and confirmatory factor analysis indicated an acceptable model fit. Test-retest reliability (intra-class correlation coefficients) over a three-month period was 0.7-0.8, and the Cronbach's alphas ranged from 0.44 to 0.72. The TCMQ consists of 60 questions, and each item (question) was graded on a 5-point Likert scale, ranging from 1 (not at all) to 5 (very much). Each of the nine subscales within the TCMQ assessed one type of the TCM BC individually. A total score for each subscale was obtained by summing the relevant item scores, and a transformed score was created for each type by using the following equation.
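The equation itself is missing from this copy; the standard CCMQ transformation, which we assume is the one intended here, rescales each subscale to 0-100:

```latex
\text{transformed score} \;=\;
\frac{\text{raw score} - \text{number of items}}{\text{number of items} \times 4}
\times 100
```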
Following the criteria, a higher score on a specific TCMQ BC subscale indicates a higher likelihood of the corresponding BC type, with a score of 30 being the threshold for case definition. The following rule was applied to ultimately classify a participant within a specific TCM BC: (I) when the score for the Balanced subscale was greater than or equal to 60 and all other BC type scores were less than 30, the participant was diagnosed as "Balanced," the balanced BC; (II) when the score for an imbalanced BC type (any of the other eight BCs) was greater than or equal to 40, the participant was regarded as having one or more of the eight imbalanced BC types; and (III) when a BC type score was between 30 and 40, the diagnosis was made by a well-trained Chinese Medicine Practitioner, as sketched in the example below.
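As an illustration, the classification rule can be expressed in code. The sketch below is hypothetical (the function name, score keys and the review flag are ours, not from the questionnaire manual):

```python
def classify_bc(scores: dict) -> list:
    """Classify TCM body constitution from transformed subscale scores (0-100).

    `scores` maps each subscale name to its transformed score and must
    contain the key "Balanced" plus the eight imbalanced BC types.
    """
    imbalanced = {k: v for k, v in scores.items() if k != "Balanced"}

    # Rule I: Balanced >= 60 and every imbalanced score < 30 -> balanced BC.
    if scores["Balanced"] >= 60 and all(v < 30 for v in imbalanced.values()):
        return ["Balanced"]

    # Rule II: any imbalanced score >= 40 assigns that BC type
    # (a participant may belong to more than one type).
    types = [k for k, v in imbalanced.items() if v >= 40]

    # Rule III: scores in the 30-40 band require adjudication by a
    # trained practitioner; flag them for manual review.
    types += [f"review:{k}" for k, v in imbalanced.items() if 30 <= v < 40]
    return types
```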
Anthropometric measurements
Anthropometric measurements were taken for the participants during their clinic visits by trained personnel. Weight was taken on a digital scale and measured in kilograms.
When the participant was properly positioned and the digital readout was stable, the study nurse recorded the number on the screen. Standing height was measured in centimeters with a fixed stadiometer with a vertical backboard and a moveable headboard. Participants were asked to stand on the floor with the heels of both feet together and the toes pointed slightly outward at approximately a 60° angle. When the participant was properly positioned, the study nurse recorded the height. At least two measurements were required; a third measurement was made only if the difference between the first two exceeded the difference threshold (0.1 kg for weight and 1 cm for height). The means of two or three measurements (including weight, height, waist circumference, and hip circumference), taken 30 seconds apart, were used for the analysis. BMI was calculated for participants from measured height and weight as follows: weight (kilograms)/height (meters squared). BMI criteria were used to define weight categories: normal or desirable weight (BMI <25.0), overweight (BMI 25.0-29.9), and obese (BMI ≥30.0).
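A minimal sketch of the BMI computation and categorization described above (the function name and example values are illustrative):

```python
def bmi_category(weight_kg: float, height_cm: float) -> tuple:
    """Return (BMI, weight category) using the study's cut-offs."""
    height_m = height_cm / 100.0
    bmi = weight_kg / height_m ** 2  # kg/m^2
    if bmi < 25.0:
        category = "normal"
    elif bmi < 30.0:
        category = "overweight"
    else:
        category = "obese"
    return round(bmi, 1), category

print(bmi_category(82, 175))  # -> (26.8, 'overweight')
```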
Statistical analyses
Means ± standard deviations for continuous demographic variables and percentages for categorical demographic variables are presented in Table 1. We used general linear models for continuous variables and Pearson chi-squared tests or Fisher's exact tests for categorical variables. We compared the distribution of the nine TCM BC types by obesity status and population. All P values are two-sided and statistical significance was determined using an alpha level of 0.05. Data analyses were performed using SAS Enterprise Guide 7.1.
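The analyses were run in SAS; an equivalent sketch of the categorical tests in Python (using SciPy, with purely illustrative counts) would read:

```python
import numpy as np
from scipy import stats

# Hypothetical contingency table: rows = populations, columns = BC types;
# the counts below are illustrative only, not study data.
table = np.array([[43, 26, 20],
                  [15, 190, 180]])

chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"Pearson chi-squared: chi2={chi2:.2f}, dof={dof}, p={p:.3g}")

# Fisher's exact test for a sparse 2x2 table.
odds_ratio, p_fisher = stats.fisher_exact([[8, 2], [1, 5]])
print(f"Fisher's exact: p={p_fisher:.3g}")
```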
Results
We compared baseline demographic characteristics between the 191 participants who completed the TCMQ and all participants (n=240) in the parent study (Table 1). No significant differences in baseline characteristics were found between these two groups. Table 2 shows the distributions of the nine body constitutions by body weight; no difference was found between normal-weight, overweight, and obese participants.
We found that the Blood-stasis proportion was higher in US White adults, 22.6% for individuals with BMI <30 and 11.2% for obese individuals, compared with 1.4% and 1.8%, respectively, in Chinese populations (Table 3). The percentages of Inherited-special were also higher in US White adults, 12.8% for individuals with BMI <30 and 7.9% for obese individuals, versus 1.7% and 1.6%, respectively, in Chinese populations (11). Likewise, the proportions of Qi-stagnation were higher in US White adults, 4.9% for individuals with BMI <30 and 7.9% for obese individuals, versus 1.0% and 1.6%, respectively, in Chinese populations. Conversely, the proportions of Yang-deficiency were higher in Chinese populations, 18.8% for individuals with BMI <30 and 10.6% for obese individuals, versus 4.9% and 4.5%, respectively, in our study of US White adults (P1 <0.0001 among those with BMI <30 and P2 <0.0001 among obese adults).
In our recent report (5), the distribution of BC types varied greatly between our study conducted in US White adults and studies conducted in Chinese populations. Balanced (29.8%) was the predominant BC type in our study, while the three most common pathologic BC types were Blood-stasis (17.3%), Qi-deficiency (13.6%), and Inherited-special (10.5%).
In contrast, the most common pathologic BC types found in Chinese populations were Qi-deficiency, Yang-deficiency, Yin-deficiency, and Phlegm-dampness. In this study, the differences in these BC proportions between the US population and Chinese populations remained after accounting for obesity status.
Discussion
These new findings indicate that the difference in the distribution of BC types we observed between populations cannot be explained by differences in the prevalence of obesity. However, our finding is not conclusive. The different distribution between our US study population and those of previous studies may reflect selection biases. In our study, over 46% of the participants were obese, whereas only 10.1% of participants in studies conducted in China were obese; thus, the difference could be due to the different prevalence rates of obesity. In addition, our study participants had a personal history of colorectal polyps or were at high risk of CRC, so another possibility is that individuals diagnosed with colorectal polyps or at high risk of CRC have different BC types. Tao et al. recently reported that the observed differences in the distribution of Blood-stasis cannot be attributed to this selection bias, nor to different distributions of sex and age (12). Thus, it is possible that the observed differences are due to different genetic backgrounds or environmental factors, including different dietary patterns.
Conclusions
The dissimilarity in the distribution of BC types between the US population and Chinese populations cannot be solely attributed to variations in obesity prevalence. Additional research is necessary to validate and understand the potential reasons for the observed disparity.
Table 1
Baseline characteristics of participants with or without TCMQ: results from the PPCCT. Continuous variables are presented as mean ± SD; P values (comparing all participants with those who completed the TCMQ) were calculated using general linear models. Categorical variables are presented as n (%); P values were calculated using the chi-square test or Fisher's exact test. TCMQ, Traditional Chinese Medicine Questionnaire; PPCCT, Personalized Prevention of Colorectal Cancer Trial; BMI, body mass index; SBP, systolic blood pressure; DBP, diastolic blood pressure; HDL, high-density lipoproteins; LDL, low-density lipoproteins.
Table 2
Comparison of the distribution of body mass index categories among the TCMQ constitution groups: results from the PPCCT. Categorical variables are presented as n (%); P values were calculated using the chi-square test. TCM, traditional Chinese medicine; TCMQ, Traditional Chinese Medicine Questionnaire; PPCCT, Personalized Prevention of Colorectal Cancer Trial.
Table 3
Comparison of the distribution of body mass index categories among the TCMQ constitution groups. P1: comparison of US and Chinese populations among those with BMI <30; P2: comparison of US and Chinese populations among those with BMI ≥30. TCMQ, Traditional Chinese Medicine Questionnaire; PPCCT, Personalized Prevention of Colorectal Cancer Trial.
Single qudit realization of the Deutsch algorithm using superconducting many-level quantum circuits
Design of a large-scale quantum computer is of paramount importance for science and technology. We investigate a scheme for the realization of quantum algorithms using noncomposite quantum systems, i.e., systems without subsystems. In this framework, $n$ artificially allocated "subsystems" play the role of qubits in $n$-qubit quantum algorithms. Focusing on two-qubit quantum algorithms, we demonstrate a realization of the universal set of gates using a $d=5$ single qudit state. Manipulation of an ancillary level in the system allows effective implementation of operators from the ${\rm U}(4)$ group via operators from the ${\rm SU}(5)$ group. Using a possible experimental realization of such systems through anharmonic superconducting many-level quantum circuits, we present a blueprint for a single qudit realization of the Deutsch algorithm, which generalizes a previously studied realization based on the virtual spin representation [A.R. Kessel et al., Phys. Rev. A 66, 062322 (2002)].
Building a large-scale quantum computer is one of the most challenging goals of quantum information technologies [1][2][3][4][5][6]. This generation of computational devices has the potential to greatly outperform their classical counterparts [2][3][4][5][6]. Examples include searching an unsorted database [5] as well as the integer factorization and discrete logarithm problems [6], to name a few.
From a physical point of view, a quantum computer is an open quantum system with a large number of subsystems, which play the role of information units. These systems can be realized via a variety of physical platforms. Quantum states of a composite system are described by the density operator in an abstract Hilbert space that is a product of the Hilbert spaces of the physical subsystems,

$$\mathcal{H} = \mathcal{H}_1 \otimes \mathcal{H}_2 \otimes \cdots \otimes \mathcal{H}_n. \qquad (1)$$

A crucial requirement for such systems as platforms for quantum information processing is scalability with respect to the number of qubits [7]. Success in scaling these systems increases the number of subsystems, making the problem of achieving a suitable degree of control more and more challenging. However, the set of features required for quantum technologies is available not only in composite systems but in noncomposite systems as well [8][9][10][11][12][13][14]. A recent experimental study of photonic qutrit states demonstrated fundamentally non-classical behavior of noncomposite quantum systems [10]. The idea behind this result dates back to the Kochen-Specker theorem [15], which provides certain constraints on hidden variable theories that could be used to explain probability distributions of quantum measurement outcomes. The Hilbert space of a noncomposite system is arranged in the opposite way to (1); however, it is mathematically equivalent: it can be represented in the form (1), i.e., as a product of the Hilbert spaces of abstract subsystems. Investigations of information and entropic characteristics of noncomposite quantum systems [11][12][13][14] have confirmed the possibility of their application in quantum technologies. Furthermore, a potential gain from the use of noncomposite many-level quantum systems has been demonstrated in quantum coin-flipping and bit commitment [16], protocols for quantum key distribution [17][18][19][20], quantum information processing [8,9,[21][22][23][24]] and clock synchronization algorithms [25].
In the present work, we focus on the implementation of quantum algorithms via noncomposite quantum systems, with emphasis on their realization via anharmonic superconducting many-level quantum circuits using addressing of particular transitions. Our consideration is valid for an arbitrary realization of a many-level quantum system; however, we focus on many-level superconducting circuits due to the significant progress in their design [28][29][30][31][32].
A recent experiment with a superconducting four-level quantum circuit has explored "hidden" two-qubit dynamics [31]. It is therefore interesting to study the possibility of demonstrating a computational speed-up with single qudit realizations of oracle-based algorithms using superconducting many-level circuits.
Here, we consider a qudit state with d = 5, where four levels are used for storage of two-qubit quantum states and an ancillary fifth level is employed for effective realization of operators from the U(4) group via operators from the SU(5) group (see Fig. 1). We demonstrate that this trick makes it possible to construct the universal set of two-qubit quantum gates consisting of Hadamard, π/8 and controlled NOT gates [48].
The main emphasis of our work is on a single qudit realization of one of the first oracle-based quantum algorithms, the Deutsch algorithm [49]. Employment of the ancillary level is a novel feature compared to our previous study [50], where we considered a d = 4 qudit state and proposed a scheme only for the Hadamard gates from the universal set, as well as compared to the previously studied realization of the Deutsch algorithm [9]. The suggested single qudit realization of the Deutsch algorithm differs from that of Ref. [9], where the physical environment allowed arbitrary quantum gates to be applied without the use of ancillary levels.
Our paper is organized as follows. In Section II, we consider a correspondence between a qudit state with d = 5 and a two-qubit quantum system, and propose a scheme for the universal set of quantum gates for two-qubit algorithms using noncomposite quantum systems. Using the universal set of quantum gates, we present a single qudit realization of the Deutsch algorithm in Section III. We conclude and summarize our results in Section IV.
II. UNIVERSAL SET OF GATES
The composite representation of a noncomposite quantum d-level system with d > 2 corresponds to any possible mapping of its Hilbert space onto a tensor product of several Hilbert spaces corresponding to abstract subsystems.
In this paper, we consider the five-dimensional Hilbert space of an anharmonic superconducting many-level quantum circuit (see Fig. 1). The correspondence between the stationary energy states and the two-qubit logic basis can be presented as follows:

$$|0\rangle \leftrightarrow |00\rangle, \quad |1\rangle \leftrightarrow |01\rangle, \quad |2\rangle \leftrightarrow |10\rangle, \quad |3\rangle \leftrightarrow |11\rangle, \qquad (2)$$

with the fifth level $|4\rangle$ serving as the ancilla. This mapping resembles the virtual spin representation suggested in Ref. [8]. We assume that the population of the fifth level is negligible and keep it in consideration only for the implementation of quantum gates. Under these assumptions, the state of the system, written in the original basis, can be presented as a density matrix supported on the first four levels (3), while the states of the allocated "subsystems" A and B take the form of the corresponding reduced density matrices (4), where the matrices are written in their respective computational bases. We assume that our toolbox for system manipulation consists of applying $\theta$-pulses to the transition between an arbitrary pair of energy levels. In general, this can be done via coupling of a superconducting many-level quantum circuit to an external resonant field [28][29][30][31][32].
The corresponding elementary procedure is a rotation around the X-axis of the "Bloch sphere" of a particular two-dimensional Hilbert subspace:

$$R_X^{(jk)}(\theta) = \begin{pmatrix} \cos(\theta/2) & -i\sin(\theta/2) \\ -i\sin(\theta/2) & \cos(\theta/2) \end{pmatrix}^{(jk)} \oplus I_3^{(jk)}, \qquad (5)$$

where the matrix superscript $j, k \in \{0, 1, 2, 3, 4\}$ indicates that the matrix is written in the basis $\{|j\rangle, |k\rangle\}$, $\oplus$ stands for the direct sum, $I_n$ stands for the identity operator in the $n$-dimensional Hilbert space, and the superscript $(jk)$ on the identity indicates that it acts in the orthogonal complement $(\mathrm{Span}\{|j\rangle, |k\rangle\})^{\perp}$; thus $R_X^{(jk)}(\theta)$ acts in the whole original five-dimensional Hilbert space.
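A minimal NumPy sketch of this embedding may be useful; the helper below is our own illustration (not from the original paper) and builds the 5x5 operator that rotates the {|j>, |k>} subspace around the X-axis while acting as the identity elsewhere:

```python
import numpy as np

def rx_jk(theta: float, j: int, k: int, d: int = 5) -> np.ndarray:
    """Embedded rotation R_X^{(jk)}(theta) in a d-level Hilbert space:
    a qubit X-rotation on span{|j>, |k>}, identity on the complement."""
    U = np.eye(d, dtype=complex)
    c, s = np.cos(theta / 2), -1j * np.sin(theta / 2)
    U[j, j] = U[k, k] = c
    U[j, k] = U[k, j] = s
    return U

# Sanity checks: the operator is unitary and has unit determinant,
# i.e., it is an element of SU(5).
U = rx_jk(np.pi / 3, 1, 4)
assert np.allclose(U @ U.conj().T, np.eye(5))
assert np.isclose(np.linalg.det(U), 1.0)
```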
An appropriate sequence of rotations around the X-axis results in an effective rotation $R_Y^{(jk)}(\theta)$ around the Y-axis (6), where the $l$-th level involved in the sequence is one of $(jk)$. We note that (5) and (6) correspond to the SU(5) group of unitary operations with unit determinant. It is well known [48] that for two-qubit systems the universal set of gates consists of the one-qubit Hadamard and $\pi/8$ ($T$) gates together with the two-qubit CNOT gates; here, arrows in superscripts indicate which qubit is the control one in the operation. In our setup, we can implement the Hadamard gates on the particular qubits A and B as given in (9). In turn, the $\pi/8$ gates acting on a particular qubit have the form given in (10). It is easy to see that the determinant of the four-dimensional operators $T \otimes I_2$ and $I_2 \otimes T$ is equal to $i$, so they cannot be represented as a sequence of operators from SU(4). This fact is the crucial reason for the introduction of the ancillary level: in the case of the $\pi/8$ gates, it accumulates the phase $-i$ to obtain a unit determinant and makes it possible to realize the desired operation.
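The determinant obstruction and the ancillary-level fix are easy to verify numerically. The sketch below is our own check; placing the compensating phase on level |4> is an assumption consistent with the text:

```python
import numpy as np

T = np.diag([1, np.exp(1j * np.pi / 4)])  # pi/8 gate
T_A = np.kron(T, np.eye(2))               # T acting on qubit A

print(np.linalg.det(T_A))                 # = i, hence T_A is not in SU(4)

# Embed into the d = 5 space with a compensating phase -i on the
# ancillary level |4>, giving a unit determinant (an SU(5) element).
U5 = np.zeros((5, 5), dtype=complex)
U5[:4, :4] = T_A
U5[4, 4] = np.exp(-1j * np.pi / 2)
print(np.linalg.det(U5))                  # ~ 1
```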
Finally, the CNOT gates can be implemented as in (11), where again we have to address the ancillary level to accumulate an additional phase.
The full form of the resulting operators is presented in Table I.
III. DEUTSCH ALGORITHM
Let us consider a "black box" which implements a Boolean function $f$ of one argument. In fact, there are only four possible variants of such a function:

$$f_1(x) = 0, \quad f_2(x) = 1, \quad f_3(x) = x, \quad f_4(x) = 1 \oplus x. \qquad (12)$$

We note that $f_1$ and $f_2$ return the same value for all possible inputs and are called unbalanced functions, while $f_3$ and $f_4$ return the value 1 for half of the inputs and 0 for the other half, and are called balanced. The task is to determine whether the unknown function $f$ is balanced or not with the minimal number of queries to the black box. Clearly, in the classical domain one needs at least two queries to cope with this task. In the quantum domain, the same problem is formulated using a set of two-qubit quantum gates $\{U_i\}_{i=1}^{4}$, where each gate performs the following operation:

$$U_i\, |x\rangle \otimes |y\rangle = |x\rangle \otimes |y \oplus f_i(x)\rangle, \quad x, y \in \{0, 1\}. \qquad (13)$$

The problem is to determine whether, for the given gate $U_j$, the corresponding function $f_j$ is balanced or not.

FIG. 2. In (b), a corresponding sequence of $\theta$-pulses for an anharmonic superconducting many-level quantum circuit used as a platform for the qudit; blue dashed arrows denote the corresponding sequence from (16). The answer to the original question about the type of the function $f_j$ can be obtained using a coarse-grained measurement of the energy level: one needs to know whether it is higher than the energy of $|1\rangle$ or not. In (c), a proposal for the readout scheme based on variation of the potential and checking whether the tunneling effect takes place; a similar readout scheme for an anharmonic four-level superconducting quantum circuit has been used in Ref. [31].
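For concreteness, the four oracles $U_i$ defined by (13) can be written down explicitly; the small NumPy sketch below is our own construction following that definition:

```python
import numpy as np

def deutsch_oracle(f) -> np.ndarray:
    """Build the 4x4 permutation matrix U|x>|y> = |x>|y XOR f(x)>,
    with the two-qubit basis ordered as |xy> -> index 2*x + y."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1.0
    return U

fs = [lambda x: 0, lambda x: 1,        # unbalanced f1, f2
      lambda x: x, lambda x: 1 - x]    # balanced f3, f4
oracles = [deutsch_oracle(f) for f in fs]
```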
The Deutsch algorithm [49] copes with this problem with only one query. It turns out that the following sequence of quantum gates gives

$$(H \otimes H)\, U_j\, (H \otimes H)\, |0\rangle \otimes |1\rangle = \pm\, |f_j(0) \oplus f_j(1)\rangle \otimes |1\rangle, \qquad (14)$$

where the intermediate states can be expressed via the notation $|\pm\rangle = (|0\rangle \pm |1\rangle)/\sqrt{2}$ (15). One can see that the resulting value of the first qubit contains the answer to the task: it is 1 for balanced functions and 0 otherwise. In the framework of the considered two-qubit mapping (2), the counterparts of the gates (13) can be realized as sequences of $\theta$-pulses (16). It can be directly checked that the action of $U_j$ given by (16) on the density matrix in the form (3) after the mapping (2) gives the same result as the action of $U_j$ (13) on the corresponding two-qubit density matrix.
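Building on the oracle sketch above, a single pass of the algorithm can be checked directly in the two-qubit picture; the code below is an illustrative simulation, not the pulse-level implementation of (16):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H2 = np.kron(H, H)  # Hadamard on both qubits

psi_in = np.zeros(4)
psi_in[1] = 1.0     # input state |0>|1> (index 2*0 + 1)

for name, U in zip(["f1", "f2", "f3", "f4"], oracles):  # from the previous sketch
    psi_out = H2 @ (U @ (H2 @ psi_in))
    p_first_is_1 = np.sum(np.abs(psi_out[2:]) ** 2)  # P(first qubit = 1)
    print(name, "balanced" if p_first_is_1 > 0.5 else "unbalanced")
```

A single application of each oracle suffices: the loop prints "unbalanced" for f1 and f2 and "balanced" for f3 and f4.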
Depending on the particular transformation (16), one obtains one of the final states (17), with the full set of outcomes given in (18), where $H^{(AB)}$ stands for Hadamard gates acting on both subsystems A and B; this operation can be implemented as an appropriate sequence of the rotations (5). A complete scheme for a single qudit realization of the Deutsch algorithm is presented in Fig. 2.
By considering the set of states in (18), one can conclude that the answer to the original question of whether the function $f_j$ is balanced or not can be obtained by a coarse-grained measurement of the energy level. Indeed, one needs to know whether it is higher than the energy of $|1\rangle$ or not. This can be performed by varying the potential and checking whether the tunneling effect takes place (Fig. 2c). A similar experimental setup for the readout scheme has been used in Ref. [31].
IV. CONCLUSION
In the present Letter, we used the correspondence between d = 5 qudit states and "two-qubit" quantum systems with an ancillary level, given by (2), to present single qudit schemes for the universal set of two-qubit gates: Hadamard (9), π/8 (10), and CNOT gates (11). In our framework, the ancillary fifth level of the system allowed us to implement operators from the U(4) group via operators from the SU(5) group.
We suggested a scheme for a single d = 5 qudit realization of the Deutsch algorithm using an anharmonic superconducting many-level quantum circuit as a platform, with θ-pulses applied to the transition between an arbitrary pair of energy levels as the basic operation for realizing gates. In our scheme, a standard readout method based on variation of the potential can be employed. It would be interesting to study possible realizations of other classes of quantum algorithms and to investigate the potential gain from using noncomposite quantum systems.
Effects of empagliflozin on erythropoiesis in patients with type 2 diabetes: Data from a randomized, placebo‐controlled study
Sodium-glucose cotransporter-2 (SGLT2) inhibitors have been shown to significantly reduce hospitalization for heart failure (HHF) and cardiovascular (CV) mortality in various CV outcome trials in patients with and without type 2 diabetes mellitus (T2D). SGLT2 inhibition further increased haemoglobin and haematocrit levels by an as yet unknown mechanism, and this increase has been shown to be an independent predictor of the CV benefit of these agents, for example, in the EMPA-REG OUTCOME trial. The present analysis of the EMPA haemodynamic study examined the early and delayed effects of empagliflozin treatment on haemoglobin and haematocrit levels, in addition to measures of erythropoiesis and iron metabolism, to better understand the underlying mechanisms. In this prospective, placebo-controlled, double-blind, randomized, two-arm parallel, interventional and exploratory study, 44 patients with T2D were randomized into two groups and received empagliflozin 10 mg or placebo for a period of 3 months in addition to their concomitant medication. Blood and urine were collected at baseline, on Day 1, on Day 3 and after 3 months of treatment to investigate effects on haematological variables, erythropoietin concentrations and indices of iron stores. Baseline characteristics were comparable in the empagliflozin (n = 20) and placebo (n = 22) groups. Empagliflozin led to a significant increase in urinary glucose excretion (baseline: 7.3 ± 22.7 g/24 h; Day 1: 48.4 ± 34.7 g/24 h; P < 0.001) as well as urinary volume (baseline: 1740 ± 601 mL/24 h; Day 1: 2112 ± 837 mL/24 h; P = 0.011) already after 1 day and throughout the 3-month study period, while haematocrit and haemoglobin were only increased after 3 months of treatment (haematocrit: baseline: 40.6% ± 4.6%; Month 3: 42.2% ± 4.8%, P < 0.001; haemoglobin: baseline: 136 ± 19 g/L; Month 3: 142 ± 25 g/L; P = 0.008). In addition, after 3 months, empagliflozin further increased red blood cell count (P < 0.001) and transferrin concentrations (P = 0.063) and there was a trend toward increased erythropoietin levels (P = 0.117), while ferritin (P = 0.017), total iron (P = 0.053) and transferrin saturation levels (P = 0.030) decreased. Interestingly, the increase in urinary glucose excretion significantly correlated with the induction of erythropoietin in empagliflozin-treated patients at the 3-month timepoint (Spearman rho 0.64; P = 0.008). Empagliflozin increased haemoglobin concentrations and haematocrit with a delayed time kinetic, which was most likely attributable to increased erythropoiesis with augmented iron utilization and not haemoconcentration. This might be attributable to reduced tubular glucose reabsorption in response to SGLT2 inhibition, possibly resulting in diminished cellular stress as a mechanism for increased renal erythropoietin secretion.
Cardiovascular outcome trials (CVOTs) with SGLT2 inhibitors (EMPA-REG OUTCOME with empagliflozin,1 the CANVAS programme2 and CREDENCE3 with canagliflozin, and DECLARE with dapagliflozin4) demonstrated a reduction in cardiovascular (CV) events as well as a reduction in hospitalization for heart failure (HHF) in patients with diabetes.1 In addition, empagliflozin and dapagliflozin were recently found to reduce heart failure (HF) events in patients with HF with reduced ejection fraction, with and without diabetes.5,6 The underlying mechanisms of these beneficial effects of SGLT2 inhibitors on HF-related events remain unclear. Additionally, all CVOTs showed an increase in haematocrit and haemoglobin levels in response to SGLT2 inhibition, and these changes appeared to best predict CV death reduction in a mediation analysis of the EMPA-REG OUTCOME trial.3 The increase in haemoglobin and haematocrit concentrations seems attributable to an increase in erythropoietin (EPO) levels and stimulated erythropoiesis.7
Several hypotheses have been raised to explain this effect on EPO production, including an increase in renal tissue oxygen delivery by activation of tubuloglomerular feedback through enhanced sodium detection at the distal renal tubules by the juxtaglomerular apparatus, or direct stimulation of EPO by β-hydroxybutyrate.8 In addition, SGLT2 inhibitor-induced glucosuria could reduce ATP consumption by the Na/K pump in proximal tubular epithelial cells, thus leading to improvement in hypoxia and inflammation in the microenvironment around the proximal tubules, with subsequent reversion of myofibroblasts to EPO-producing fibroblasts.9 However, the exact mechanisms contributing to the increase in EPO resulting from SGLT2 inhibitor treatment remain unexplored; therefore, the present post hoc analysis of the EMPA haemodynamics study (EudraCT Number: 2016-000172-19),10 a prospective, placebo-controlled, double-blind, randomized, two-arm parallel, exploratory study in patients with T2DM, examined the different potential mechanisms.
METHODS
In the present study, performed at the University of Aachen, 44 patients with T2D were randomized to receive empagliflozin 10 mg or placebo for 3 months in addition to their concomitant medication (Figure S1). The primary endpoint was the effect of empagliflozin on haemodynamic characteristics; as secondary endpoints, we examined effects on haematological variables, erythropoietin concentrations and indices of iron stores. Values are presented as mean ± SD for normally distributed data and median and interquartile range for non-normally distributed data; P values at baseline were calculated using t-tests and P values for the intervention effect at Day 1, Day 3 and Month 3 were calculated using the Wald method. Correlations between changes from baseline at different timepoints were calculated for selected variables using the Spearman rho correlation coefficient.
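For the exploratory correlations, an equivalent computation in Python (SciPy) would look as follows; the paired changes below are purely illustrative values, not study data:

```python
from scipy.stats import spearmanr

# Hypothetical paired changes from baseline at Month 3.
delta_glucosuria = [35.1, 42.0, 28.5, 51.3, 39.8, 44.2]  # g/24 h
delta_epo = [2.1, 3.4, 1.2, 4.0, 2.8, 3.1]               # change in EPO

rho, p = spearmanr(delta_glucosuria, delta_epo)
print(f"Spearman rho = {rho:.2f}, P = {p:.3f}")
```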
RESULTS
Empagliflozin treatment in the present study had no significant effect on haemodynamic variables after 1 or 3 days, nor after 3 months, but led to a rapid and sustained significant improvement in diastolic function.5 As expected, empagliflozin treatment significantly increased urinary glucose excretion already after 1 day, from 7.3 ± 22.7 to 48.4 ± 34.7 g/24 h (P < 0.001). Urinary volume expanded significantly in parallel with glucosuria after Day 1, from 1740 ± 601 to 2112 ± 837 mL/24 h (P = 0.011), and remained significantly increased after 3 months of treatment (2319 ± 873 mL/24 h; P = 0.001) compared to placebo (Figures S3 and S4). In addition, empagliflozin treatment led to a significant increase in haematocrit and haemoglobin levels after 3 months (haematocrit: baseline: 40.6% ± 4.6%; Month 3: 42.2% ± 4.8%; P < 0.001; haemoglobin: baseline: 136 ± 19 g/L; Month 3: 142 ± 25 g/L; P = 0.008). Empagliflozin increased red blood cell count (P < 0.001), erythropoietin levels (P = 0.117) and transferrin concentrations (P = 0.063), while decreasing ferritin (P = 0.017), total iron (P = 0.053) and transferrin saturation (P = 0.030), only after 3 months of treatment (Figure 1A). To further explore the potential mechanisms involved in empagliflozin-induced EPO production, we performed additional analyses (Figure S5). As shown in Figure 1B, the increase in urinary glucose excretion significantly correlated with the induction of erythropoietin in empagliflozin-treated patients at the 3-month timepoint (Spearman rho 0.64; P = 0.008).
DISCUSSION
In this randomized, placebo-controlled, double-blind study in patients with T2D and prevalent atherosclerotic CVD or high CV risk, resembling the populations studied in CVOTs with SGLT2 inhibitors, empagliflozin led to a significant increase in haematocrit and haemoglobin levels after 3 months, an effect not yet present after 1 or 3 days of treatment, suggesting that haemoconcentration might not be the main mechanism leading to the increase in these variables.

FIGURE 1. A, Comparison of laboratory values and 24-hour urine during the study. Values are mean ± SD for normally distributed data and median and interquartile range for non-normally distributed data; P values at baseline were calculated using t-tests; P values for the intervention effect on Day 1, Day 3 and Month 3 were calculated using the Wald method. B, Correlation between changes in erythropoietin (EPO) and changes in glucosuria, sodium excretion and β-hydroxybutyrate at different timepoints. Correlations between changes from baseline at different timepoints were calculated for selected variables using the Spearman rho correlation coefficient. C, Correlation of changes in natriuresis, β-hydroxybutyrate and glucosuria with changes in EPO levels after 3 days (upper panel) and 3 months of empagliflozin treatment (lower panel).
However, others have reported that SGLT2 inhibition simultaneously reduces extracellular and plasma volume;11 consequently, we cannot exclude a more delayed reduction in blood volume contributing to the observed increase in haematocrit at the 3-month timepoint in our study.11 Interestingly, the relevance of volume unloading for the therapeutic efficacy of SGLT2 inhibition was recently challenged by a post hoc analysis of the EMPEROR-Reduced trial, which showed comparable therapeutic efficacy of empagliflozin in patients with and without recent volume overload.12 As an alternative mechanism, we detected stimulated erythropoiesis with augmented iron utilization, as shown by increased erythropoietin levels and corresponding changes in iron measures. This suggests that the increase in EPO levels might be attributable to reduced tubular glucose reabsorption in response to SGLT2 inhibition, possibly resulting in diminished cellular stress, as a potential mechanism for increased renal erythropoietin secretion. However, this observation does not imply causality, and we cannot exclude the possibility that other factors directly or indirectly regulated by SGLT2 inhibition are relevant in this context. These might include modulation of metabolic pathways favouring a fasting-like response, with induction of ketone bodies and catabolism of free fatty acids and branched-chain amino acids.13 Further, modulation of erythropoiesis might relate to inhibition of inflammatory pathways as a potential mechanism for improved EPO production.[14][15][16][17] The present analysis has certain limitations: our data provide an exploratory finding in a limited number of patients and warrant confirmation in a larger study, with changes in haematological variables and erythropoiesis defined as primary outcomes. Furthermore, 81% of study participants were male, precluding sex-specific analysis of treatment response to SGLT2 inhibition. In addition, the study included only a few patients with chronic kidney disease, a population known to exhibit impaired erythropoietin production. Additional studies are warranted to extend our findings to these populations. Nevertheless, the present study was randomized, blinded and placebo-controlled, and changes in blood and urine levels were predefined exploratory endpoints.
In summary, our data showing a strong correlation between empagliflozin-induced glucosuria and the increase in erythropoietin levels in T2DM bolster the hypothesis that SGLT2 inhibitors could limit metabolic and oxidative stress in the microenvironment around the proximal tubules with subsequent reversion of myofibroblasts to EPO-producing fibroblasts. This interpretation is limited, however, by the small sample size of the present study and additional investigation of the kidney microenvironment in response to SGLT2 inhibition is required.
A Global Perspective on Milestones of Care for Children with Sickle Cell Disease
Sickle cell disease (SCD) is one of the most common severe and monogenic disorders worldwide. Acute and chronic complications deeply impact the health of children with SCD. Milestones of treatment include newborn screening, comprehensive care and prevention of cerebrovascular complications.
Introduction
Sickle cell disease (SCD) is one of the most common severe monogenic disorders worldwide, with an average of 300,000 children born annually with sickle syndromes, the majority in Africa [1,2]. SCD was initially endemic in areas of malaria (Africa, Southern India, Mediterranean countries, Southern Asia), but various waves of migration brought populations from areas of high prevalence of the HbS gene to the Americas and Europe (Figure 1). Moreover, the migration movements of the past decade have further increased the frequency of SCD in areas where it was generally uncommon. In Europe, SCD has become the paradigm of immigration hematology [3] and is now the most prevalent genetic disease in France [4] and the United Kingdom [5]; its frequency is steadily rising in many other countries of northern, central and southern Europe [6][7][8][9][10], posing a challenge to health systems. In addition, awareness regarding SCD is increasing in India [11] and in many African countries [12] where the prevalence of the disease is high. Although in low-resource settings a great effort in terms of funding, care and research is still mainly destined to infectious diseases, the burden SCD poses on mortality and health systems in Africa is finally starting to be recognized [13][14][15][16]. Several African countries have developed dedicated services for children with SCD [17][18][19][20], including newborn screening [21][22][23][24][25][26]. Patients with SCD in many centers are being evaluated in a standardized comprehensive manner, both in prospective observational cohorts [17,19,27] and in randomized clinical trials [28,29]. Although some experiences are still conceived as pilot programs and have yet to be scaled up at a national level, their results are promising and demonstrate increased commitment to tackle SCD at a global level.

Figure 1. (a) Red dots represent the presence and blue dots the absence of the HbS gene; the regional subdivisions were informed by Weatherall and Clegg and are as follows: the Americas (light gray); Africa, including the western part of Saudi Arabia, and Europe (medium gray); and Asia (dark gray). (b) Raster map of HbS allele frequency (posterior median) generated by a Bayesian model-based geostatistical framework; the Jenks optimized classification method was used to define the classes. (c) The historical map of malaria endemicity was digitized from its source using the method outlined in Hay et al. The classes are defined by parasite rates (PR2−10, the proportion of 2- up to 10-year-olds with the parasite in their peripheral blood): malaria-free, PR2−10 = 0; epidemic, PR2−10 = 0; hypoendemic, PR2−10 < 0.10; mesoendemic, PR2−10 ≥ 0.10.

SCD can be defined as a globalized disease, and its presence in such ethnically diverse populations, living in extremely variable environments and in very different socio-cultural societies, is a factor that must be taken into consideration when addressing its management. In fact, although SCD is a monogenic disorder, its phenotype can be highly variable, not only among individuals, but also among ethnic groups and populations [30,31].
In this chapter, we review the management of children with SCD from a global perspective, focusing on three milestones of care: newborn screening, comprehensive care, and management of cerebrovascular complications.
Neonatal screening programs
Newborn screening programs for SCD allow the early identification of patients, with the advantage of starting penicillin prophylaxis at two months of age, significantly reducing mortality from infections. Moreover, newborn screening allows the early enrollment of patients in specialized programs of care at reference centers, thereby reducing morbidity and subsequent mortality from acute and chronic complications and improving quality of life.
The first screening program for SCD was introduced in the USA in 1975 [32] and in the UK in 1993 [33].
In 1987, an NIH Consensus Conference stated that every child should be screened at birth for HbS to prevent the severe childhood complications of SCD, mainly infections and splenic sequestration, both potentially fatal [34]. Subsequently, a randomized study demonstrated the effectiveness of neonatal screening in dramatically reducing infant mortality from infection by allowing early initiation of penicillin prophylaxis [35].
International guidelines on the treatment of SCD recommend universal newborn screening on a national basis, ideally integrated with existing neonatal screening programs and with programs of comprehensive care at specialized hematology reference centers [5,36].
The recommendation is that newborn screening for the identification of SCD be performed for all newborns. All patients must be identified promptly and taken into care by dedicated and specialized services in order to begin penicillin prophylaxis within two months of age (Strength of Recommendation A).
Since the late 1980s, there have been numerous newborn screening programs, many organized by national health systems, others confined to single-center experiences or supported by private funding.
Neonatal screening for SCD: major international experiences
United States—In the United States, the first neonatal screening programs for SCD date back to the 1970s (State of New York and Columbia), but after the publication of the NIH recommendations, all states organized universal neonatal screening for the S gene, associated with neonatal screening for other diseases. The analysis is performed at birth on capillary blood taken by heel prick onto Guthrie paper, in most cases by high-performance liquid chromatography (HPLC).
The results of 20 years of the program show an average incidence of the S gene in the general population of 1:64 (1.5%) and an average incidence of SCD of 1:2000 (0.05%) [37]. The program in the United States was effective in significantly reducing the mortality of children with SCD [38].
Canada—In 1988, a targeted screening pilot program was started at the University of Montreal for babies with at least one African parent. The test, performed by HPLC, identified 10% of infants with sickle cell trait and 0.8% who were affected. A significant number of patients were missed by the program, as many as 11 patients born in the reporting period: 5 of 72 affected infants escaped enrollment in a care program, six infants were not identified as affected, and three false negatives and three inadequate samples were identified, stressing the importance of absolute rigor in the organization of a screening program [39].
In 2006, a universal neonatal screening program for SCD on a national basis was initiated in Ontario and subsequently implemented in eight other Canadian provinces, out of 10 provinces and three territories. The assay is performed on cord blood or capillary blood on filter paper by HPLC, using isoelectric focusing (IEF) or hemoglobin electrophoresis as a confirmatory test. A debate is ongoing on whether to inform parents of carrier status [36].
In Brazil, many states have organized neonatal screening programs for the identification of patients with SCD. Since 2001, a universal neonatal screening program has been active in the State of Rio de Janeiro, funded by the National Health System; it includes HPLC analysis of the Guthrie test sample, performed after discharge in association with the first vaccine administration, and provides for the subsequent care of patients with SCD at the Reference Center.
The results of the first 10 years of experience (2001-2011) showed an SCD incidence of 1:1335 births and a trait incidence of about 5% of births [40,41]. Mortality was 3.7%, significantly lower than the 25% mortality of a cohort of Brazilian children not included in a screening program [42], but also significantly lower than that of a population of children undergoing neonatal screening but not incorporated in a comprehensive follow-up program, in which mortality was 5.6% [43]. The importance of integrating neonatal screening into an effective program of patient care at a specialized reference center has more recently been confirmed by another Brazilian study, which reported in Minas Gerais State a mortality of 7.5% among patients with SCD in the first 14 years of life, even though they had undergone newborn screening, because of a non-effective care program [44].
In Europe, although there is strong evidence that hemoglobinopathies are an increasingly important public health problem [3] as a result of recent migration flows from the Mediterranean countries, Africa and Asia, there are very few data regarding the overall prevalence of SCD, and government health policy regarding the management of SCD is uneven across nations. The European Network for Rare and Congenital Anaemias (ENERCA) estimates that there are around 44,000 people in Europe suffering from hemoglobinopathies, 70% of whom have SCD, and strongly recommends that national health systems develop screening programs and specialized reference centers for the care of patients and their families [6].
The United Kingdom (UK) was the first European country to organize, in 1993, a universal neonatal screening program for SCD. The initial pilot program, which began in England, was updated and since 2010 has been extended to the whole of Britain. The program, supported by the National Health Service (NHS), provides universal neonatal screening performed on the Guthrie test concurrently with other screenings. Samples are analyzed centrally by HPLC at 13 reference hematology laboratories, each screening between 25,000 and 100,000 newborns a year, with a minimum of 25,000 tests per laboratory per year. The incidence of carriers in the UK averages 15/1000 (1.5%), and that of affected newborns 1:1900 (0.05%), with significant variations by region and ethnicity [45].
A national program of universal newborn screening, performed on Guthrie cards by HPLC [46], has been active in the Netherlands since 2007. A debate on whether to notify carrier status, to avoid stigmatization, is currently underway [10].
In Belgium, since 1994 in the city of Brussels and since 2004 in the city of Liège, all newborns have been subjected to universal screening for SCD. The analysis of umbilical cord blood is performed by IEF, with HPLC as a possible confirmatory test. The frequency of affected newborns is 1:1559 [47].
In Spain, universal neonatal screening programs have been initiated since 2000 in Extremadura, the Basque Country, Madrid, Valencia and Catalonia, with plans to extend screening to the whole country from 2016. The prevalence of affected newborns varies from 1:3900 in Catalonia to 1:5900 in the region of Madrid [8,48,49].
In Germany, pilot programs of universal newborn screening have been organized since 2011, first in Berlin, then in Heidelberg and the Southeast Region of Germany, and then in Hamburg. The tests were offered to all newborns even though the native population was not at risk of hemoglobinopathy; the goal was to provide information about the overall prevalence in Germany of a disease that has high prevalence in immigrant populations coming mainly from at-risk areas. The test was carried out by PCR for the S chain from Guthrie paper in Hamburg, and by HPLC in the other programs; the incidence of affected newborns ranged from 1:2385 to 1:8348 [9].
The results of the pilot studies were considered adequate to justify a universal neonatal screening program on a national basis, extended to the whole of Germany, with activation planned for 2016. Carrier status is not communicated, for fear of stigma.
Since 1985, France has organized a universal neonatal screening program for SCD in Guadeloupe; in the following years, many pilot studies were initiated in France, and since 2000 a national screening program targeted at infants at risk of hemoglobinopathy has been extended to the entire country, with selection based on ethnic origin. Although the program is not universal, it appears to be effective in intercepting almost all affected infants, ensuring their enrollment in care at the Reference Centres [50].
In Italy, some experiences of neonatal screening for SCD have been reported. One screening program was universal, run on Guthrie cards by HPLC, but was suspended for lack of funding [51].
In 2013 in Novara, a newborn screening project targeted at babies with a parent coming from areas at risk of hemoglobinopathy was implemented. A total of 337 of 2447 newborns were tested and 20 carriers were identified (6%) [52].
In Modena, since 2011 an antenatal screening program targeted at women at risk by ethnicity has been in place. The pilot study showed the presence of hemoglobinopathy in 27% of the 330 women tested (70% coverage of the program). Subsequently, screening of the infants of carrier mothers, run on cord blood and analyzed by HPLC, identified 48 carriers and 9 with HbSS [53]. The universal antenatal screening program, extended to all pregnant women and including infants at risk because of maternal positivity, is currently ongoing and supported with funding from the Province of Ferrara.
Since 2010, a centralized program of targeted neonatal screening (at least one parent from outside the region) has been active in Friuli Venezia Giulia, financed by the region. The figures, as yet unpublished, report 6018 infants tested from 2010 to 2015, with a percentage of carriers between 1.74% and 4.7% depending on the province (F. Zanolli, personal communication).
A pilot program of universal newborn screening has been running since May 2, 2016, in Padua and is currently being activated in Monza.
In Africa, the first program started in Ghana in 1995 [22]; after 10 years, a total of 202,244 infants had been screened through public and private clinics in Kumasi, Tikrom and a nearby rural community. 3745 (1.9%) infants were identified by IEF as having possible SCD: 2047 (1.04%) SS and 1684 (0.83%) SC.
In Central Africa, between July 2004 and July 2006, 1825 newborn dried blood samples were collected onto filter papers in four maternity units in Burundi, Rwanda and the east of the Democratic Republic of Congo. The presence of hemoglobins C and S was tested in the eluted blood by an enzyme-linked immunosorbent assay (ELISA) using a monoclonal antibody, and all positive samples were confirmed by DNA analysis. Of the 1825 samples screened, 97 (5.32%) were positive. Of these, 60 (3.28%) samples were heterozygous for Hb S and four (0.22%) for Hb C; two (0.11%) newborns were Hb SS homozygotes.
Management of sickle cell disease in childhood
SCD is a chronic and complex multisystem disorder requiring comprehensive care that includes screening, prevention, health education, and management of acute and chronic complications [5,57]. Poor service organization and episodic health care cause higher rates of acute events and chronic complications, with a consequently increased burden on hospital structures and higher costs for health systems [58].
A neonatal screening program for SCD cannot be successful without a comprehensive care program at a specialized reference center for the treatment of the disease.
The organization of Comprehensive Sickle Cell Centers has proved crucial for the integration of screening programs, providing health education, preventive treatment (prophylaxis of infections, up-to-date vaccinations, stroke prevention), appropriate diagnostic-therapeutic pathways for the treatment of acute and chronic complications, planning of blood transfusion and administration of hydroxyurea (HU), and accompaniment of the transition to adult care for adolescents and young adults through structured transition programs. Care delivered by a specialized and multidisciplinary team at referral centers is effective in reducing mortality and improving quality of life [38,59]. Where these facilities were lacking, the effectiveness of neonatal screening programs was reduced [60].
A recommended examination schedule for the yearly follow-up of children with SCD is shown in Table 1 [61], while the services that a reference center should offer are displayed in Table 2 [5].

Table 2. Characteristics of a specialized reference center [5] and services that it should be able to offer directly or in agreement with nearby centers.
A comprehensive approach to the care of children with SCD should include the following goals: to improve quality of life by preventing and treating infections, providing adequate pain management and controlling anemia; to prevent organ damage, mainly stroke and renal and lung damage; and to prevent SCD-related mortality [61].
Management of sickle cell disease in childhood: open issues at a global level
Despite the strong evidence supporting newborn screening and comprehensive care, these services are far from optimally delivered to patients with SCD, not only in Europe and the USA, but especially in areas of Africa and India where the majority of children with SCD live. Many pilot programs were initiated in the last decade in countries of Latin America, the Middle East, Asia and Africa, some integrated with other screening programs. These data are encouraging, but such programs need to be further enhanced.
Increased North-South, South-South and East-West collaboration could be an important way to increase service delivery to all affected children.
Cerebrovascular complications of sickle cell disease: stroke and silent infarcts
In the most severe forms of SCD, homozygous SS and double heterozygous Sβ°, the brain is frequently affected (Figure 2). Overt ischemic stroke occurs in 11% of untreated children as a result of stenosis or occlusion of the large arteries of the Circle of Willis [62,63]. Cerebral silent infarcts (CSI), affecting 40% of children by the age of 14, are caused by small vessel disease [64,65], although recent evidence suggests that a combination of chronic hypoperfusion and hypoxic events, favored by an underlying arteriopathy of the large vessels, can also lead to CSI [66]. In the past 15 years, improvements have been made in the management of stroke and CSI [66,67]. In fact, algorithms for screening, prevention and management of stroke and CSI based on neuroimaging techniques such as transcranial Doppler (TCD) and magnetic resonance imaging/angiography (MRI/MRA) are routinely used in clinical practice [67][68][69][70].
TCD screening is recommended starting at age 2 years in children with HbSS and HbSβ°, and those identified as at risk of stroke are offered chronic transfusion for stroke prevention [67]. Stroke can be virtually eliminated or dramatically reduced if a proper TCD screening program, followed by chronic transfusion for at-risk patients, is established [59,68]. Recently, a randomized study demonstrated that after one year of chronic transfusion, hydroxycarbamide (HU) can be safely offered, under strict surveillance, to children with normal neuroimaging [71]. While TCD allows the identification of patients at risk of stroke and the initiation of appropriate treatment, it is not useful for screening for the other cerebrovascular complications of SCD such as CSI. Moreover, its usefulness in identifying stroke risk in other SCD genotypes such as HbSC and HbSβ+, in which stroke is less common, has yet to be evaluated.
Screening with MRI/MRA, although unable to identify children at risk of developing CSI, is strongly recommended in many centers starting at age 5 years, when sedation is no longer necessary [66,68,72], to ensure diagnosis at a young age and promptly start therapeutic or educational measures. In case of abnormal TCD, developmental delay, cognitive impairment or any other clinical indication, MRI is indicated even before 5 years of age. Both chronic transfusion and HU have been shown to stabilize CSI [66,67,73], but at present there is no general agreement on prevention strategies.
Stroke and silent infarcts: open issues at a global level
Despite extensive research performed in the United States and Europe on the management of stroke and CSI in children with SCD over the past decades, the delivery of routine TCD screening to children with SCD has been quite low. Primary stroke prevention through TCD is recommended in all national and international guidelines, but less than 50% of children in the USA [74] and the United Kingdom [75] benefit from this technique. Data regarding the coverage of TCD screening at a national level are not available for other countries of Europe, South America or the Middle East, only for single-center experiences [59,66,69,72,76], and this is a gap that should be filled.
TCD data are not yet available from many areas of the world such as India and Northern and Sub-Saharan Africa. Nevertheless, personnel training on the correct protocol of TCD screening for SCD has been performed in Africa, and promising pilot studies are being conducted in Nigeria [77][78][79]. These studies demonstrate the feasibility of primary and secondary prevention programs in low-resource settings with huge numbers of patients. They also allow us to explore the efficacy of alternative protocols compared to those in use in the USA and Europe and to demonstrate the benefit of HU in reducing TCD velocities [80].
A challenge that a global approach to SCD can address is the reported variability of stroke and cerebrovascular complications in populations of different ethnic backgrounds. Stroke and CSI seem to occur with different frequencies across populations, although data are still poor and warrant further investigation [81][82][83][84][85]. Moreover, biological factors such as G6PD deficiency and alpha thalassemia co-inheritance, as well as coagulation activation and single nucleotide polymorphisms (SNPs), do not seem to have the same role in the genesis of cerebrovascular complications in different populations [86][87][88][89][90].
In conclusion, more TCD and MRI/MRA data from SCD populations across the world could aid in designing wide population studies to explore genetic and biological modifying factors of cerebrovascular disease, as currently performed in other pathologic conditions [91]. Coordinating cerebrovascular studies across countries and continents can be challenging [79,[92][93][94][95] but is now warranted to improve patients' access to recommended screening tools and to better target treatment interventions according to biological disease-modifying factors, which may vary across ethnicities.
Future directions
The main objective would be to increase access to the milestones of care:
- Expand universal newborn screening to all countries with a sufficient prevalence of disease
- Expand access to vaccinations, antibiotic prophylaxis, and transcranial Doppler screening for stroke prevention, by increasing the number of skilled personnel and service availability
- Increase the use of disease-modifying treatments for the pediatric age group, such as hydroxyurea, in formulations that are suitable for children (low-dose tablets or syrups) in all countries

Strengthening collaboration at a global level and developing North-South, South-South, and East-West partnerships could aid in reaching the above-mentioned aims.
Adaptive and Innate Immunity in Psoriasis and Other Inflammatory Disorders
Over the past three decades, a considerable body of evidence has highlighted T cells as pivotal culprits in the pathogenesis of psoriasis. This includes the association of psoriasis with certain MHC (HLA) alleles, oligoclonal expansion of T cells in some cases, therapeutic response to T cell-directed immunomodulation, the onset of psoriasis following bone marrow transplantation, or induction of psoriasis-like inflammation by T cells in experimental animals. There is accumulating clinical and experimental evidence suggesting that both autoimmune and autoinflammatory mechanisms lie at the core of the disease. Indeed, some studies suggested antigenic functions of structural proteins, and complexes of self-DNA with cathelicidin (LL37) or melanocytic ADAMTSL5 have been proposed more recently as actual auto-antigens in some cases of psoriasis. These findings are accompanied by various immunoregulatory mechanisms, which we increasingly understand and which connect innate and adaptive immunity. Specific adaptive autoimmune responses, together with our current view of psoriasis as a systemic inflammatory disorder, raise the question of whether psoriasis may have connections to autoimmune or autoinflammatory disorders elsewhere in the body. While such associations have been suspected for many years, compelling mechanistic evidence in support of this notion is still scant. This review sets into context the current knowledge about innate and adaptive immunological processes in psoriasis and other autoimmune or autoinflammatory diseases.
SETTING THE STAGE: PSORIASIS AS AN IMMUNE-MEDIATED DISORDER
If I were to name diseases that in recent years have increased our understanding of both adaptive and innate immune mechanisms on the one hand and have contributed decisively to the development of modern biological therapies on the other, then psoriasis would certainly occupy one of the top ranks. Psoriasis is currently viewed as a systemic chronic inflammatory disease with an immunogenetic basis that can be triggered extrinsically or intrinsically (1,2). Research into its pathophysiology has led to impressive therapeutic improvements (3,4). The disease is based on close interactions between components of the adaptive and the innate branches of the immune system (3,(5)(6)(7)(8)(9) (Figure 1). Since it was shown in the late 1970s that psoriasis can be ameliorated by cyclosporin A (10), it can no longer be seriously denied that T lymphocytes play a central role in the pathogenesis of this disease. This view is substantiated by numerous subsequent observations over the past four decades: psoriasis can be precipitated by bone marrow transplantation (11) and, similar to other autoinflammatory diseases, the disease is frequently associated with certain HLA expression patterns (7,(12)(13)(14). Drugs that specifically inhibit the function of T lymphocytes (such as CD2 blockade in the early days of biologics) can improve psoriasis (15). A therapeutic effect can also be achieved by interleukin (IL)-4, which pushes the cytokine milieu toward a T helper (Th) 2-dominated immune response (16), probably through attenuation of Th17 function following diminished IL-23 production in antigen-presenting cells (17) and through induction of the transcription factor GATA3 (18,19). IL-10 can also ameliorate psoriatic symptoms by modulating T cell functions (20). In addition, psoriasis-like skin inflammation in animal models can be initiated by certain CD4+ T cells (21)(22)(23)(24), and T cells can induce psoriatic lesions in human skin xenografts (25,26). Finally, the more recent discoveries that complexes of the antimicrobial peptide LL37 (a 37-amino-acid C-terminal cleavage product of the antimicrobial peptide cathelicidin) with the body's own DNA or the melanocytic antigen ADAMTSL5 may function as autoantigens (27,28) support the central role of T cells in the pathogenesis of psoriasis (29,30).

FIGURE 1 | Complex fine-tuning of innate and adaptive immune mechanisms determines onset, course, and activity of psoriasis. As detailed in the text, intricate interactions of components of the innate immune system (exemplified here by dendritic cells and macrophages) with components of the adaptive immune system (exemplified here by T cells) lie at the core of the pathophysiology of psoriasis. Once established, the relative contribution and fine-tuning of various mediators of adaptive and innate immunity determine the clinical manifestation toward chronic stable vs. highly inflammatory and/or pustular psoriasis.
AUTOIMMUNE PROCESSES IN PSORIASIS
The Plot Thickens: Actual Auto-Antigens in Psoriasis

Pathogenic T cells in psoriatic skin lesions facilitate hyperproliferation of keratinocytes, influx of neutrophilic granulocytes, as well as production of other inflammatory cytokines, chemokines, and antimicrobial peptides. They feature a Th17 signature, i.e., they express IL-17A, IL-22, and IFN-γ (3, 31, 32) (Figure 2). Dendritic cells maintain activation and differentiation of lesional Th17 cells primarily through secretion of IL-23 [reviewed in (8)].
In general, both HLA restriction and peptide specificity of a given T cell are determined by its T cell receptor (TCR) repertoire (33). Activation and clonal expansion of T cells occur upon antigenic stimulation. In the absence of foreign antigens, clonal T cell expansion is highly suggestive of autoimmunity in inflammatory diseases (34). Indeed, oligoclonal T cell expansion has been identified in psoriatic lesions in early well-designed studies (35-40) as well as in more recent investigations (41). It has been interpreted as an indicator of antigen-specific immune responses.
In psoriasis, oligoclonality of cutaneous T-cell populations is usually confined to lesional skin. This suggests that psoriasis is driven by locally presented antigens (35,(42)(43)(44)(45)(46). Likewise, the clonal TCRs arguably mark T cells which mediate the disease process. Several landmark publications during the past years lent support to this notion through identification of putative autoantigens in psoriasis.
It was previously known that complexes of LL37 and self-DNA can activate dermal plasmacytoid dendritic cells (pDC) through toll-like receptor (TLR) signaling (52)(53)(54). These stimulated pDC then facilitate the psoriatic inflammatory cascade (52,53,55), a mechanism that is alluded to in more detail below. The activation via innate immune mechanisms was later extended by the finding that complexes of self-DNA and LL37 can also induce adaptive antigen-specific immune responses. Indeed, LL37 can trigger profound TCR- and MHC (HLA-C*06:02)-dependent T-cell responses (28). It remains to be confirmed, however, that the LL37-related candidate peptides can be derived from the parent protein by antigen processing within the antigen-presenting cell and then be presented by HLA class I molecules.

FIGURE 2 | Initiation of psoriasis by antigen-dependent and antigen-independent immune mechanisms. Complexes of self-DNA with fragments of the antimicrobial peptide cathelicidin can stimulate plasmacytoid dendritic cells through TLR9. They can also be presented by HLA-C*06:02 molecules and specifically activate T cells through their TCR. Likewise, the melanocyte-derived ADAMTSL5 can activate pathogenic CD8+ T cells after presentation by HLA-C*06:02.
A more recent strategy to identify potential targets of pathogenic T cells in psoriasis was based on the generation of T-cell hybridomas expressing the paired Vα3S1/Vβ13S1 TCR of clonal CD8+ psoriatic T cells of an HLA-C*06:02-expressing psoriasis patient (27). This elegant approach identified melanocytes as target cells of the psoriatic immune response (27). A peptide derived from ADAMTS-like protein 5 (ADAMTSL5) by proteasomal cleavage and post-cleavage trimming induced the specific immune response. The auto-antigenic function of melanocytic ADAMTSL5 was then confirmed by mutation and knock-down experiments. Moreover, peripheral lymphocytes of the majority of psoriasis patients, but not of individuals without psoriasis, responded to ADAMTSL5 with production of IL-17 or IFNγ (27) (Figure 2). In contrast to LL37, which has been shown to activate both CD8+ cytotoxic T cells and CD4+ T helper cells, ADAMTSL5 appears to activate preferentially CD8+ T cells. Of note, both antigens are recognized by T cells when presented by HLA-C*06:02, i.e., the most prominent psoriasis risk gene in the genome [located on PSORS1 (psoriasis susceptibility locus 1) on chromosome 6p21.3].
While the role of cellular adaptive immunity is becoming increasingly plausible, only recently autoantibodies, i.e., elements of humoral adaptive immunity, have been described in patients with psoriasis and psoriatic arthritis. Interestingly, these IgG are directed against (carbamylated/citrullinated) LL37 or ADAMTSL5 (56,57). Since the serum concentrations of these antibodies were associated with the severity of psoriasis and since patients with psoriatic arthritis had higher serum levels, it is conceivable that a causal pathogenetic relationship and a contribution to systemic inflammation exist (56). It is also possible that the respective autoantibodies exert protective functions through scavenging autoantigens. However, their roles need to be clarified in future studies.
The Other Side: Antigen Presentation by HLA Molecules in Psoriasis
While most, if not all, autoimmune diseases are linked with certain HLA alleles (58)(59)(60), HLA-C * 06:02 is the predominant psoriasis risk gene (61)(62)(63). HLA class I molecules present short peptide antigens (8-10 amino acids) to αβ TCRs of CD8+ T cells. Such antigenic peptides are usually derived within the antigen presenting cell from (intracellular) parent proteins by proteasomal cleavage and loaded onto HLA-class I molecules. The HLA/peptide complex is then transported to the cell membrane where it can be recognized by CD8+ T cells (64,65). Thus, HLA-class I-restricted immune responses are usually directed against target cells which produce the antigenic peptide.
HLA-C*06:02-presented nonapeptides (9 amino acids long) possess anchor amino acids at residues 2 (arginine) and 9 (leucine, valine, and less frequently methionine and isoleucine), along with a putative anchor at residue 7 (arginine). HLA-C*06:02 features very negatively charged pockets and thus binds distinct positively charged peptides. Given that between 1,000 and 3,000 different self-peptides have been detected on HLA-C*06:02 under experimental conditions, multiple cellular proteins should, in principle, be presented by this HLA molecule and be recognizable by CD8+ T cells (66,67).
HLA-C*06:02 and other psoriasis-related HLA types such as HLA-C*07:01, HLA-C*07:02, and HLA-B*27 utilize identical anchor residues and present partially overlapping peptide residues (66,67). Moreover, a negatively charged binding pocket is shared with another risk allele, HLA-C*12:03 (68,69), resulting in similar functional domains and peptide-binding characteristics (67,70). Thus, several HLA class I types implicated in psoriasis appear to share similar peptide-binding properties. It is, therefore, conceivable that they can substitute for each other in conferring psoriasis risk. However, HLA-C*06:02 is the prototype allele within this spectrum and is associated with the highest risk for psoriasis.
Supporting Acts: Indispensable Players in the Ensemble of Psoriasis Immunology
Autoantigen presentation alone does not suffice to induce the psoriatic cascade in genetically predisposed individuals. Rather, costimulatory effects of various gene products orchestrate the activation of the actual autoimmune response. Such risk gene variants modulate inflammatory signaling pathways (e.g., the IL-23 pathway), peptide epitope processing and/or Th/c17 differentiation (a selection of important factors is summarized in Table 1).
These genetic variations create costimulatory signals which modulate innate and adaptive immune mechanisms and shape the proinflammatory environment. In sum and in conjunction with the appropriate HLA molecules and autoantigens, they may eventually exceed the thresholds for activation and maintenance of pathogenic autoimmune and autoinflammatory responses in psoriasis (29, 71). Likewise, regulatory mechanisms involving programmed death (PD)-1 signals have emerged recently as modulators of chronic inflammation in psoriasis (72). However, the complex interactions of various players are by no means fully understood. Therefore, they are listed here only as a whole.
The autoantigens described so far cannot fully explain the genesis of psoriasis. To give just one example of the latter notion: Psoriatic lesions can also occur in vitiligo foci that do not contain melanocytes (73,74). Alterations of resident cell types such as vascular endothelial cells or the cutaneous nervous system are also involved in the disease process (75)(76)(77). Further research is certainly needed here.
SHADES OF GRAY: CROSSTALK BETWEEN ADAPTIVE AND INNATE IMMUNITY IN PSORIASIS
In addition to the antigen-specific facilitation of inflammation in psoriasis, there are several strong connections to components of the innate immune system. The crosstalk between the innate and adaptive branches of the immune system in psoriasis is complex and can only be highlighted by a few selected examples. Its fine-tuning arguably determines the actual clinical correlate within the spectrum of the disease. Indeed, there is accumulating circumstantial evidence that in patients with stable and mild disease, mechanisms of adaptive immunity are more likely to be in the foreground, while innate mechanisms seem to be more important in patients with active severe disease, systemic involvement and comorbid conditions (78) (Figure 1). The impact on systemic comorbid diseases has been interpreted, at least in part, as a systemic "spillover" of innate inflammatory processes in severe psoriasis (78). Of course, such factors are not specific for psoriasis, but appear to account for a general inflammatory state in patients with severe psoriasis.
The serum levels of inflammatory cytokines have been proposed as parameters for disease severity (85). Such general inflammatory markers are accompanied by increased numbers of Th1, Th17, and Th22 cells in patients with severe psoriasis (86,87), which provides a direct link with autoimmune (adaptive) processes. Moreover, there is an increasing number of modulating factors, such as autoimmune reactivity to ribonucleoprotein A1 (HNRNPA1) (88), which impact on the course and severity of psoriasis.
One of the perhaps most vivid recent examples of how individual mediators influence the spectrum of psoriasis by shifting innate or adaptive immune processes comes from research on the interplay between IL-17- and IL-36-driven inflammation (89). The three IL-36 isoforms (IL-36α, β, and γ) belong to the IL-1 family and are upregulated in psoriatic skin (90,91). They bind to the IL-36 receptor (IL-36R), thereby inducing transcription of several inflammatory mediators through NF-κB activation. IL-36Ra (IL-1F5), an anti-inflammatory natural IL-36R antagonist, is encoded by the IL36RN gene and is abundantly present in the skin of patients with psoriasis vulgaris, which may constitute part of the "checks and balances" that control the psoriatic inflammation (90,92). Function-abrogating mutations in the IL36RN gene may result in unrestrained inflammatory effects of IL-36. Absence of IL-36Ra then leads to excessive neutrophil accumulation, as observed in some cases of familial generalized pustular psoriasis (92)(93)(94). Palmoplantar pustular psoriasis, however, seems to be related to CARD14 variants rather than IL36RN mutations (95,96).
While most cases of pustular psoriasis occur without such mutations (97), IL-36-related processes appear to contribute decisively to the actual clinical manifestation of specific psoriatic phenotypes. It has recently been shown elegantly that the skin of patients with psoriasis vulgaris differs significantly from that of patients with pustular psoriasis (in a sense, opposite ends of the spectrum of psoriasis): while in both forms numerous genes are expressed abnormally, these differentially expressed genes overlap only to a relatively small extent. In psoriasis vulgaris, genes involved in adaptive (T-cell-associated) immune processes predominate, whereas in pustular psoriasis processes of innate immunity (mainly neutrophil-associated) are dysregulated. Interestingly, IL-36 seems to play an important role in the accumulation of neutrophilic granulocytes and a pustular phenotype of psoriasis (89). The balance between IL-36 and IL-17 seems to contribute, at least partially, to the clinical picture of psoriasis vulgaris vs. psoriasis pustulosa. If this interpretation of the data is correct, then this would constitute a mechanism that regulates the fine-tuning between innate and adaptive immune processes.

TABLE 1 (excerpt) | IL23R: interleukin-23 receptor, a Janus kinase-2-associated type I cytokine receptor that activates STAT3 upon ligand binding. TYK2: tyrosine-protein kinase 2, a Janus kinase family member that facilitates type I and II cytokine receptor and type I and III interferon signaling pathways; involved in innate and adaptive immune processes.
THE IL-23/IL-17 PATHWAY CONNECTS INNATE AND ADAPTIVE IMMUNITY IN PSORIASIS
The notion of psoriasis featuring elements of both antigen-specific autoimmunity and non-specific autoinflammation needs to be considered a bit more closely, in particular downstream of innate and/or adaptive activation processes. As of today, therapies interfering with IL-17A or IL-23 are the most efficient treatment modalities against psoriasis (98). Indeed, the IL-23/IL-17 axis seems to be particularly well-suited to exemplify the intricate crosstalk between adaptive and innate immunity in psoriasis. Healthy human skin contains only a few IL-17-producing T cells (99), a population of CD4+ T cells distinct from the "classical" Th1 and Th2 cells, which were eponymously named for their production of IL-17 (100). In psoriasis (31, 101, 102), palmoplantar pustulosis (103), and other inflammatory disorders (104,105), Th17 lymphocytes are vastly expanded and are thought to contribute decisively to the pathogenesis of these conditions (Figure 3). The resulting imbalance between Th17 and regulatory T cells (Treg) favors inflammation (106). The IL-17 production of T lymphocytes is further stimulated by activated keratinocytes, thus creating a positive feedback loop (107). Th17 cells are controlled by regulatory T cells through IL-10 (108). In psoriatic skin, IL-17A is considered the most relevant of the six known isoforms (102). IL-17A is not only secreted by CD4+ Th17 cells, but also by CD8+ T cells (109) and certain cells of the innate immune system including neutrophilic granulocytes (110)(111)(112), further highlighting the tight connection of innate and adaptive immunity in psoriasis. The presentation of IL-17 by neutrophil extracellular traps (NETs), which are generated upon activation of neutrophils in a clearly defined manner (113) and are prominently present in both pustular and plaque-type psoriasis (114), may also play a role (115).
In this context it should be mentioned that so-called tissue-resident memory cells (Trm cells) in psoriatic skin persist in the long term even after resolution of the lesions and contribute as mediators of the local adaptive immune response to renewed exacerbations. Although the role of these cells is not yet fully understood, there is growing evidence of their pathogenic role in psoriasis and other chronic inflammatory diseases (116)(117)(118). Trm cells in psoriatic lesions are CD8+ but lack CD49a.

FIGURE 3 | Differentiation of pathogenic T cells in psoriasis is embedded in a complex regulatory network. Naïve T cells can differentiate into several directions; this is mainly determined by the cytokines and transcription factors depicted here. In addition, various regulatory feedback mechanisms exist, some of which are schematically highlighted here with particular reference to Th17 cell differentiation and function.
In addition to Th17 cells, T cells which produce both IL-17 and IFNγ (termed Th17/Th1 cells) and IL-22-producing T cells can also be detected in psoriatic skin (31). Naive T cells express several cytokine receptors including the IL-23 receptor. DC-derived TGFβ1, IL-1β, IL-6, and IL-23 facilitate priming and proliferation of Th17 cells (120)(121)(122), while IL-12 assumes these functions for Th1 cells, and IL-6 and TNFα contribute to the programming of Th22 cells.
The balance of Th17 cells and Th1 cells appears to be critical for the pathogenesis of psoriasis (123) and other related conditions (124). There are several exogenous factors such as ultraviolet light or vitamin D3 (125,126), or other cytokines like IL-9 (127), that can modulate Th17-dependent inflammation. Of note, IL-17A can also be produced independently of IL-23, e.g., by γδ T lymphocytes or invariant natural killer (iNKT) cells (128)(129)(130). However, it is not yet clear whether this alternative pathway affects the onset and course of inflammatory disorders, or mediates potential undesired effects of either IL-23 or IL-17 inhibition.
In any case, the IL-23/IL-17 axis in psoriasis clearly illuminates the close interaction of the innate immune system (represented by IL-23-producing myeloid cells) with cells of the adaptive immune system (in this case Th17 cells and IL-17-expressing CD8+ T cells). Psoriasis could again serve as a "model disease" to clarify such relationships.
CONTRIBUTION OF RESIDENT SKIN CELLS TO IMMUNOLOGICAL PROCESSES IN PSORIASIS
Multiple genetic and environmental factors influence the immunopathology of psoriasis (131). The mechanisms leading to the first occurrence of psoriasis in predisposed individuals are only partly known. Infections with streptococci, medications such as lithium, antimalarials, or β-blockers, or physical or chemical stress may trigger the disease. Minimal trauma can induce rapid immigration and activation of immune cells including T cells and neutrophils (132), the so-called Köbner phenomenon (133,134). Feedback loops between adaptive immune cells (T cells), innate immune cells (neutrophilic granulocytes, macrophages, dendritic cells), and resident skin cells (keratinocytes, endothelial cells) result in an amplification and chronification of the inflammatory response. Aspects of systemic inflammation in patients with severe psoriasis are thought to contribute to comorbid diseases (135).

FIGURE 4 | Paradoxical psoriasis triggered by TNF inhibitors in predisposed individuals. Several cytokines contribute to the pathogenesis of psoriatic skin lesions, with TNFα and IL-17A playing prominent roles. However, TNFα also exerts an inhibitory effect on plasmacytoid dendritic cells. Upon therapeutic inhibition of TNFα, this inhibitory effect is abrogated, and the resulting shift toward increased production of type I interferons fuels the secretion of IL-17. It is conceivable that additional mechanisms contribute to the shift of cytokines ultimately resulting in "paradoxical" psoriatic lesions.
Hyperproliferative keratinocytes in psoriatic plaques produce large amounts of antimicrobial peptides and proteins (AMP). These positively charged peptides, which have been termed alarmins, have strong proinflammatory properties. Most studies have addressed cathelicidin and its fragment, LL37, which is highly expressed in psoriatic skin (136,137). The positively charged LL37 can associate with negatively charged nucleic acids (DNA and RNA), thus forming immunostimulatory complexes. The free DNA required for such complexes probably comes from neutrophils (which form NETs) and damaged resident skin cells (e.g., traumatized keratinocytes). Plasmacytoid DC (pDC) and myeloid dendritic cells (DC) take up these complexes. Subsequently, RNA motifs stimulate toll-like receptors (TLR) 7 and 8, and DNA triggers TLR9 signaling (52,138). Cytokines such as TNF, IL-23, and IL-12 are produced by TLR7/8-stimulated myeloid DC, while pDC produce type I interferons (IFNα), all of which fuel the psoriatic inflammation (131). A prominent role in psoriasis and other autoimmune diseases has been attributed to the so-called 6-sulfo LacNAc (slan) DC (139).
Activated DC in turn can program the differentiation of naive T into pathogenic T cells [reviewed in (8)]. Neutrophilic granulocytes, too, release AMP, inflammatory cytokines, proteases, free oxygen radicals, and NETs, all of which have been implicated in the inflammatory cascade in psoriasis (8,114).
NOT ALONE: RELATIONS AND SIMILARITIES OF PSORIASIS WITH OTHER AUTOIMMUNE AND AUTOINFLAMMATORY DISORDERS
The highlights outlined so far show that both adaptive and innate immune processes contribute to psoriasis. Their balance and fine-tuning seem to determine the development of certain clinical forms of the disease, but also organ-specific manifestations. On the one hand, the long-term systemic inflammatory processes outlined above probably contribute to the pathogenesis of important metabolic, cardiovascular, and mental concomitant diseases. In these areas, the evidence of a causal relationship is becoming increasingly clear, as numerous publications attest. A more detailed overview can be found elsewhere in this thematic focus. On the other hand, the adaptive and innate immune mechanisms sketched here are not specific for psoriasis. Rather, many of them have been found, in varying degrees and weightings, in a whole range of other autoimmune and autoinflammatory diseases. In any case, although this interplay of different components of the immune system is certainly not yet fully understood, parallels with other chronic inflammatory and autoimmune diseases emerge that underpin our current view of psoriasis as a systemic disease.
Indeed, the prevalence of several autoimmune and/or autoinflammatory diseases including rheumatoid arthritis, celiac disease, Crohn's disease, multiple sclerosis, systemic lupus erythematosus, vitiligo, Sjögren's syndrome, alopecia areata, or autoimmune thyroiditis appears to be increased in patients with psoriasis compared to that in healthy controls (146,147). Several other and more uncommon associations have also been reported (148). Such associations have been attributed to certain genetic and immunological similarities and "overlaps" (146,149). Three such disease complexes associated with psoriasis, i.e., rheumatoid arthritis, Crohn's disease and systemic lupus erythematosus, will be briefly discussed as examples.
Psoriasis susceptibility 1 candidate gene 1 (PSORS1C1), a gene thought to be involved in IL-17 and IL-1β regulation, is increased in immune cells from patients with rheumatoid arthritis (150). Moreover, aberrant expression of runt-related transcription factor 1 (RUNX1) has been implicated in defective regulation of sodium-hydrogen antiporter 3 regulator 1 (SLC9A3R1) and N-acetyltransferase 9 (NAT9) in both psoriasis and rheumatoid arthritis (151)(152)(153). Polymorphisms of the IL-23R gene have also been implicated in both diseases, which further underscores the general relevance of the IL-23/IL-17 axis (154). TNFα-induced protein 3 (TNFAIP3), which negatively regulates NF-κB signaling, is another gene thought to be involved in rheumatoid arthritis and psoriasis alike, but also in Crohn's disease, celiac disease, and systemic lupus erythematosus (155,156).
Increased expression of IL-6 has been demonstrated in psoriatic plaques and inflamed intestinal mucosa alike (192,193). IL-6 signaling induces STAT3 phosphorylation, which leads to relative resistance of effector T cells toward Tregs (194,195).
The association of psoriasis and systemic lupus erythematosus is uncommon and controversially discussed (196,197). However, dysfunctional interaction of RUNX1 with its binding site due to nucleotide polymorphisms links psoriasis not only with rheumatoid arthritis but also with systemic lupus erythematosus (153,198,199). RUNX1 binding on chromosome 2 is defective in some patients with SLE, while RUNX1 binding on chromosome 17 seems to be altered in some psoriasis patients.
TNF receptor-associated factor 3 Interacting Protein 2 (TRAF3IP2) has been described as a genetic susceptibility locus for psoriasis and appears to facilitate IL-17 signaling in both psoriasis and systemic lupus erythematosus (200)(201)(202)(203)(204)(205). On the cellular level, psoriasis and systemic lupus erythematosus share impaired Treg functions (157,192,206,207), thus suggesting that similar genetic and immune alterations govern pathological immune reactions in both psoriasis and systemic lupus erythematosus.
In summary, psoriasis shows elements of both autoimmune and autoinflammatory mechanisms, whose fine-tuning determines the actual clinical symptoms within the broad spectrum of the disease. Given that psoriasis is a systemic disease that shares conspicuous genetic and immunological similarities with other autoimmune and autoinflammatory disorders, it may serve as a model disorder for research into general mechanisms of such complex immunological regulations.
AUTHOR CONTRIBUTIONS
The author confirms being the sole contributor of this work and has approved it for publication.
Convexity in real analysis
We treat the classical notion of convexity in the context of hard real analysis. Definitions of the concept are given in terms of defining functions and quadratic forms, and characterizations are provided of different concrete notions of convexity. This analytic notion of convexity is related to more classical geometric ideas. Applications are given both to analysis and geometry.
Introduction
Convexity is an old subject in mathematics. Archimedes used convexity in his studies of area and arc length. The concept appeared intermittently in the work of Fermat, Cauchy, Minkowski, and others. Even Johannes Kepler treated convexity. But it can be said that the subject was not really formalized until the seminal tract of Bonnesen and Fenchel [BOF]. See also [FEN] for the history. Modern treatments of convexity may be found in [LAY] and [VAL].
In what follows, we let the term "domain" denote a connected, open set. We usually denote a domain by Ω. If Ω is a domain and P, Q ∈ Ω then the closed segment determined by P and Q is the set P Q ≡ {(1 − t)P + tQ : 0 ≤ t ≤ 1} .
Most of the classical treatments of convexity rely on the following synthetic definition of the concept:

Definition 1 Let Ω ⊆ R^N be a domain. We say that Ω is convex if, whenever P, Q ∈ Ω, then the closed segment PQ from P to Q lies in Ω.
Works such as [LAY] and [VAL] treat theorems of Helly and Kirchberger about configurations of convex sets in the plane, and points in those convex sets. However, studies in analysis and differential geometry (as opposed to synthetic geometry) require results, and definitions, of a different type. We need hard analytic facts about the shape of the boundary, formulated in differential-geometric language. We need invariants that we can calculate and estimate. That is the point of view that we wish to explore in the present paper.
If Ω ⊆ R^N is a domain, then a C^k function ρ : R^N → R is called a defining function for Ω provided that Ω = {x ∈ R^N : ρ(x) < 0} and ∇ρ(x) ≠ 0 for every x ∈ ∂Ω. In case k ≥ 2 and ρ is C^k, we say that the domain Ω has C^k boundary.
This last point merits some discussion, for the notion of a domain having C^k boundary has many different formulations. One may say that Ω has C^k boundary if ∂Ω is a regularly imbedded C^k manifold in R^N, or if ∂Ω is locally the graph of a C^k function. In the very classical setting of R², it is common to say that the boundary of a domain or region (which of course is simply a curve γ : S¹ → R²) is C^k if (a) γ is a C^k function and (b) γ′ ≠ 0.
We shall not take the time here to prove the equivalence of all the different formulations of C k boundary for a domain (but see the rather thorough discussion in Appendix I of [KRA1]). But we do discuss the equivalence of the "local graph" definition with the defining function definition.
First suppose that Ω is a domain with C^k defining function ρ as specified above, and let P ∈ ∂Ω. Since ∇ρ(P) ≠ 0, the implicit function theorem (see [KRP2]) guarantees that there are a neighborhood V_P of P, a variable (which we may take to be x_N), and a C^k function φ_P defined on a small open set U_P ⊆ R^{N−1} so that

∂Ω ∩ V_P = {(x_1, x_2, ..., x_N) : x_N = φ_P(x_1, ..., x_{N−1}), (x_1, ..., x_{N−1}) ∈ U_P}.
Thus ∂Ω is locally the graph of the function ϕ P near P .
We may suppose that the positive x_N-axis points out of the domain, and set ρ_P(x) = x_N − φ_P(x_1, ..., x_{N−1}). Thus, on a small neighborhood of P, ρ_P behaves like a defining function: it is equal to 0 on the boundary, certainly has non-vanishing gradient, and is C^k. Now ∂Ω is compact, so we may cover ∂Ω with finitely many such neighborhoods V_{P_1}, ..., V_{P_k}. Let {ψ_j} be a partition of unity subordinate to this finite cover, and set

ρ(x) = Σ_j ψ_j(x) ρ_{P_j}(x).

Then, in a neighborhood of ∂Ω, ρ is a defining function. We may extend ρ to all of space as follows. Let V be a neighborhood of ∂Ω on which ρ is defined. Let V′ be an open, relatively compact subset of Ω, and V″ an open subset of the complement of the closure of Ω, so that V, V′, V″ cover R^N. Let η, η′, η″ be a partition of unity subordinate to the cover V, V′, V″. Now set

ρ̃(x) = η(x)ρ(x) − C η′(x) + C η″(x).

Here C is a large positive constant that exceeds the diameter of Ω. Then ρ̃ is a globally defined, C^k function that is a defining function for Ω.
Definition 2 Let Ω ⊆ R^N have C¹ boundary and let ρ be a C¹ defining function. Let P ∈ ∂Ω. An N-tuple w = (w_1, ..., w_N) of real numbers is called a tangent vector to ∂Ω at P if

Σ_{j=1}^N (∂ρ/∂x_j)(P) · w_j = 0.

We write w ∈ T_P(∂Ω).
For Ω with C¹ boundary, we think of

ν_P = ν = (∂ρ/∂x_1(P), ..., ∂ρ/∂x_N(P))

as the outward-pointing normal vector to ∂Ω at P. Of course the union of all the tangent vectors to ∂Ω at a point P ∈ ∂Ω is the tangent plane or tangent hyperplane. The tangent hyperplane is defined by the condition

Σ_{j=1}^N (∂ρ/∂x_j)(P) (x_j − P_j) = 0, that is, ν_P · (x − P) = 0.

This definition makes sense when ν_P is well defined, in particular when ∂Ω is C¹.
If Ω is convex and ∂Ω is not smooth (say that it is Lipschitz), then any point P ∈ ∂Ω will still have one (or many) hyperplanes H passing through P such that the closure of Ω lies entirely in one of the closed half-spaces determined by H. We call such a hyperplane a support hyperplane for ∂Ω at P. As noted, such a support hyperplane need not be unique. For example, if Ω = {(x_1, x_2) : |x_1| < 1, |x_2| < 1}, then the points of the form (±1, ±1) in the boundary do not have well-defined tangent planes, but they do have (uncountably) many support hyperplanes.
Of course the definition of the normal ν P makes sense only if it is independent of the choice of ρ. We shall address that issue in a moment. It should be observed that the condition defining tangent vectors simply mandates that w ⊥ ν P at P. And, after all, we know from calculus that ∇ρ is the normal ν P and that the normal is uniquely determined and independent of the choice of ρ. In principle, this settles the well-definedness issue.
However this point is so important, and the point of view that we are considering so pervasive, that further discussion is warranted. The issue is this: if ρ̃ is another defining function for Ω, then it should give the same tangent vectors as ρ at any point P ∈ ∂Ω. The key to seeing that this is so is to write

ρ̃(x) = h(x) · ρ(x),   (1.1)

for h a function that is non-vanishing near ∂Ω. Then, for P ∈ ∂Ω,

(∂ρ̃/∂x_j)(P) = h(P) (∂ρ/∂x_j)(P) + (∂h/∂x_j)(P) ρ(P) = h(P) (∂ρ/∂x_j)(P),

because ρ(P) = 0. Thus w is a tangent vector at P vis à vis ρ̃ if and only if w is a tangent vector vis à vis ρ. But why does h exist? After a change of coordinates, it is enough to assume that we are dealing with a piece of ∂Ω that is a piece of flat, (N − 1)-dimensional real hypersurface (just use the implicit function theorem). Thus we may take ρ(x) = x_N and P = 0. Then any other defining function ρ̃ for ∂Ω near P must have the Taylor expansion

ρ̃(x) = c · x_N + R(x),

with R a remainder term vanishing to higher order. There is no loss of generality to take c = 1, and we do so in what follows. Thus we wish to define

h(x) = ρ̃(x)/x_N = 1 + S(x).

Here S(x) ≡ R(x)/x_N, and S(x) = o(1) as x_N → 0. Since this remainder term involves a derivative of ρ̃, it is plain that h is not even differentiable. (An explicit counterexample is given by ρ̃(x) = x_N · (1 + |x_N|).) Thus the program that we attempted in equation (1.1) above is apparently flawed. However an inspection of the explicit form of the remainder term R reveals that, because ρ̃ is constant on ∂Ω, h as defined above is continuously differentiable in tangential directions. That is, for tangent vectors w (vectors that are orthogonal to ν_P), the derivative Σ_j (∂h/∂x_j)(P) w_j is defined. Thus it does indeed turn out that our definition of tangent vector is well-posed when it is applied to vectors that are already known to be tangent vectors by the geometric definition w · ν_P = 0. For vectors that are not geometric tangent vectors, an even simpler argument shows that the gradients of the two defining functions are positive multiples of one another at P, so that such vectors are non-tangent for both. Thus Definition 2 is well-posed. Questions similar to the one just discussed will come up below when we define convexity using C² defining functions. They are resolved in just the same way, and we shall leave the details to the reader. The reader should check that the discussion above proves the following: if ρ, ρ̃ are C^k defining functions for a domain Ω, with k ≥ 2, then there is a C^{k−1}, non-vanishing function h defined near ∂Ω such that ρ̃ = h · ρ.
The Analytic Definition of Convexity
For convenience, we restrict attention for this section to bounded domains. Many of our definitions would need to be modified, and extra arguments given in proofs, were we to consider unbounded domains as well.
Definition 3 Let Ω ⊂⊂ R^N be a domain with C² boundary and ρ a defining function for Ω. Fix a point P ∈ ∂Ω. We say that ∂Ω is analytically (weakly) convex at P if

Σ_{j,k=1}^N (∂²ρ/∂x_j∂x_k)(P) w_j w_k ≥ 0 for all w ∈ T_P(∂Ω).

We say that ∂Ω is analytically strongly (strictly) convex at P if the inequality is strict whenever w ≠ 0. If ∂Ω is convex (resp. strongly convex) at each boundary point, then we say that Ω is convex (resp. strongly convex).
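To make the definition concrete, here is a minimal computational sketch (not part of the original argument; the domain, the boundary point, and the tangent vector are chosen purely for illustration) that evaluates the Hessian quadratic form of a defining function on a tangent vector using sympy:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
rho = x1**2 + x2**4 - 1          # defining function for E = {x1^2 + x2^4 < 1}

grad = sp.Matrix([rho.diff(x1), rho.diff(x2)])
hess = sp.hessian(rho, (x1, x2))

# A boundary point and a tangent vector there: w is tangent iff grad(P) . w = 0.
P = {x1: 1, x2: 0}               # the "flat" boundary point (1, 0)
w = sp.Matrix([0, 1])            # orthogonal to grad(P) = (2, 0)
assert (grad.subs(P).T * w)[0] == 0

# The Hessian form on w: nonnegative means (weakly) convex at P; strictly
# positive for every nonzero tangent w would mean strong convexity.
form = (w.T * hess.subs(P) * w)[0]
print(form)   # prints 0: E is weakly, but not strongly, convex at (1, 0)
```

At a point such as (0, 1), the same computation gives the strictly positive value 12, matching the discussion of domains of this type in the section on convexity of finite order below.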
One interesting and useful feature of this new definition of convexity is that it treats the concept point-by-point. The classical, synthetic definition specifies convexity for the whole domain at once.
It is natural to ask whether the new definition of convexity is independent of the choice of defining function. We have the following result:

Proposition 3 Let Ω ⊆ R^N be a domain with C² boundary. Let ρ and ρ′ be C² defining functions for Ω, and assume that, at points x near ∂Ω,

ρ(x) = h(x) · ρ′(x)

for some non-vanishing, C² function h. Let P ∈ ∂Ω. Then Ω is convex at P when measured with the defining function ρ if and only if Ω is convex at P when measured with the defining function ρ′.
Proof: We calculate that

(∂²ρ/∂x_j∂x_k)(P) = h(P)(∂²ρ′/∂x_j∂x_k)(P) + (∂h/∂x_j)(P)(∂ρ′/∂x_k)(P) + (∂h/∂x_k)(P)(∂ρ′/∂x_j)(P),

the remaining term dropping out because ρ′(P) = 0. But then, if w is a tangent vector to ∂Ω at P, we see that

Σ_{j,k} (∂²ρ/∂x_j∂x_k)(P) w_j w_k = h(P) Σ_{j,k} (∂²ρ′/∂x_j∂x_k)(P) w_j w_k + [Σ_j (∂ρ′/∂x_j)(P) w_j][Σ_k (∂h/∂x_k)(P) w_k] + [Σ_j (∂h/∂x_j)(P) w_j][Σ_k (∂ρ′/∂x_k)(P) w_k].

If we suppose that P is a point of convexity relative to the defining function ρ′, then the first sum is nonnegative. Of course h is positive, so the first expression is then ≥ 0. Since w is a tangent vector, the sum in j in the second expression vanishes. Likewise the sum in k in the third expression vanishes.
In the end, we see that the Hessian of ρ is positive semi-definite on the tangent space if the Hessian of ρ′ is. The reasoning also works if the roles of ρ and ρ′ are reversed. The result is thus proved.
The quadratic form

Σ_{j,k=1}^N (∂²ρ/∂x_j∂x_k)(P) w_j w_k

is frequently called the "real Hessian" of the function ρ. This form carries considerable geometric information about the boundary of Ω. It is of course closely related to the second fundamental form of Riemannian geometry (see B. O'Neill [ONE]).
There is a technical difference between "strong" and "strict" convexity that we shall not discuss here (see L. Lempert [LEM] for details). It is common to use either of the words "strong" or "strict" to mean that the inequality in the last definition is strict when w = 0. The reader may wish to verify for himself that, at a strongly convex boundary point, all curvatures are positive (in fact one may, by the positive definiteness of the matrix ∂ 2 ρ/∂x j ∂x k , impose a change of coordinates at P so that the boundary of Ω agrees with a ball up to second order at P ). Now we explore our analytic notions of convexity. The first lemma is a technical one: Lemma 4 Let ΩßR N be strongly convex. Then there is a constant C > 0 and a defining function ρ for Ω such that N j,k=1 We shall select λ large in a moment. Let P ∈ ∂Ω and set Then no element of X could be a tangent vector at P, hence Xß{w : |w| = 1 and j ∂ρ/∂x j (P )w j = 0}. Since X is defined by a non-strict inequality, it is closed; it is of course also bounded. Hence X is compact and If w ∈ X then this expression is positive by definition. If w ∈ X then the expression is positive by the choice of λ. Since {w ∈ R N : |w| = 1} is compact, there is thus a C > 0 such that This establishes our inequality (2.1) for P ∈ ∂Ω fixed and w in the unit sphere of R N . For arbitrary w, we set w = |w| w, with w in the unit sphere. Then (2.1) holds for w. Multiplying both sides of the inequality for w by |w| 2 and performing some algebraic manipulations gives the result for fixed P and all w ∈ R N .
Finally, notice that our estimates, in particular the existence of C, hold uniformly over points of ∂Ω near P. Since ∂Ω is compact, we see that the constant C may be chosen uniformly over all boundary points of Ω.
Notice that the statement of the lemma has two important features: (i) that the constant C may be selected uniformly over the boundary and (ii) that the inequality (2.1) holds for all w ∈ R N (not just tangent vectors). In fact it is impossible to arrange for anything like (2.1) to be true at a weakly convex point.
Our proof shows in fact that (2.1) is true not just for P ∈ ∂Ω but for P in a neighborhood of ∂Ω. It is this sort of stability of the notion of strong convexity that makes it a more useful device than ordinary (weak) convexity.
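The exponential trick in the proof of Lemma 4 can be checked numerically. The following sketch (illustrative only; the "inconvenient" defining function and the value λ = 20 are ad hoc choices) shows, for the unit disc, that the Hessian of ρ̂ = (e^{λρ} − 1)/λ becomes positive definite at every boundary point even though the Hessian of ρ itself is not:

```python
import numpy as np

def rho(x):
    # A legitimate but awkward defining function for the unit disc: the
    # factor (2 + x1) is positive near the closed disc, yet it makes the
    # full Hessian indefinite at some boundary points (e.g., at (-1, 0)).
    return (x[0]**2 + x[1]**2 - 1.0) * (2.0 + x[0])

def hessian(f, x, h=1e-4):
    # Second-order central differences.
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(x+ei+ej) - f(x+ei-ej)
                       - f(x-ei+ej) + f(x-ei-ej)) / (4*h*h)
    return H

def min_eig_on_boundary(f, n_pts=360):
    ts = np.linspace(0, 2*np.pi, n_pts, endpoint=False)
    return min(np.linalg.eigvalsh(
        hessian(f, np.array([np.cos(t), np.sin(t)]))).min() for t in ts)

lam = 20.0
rho_hat = lambda x: (np.exp(lam * rho(x)) - 1.0) / lam

print(min_eig_on_boundary(rho))      # negative: (2.1) fails for rho
print(min_eig_on_boundary(rho_hat))  # positive: (2.1) holds for rho_hat
```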
Proposition 5 If Ω is strongly convex, then Ω is geometrically convex.
Proof: We use a connectedness argument. Let

S = {(P_1, P_2) ∈ Ω × Ω : (1 − t)P_1 + tP_2 ∈ Ω for all 0 ≤ t ≤ 1}.

Then S is plainly open and non-empty. To see that S is closed, fix a defining function ρ̂ for Ω as in the Lemma. If S is not closed in Ω × Ω, then there exist P_1, P_2 ∈ Ω such that the function

t ↦ ρ̂((1 − t)P_1 + tP_2)

assumes an interior maximum value of 0 on [0, 1]. But the positive definiteness of the real Hessian of ρ̂ contradicts that assertion. The proof is complete.
We gave a special proof that strong convexity implies geometric convexity simply to illustrate the utility of the strong convexity concept. It is possible to prove that an arbitrary (weakly) convex domain is geometrically convex by showing that such a domain can be written as the increasing union of strongly convex domains. However the proof is difficult and technical. We thus give another proof of this fact:
Proposition 6 If Ω ⊂⊂ R^N has smooth boundary and is (weakly) convex, then Ω is geometrically convex.

Proof: To simplify the proof we shall assume that Ω has at least C³ boundary. Assume without loss of generality that N ≥ 2 and 0 ∈ Ω. For ε > 0, let

Ω_ε = {x ∈ Ω : ρ(x) + ε|x|^{2M} < 0}.

If M ∈ N is large and ε is small, then Ω_ε is strongly convex. By Proposition 5, each Ω_ε is geometrically convex, and since the Ω_ε increase to Ω as ε → 0⁺, Ω is convex.
We mention in passing that a nice treatment of convexity, from roughly the point of view presented here, appears in V. Vladimirov [VLA].
Proposition 7 Let Ω ⊂⊂ R^N have C² boundary and be geometrically convex. Then ∂Ω is (weakly) analytically convex at every boundary point.
Remark: The reader can already see in the proof of the proposition how useful the quantitative version of convexity can be.
The assumption that ∂Ω be C 2 is not very restrictive, for convex functions of one variable are twice differentiable almost everywhere (see A. Zygmund [ZYG]). On the other hand, C 2 smoothness of the boundary is essential for our approach to the subject.
Exercises for the Reader: If Ω ⊆ R^N is a domain, then the closed convex hull of Ω is defined to be the closure of the set

{ Σ_{j=1}^m λ_j s_j : s_j ∈ Ω, m ∈ N, λ_j ≥ 0, Σ_j λ_j = 1 }.

Equivalently, the closed convex hull of Ω is the intersection of all closed, convex sets that contain Ω.
Assume in the following problems that Ω ⊆ R^N is closed, bounded, and convex, and that Ω has C² boundary.
(a) We shall say more about extreme points in the penultimate section. For now, a point P ∈ ∂Ω is extreme (for Ω convex) if, whenever P = (1 − λ)x + λy with 0 < λ < 1 and x, y ∈ Ω, then x = y = P. Prove that Ω is the closed convex hull of its extreme points (this result is usually referred to as the Krein-Milman theorem and is true in much greater generality).
(b) Let P ∈ ∂Ω be extreme. Let p = P + T_P(∂Ω) be the geometric tangent affine hyperplane to the boundary of Ω that passes through P. Show by an example that it is not necessarily the case that p ∩ Ω = {P}.
(c) Prove that if Ω_0 is any bounded domain with C² boundary, then there is a relatively open subset U of ∂Ω_0 such that U is strongly convex. (Hint: Fix x_0 ∈ Ω_0 and choose P ∈ ∂Ω_0 that is as far as possible from x_0.)
(d) If Ω is a convex domain, then the Minkowski functional of Ω, less 1, gives a convex defining function for Ω. (For a convex set K containing 0, the Minkowski functional is p(x) = inf{r > 0 : x ∈ rK}; see [LAY].)
Convex Functions and Exhaustion Functions
Let F : R^N → R be a function. We say that F is convex if, for any P, Q ∈ R^N and any 0 ≤ t ≤ 1, it holds that

F((1 − t)P + tQ) ≤ (1 − t)F(P) + tF(Q).

In the case that F is C², we may restrict F to the line passing through P and Q and differentiate twice; convexity then says that

(d²/dt²) F(P + t(Q − P)) ≥ 0.

If we set α = Q − P = (α_1, α_2, ..., α_N), then this last result may be written as

Σ_{j,k=1}^N (∂²F/∂x_j∂x_k)(P + tα) α_j α_k ≥ 0.

In other words, the Hessian of F is positive semi-definite.
In the case that a C² function F has positive definite Hessian at each point, we say that F is strictly convex or strongly convex.
The reasoning in the penultimate paragraph can easily be reversed to see that the following is true:

Proposition 8 A C² function on R^N is convex if and only if it has positive semi-definite Hessian at each point of its domain.
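A hedged numerical companion to Proposition 8 (the particular function below is an arbitrary choice, convex because it is a sum of convex pieces): sampling the Hessian and inspecting its eigenvalues gives a quick plausibility check of convexity.

```python
import numpy as np

def hessian_F(x, y):
    # Hessian of F(x, y) = exp(x) + (x - y)**2 + y**4, computed by hand.
    return np.array([[np.exp(x) + 2.0, -2.0],
                     [-2.0,            2.0 + 12.0 * y**2]])

rng = np.random.default_rng(0)
pts = rng.uniform(-3, 3, size=(1000, 2))
min_eig = min(np.linalg.eigvalsh(hessian_F(x, y)).min() for x, y in pts)
print(min_eig > 0)   # True: the Hessian is positive definite on the sample
```

Of course, positivity on a finite sample proves nothing by itself; Proposition 8 requires the Hessian condition at every point of the domain.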
Of course it is also useful to consider convex functions on a domain. Certainly we may say that a C² function F : Ω → R is convex if

Σ_{j,k=1}^N (∂²F/∂x_j∂x_k)(x) α_j α_k ≥ 0

for all (α_1, ..., α_N) and all points x ∈ Ω. Equivalently, F is convex on a convex domain Ω if, whenever P, Q ∈ Ω and 0 ≤ λ ≤ 1, we have

F((1 − λ)P + λQ) ≤ (1 − λ)F(P) + λF(Q).

It is straightforward to prove that any convex function is continuous; see [ZYG] or [VLA, p. 85]. Other properties of convex functions are worth noting. For example, if f : R^n → R is convex and φ : R → R is convex and increasing, then φ ∘ f is convex. Certainly the sum of any two convex functions is convex. If {f_α}_{α∈A} is any family of convex functions, then

f(x) ≡ sup_{α∈A} f_α(x)

is convex. The proof of this latter assertion is straightforward: if P, Q lie in the common domain of the f_α, 0 ≤ λ ≤ 1, and α ∈ A, then

f_α((1 − λ)P + λQ) ≤ (1 − λ)f_α(P) + λf_α(Q) ≤ (1 − λ)f(P) + λf(Q).

Now take the supremum over α on the left-hand side to obtain the result.
It is always useful to be able to characterize geometric properties of domains in terms of functions, for functions are more flexible objects than domains: one can do more with functions. With this thought in mind we make the following definition:

Definition 4 Let Ω ⊆ R^N be a bounded domain. We call a function λ : Ω → R an exhaustion function for Ω if, for each c ∈ R, the set

{x ∈ Ω : λ(x) ≤ c}

is a compact subset of Ω. The key idea here is that the function λ is real-valued and blows up at ∂Ω.
Theorem 9 A domain Ω ⊆ R^N is convex if and only if it has a continuous, convex exhaustion function.
Proof:
If Ω possesses such an exhaustion function λ, then the domains Ω_k ≡ {x ∈ Ω : λ(x) < k} are convex (being sublevel sets of a convex function). And Ω itself is the increasing union of the Ω_k. It follows immediately, from the synthetic definition of convexity, that Ω is convex.
For the converse, observe that if Ω is convex and P ∈ ∂Ω, then a supporting (tangent) hyperplane at P has the form a · (x − P) = 0, where a is a Euclidean unit vector pointing into Ω. It then follows that the quantity a · (x − P) is the distance from x ∈ Ω to this hyperplane. Now the function

μ_{a,P}(x) ≡ − ln(a · (x − P))

is convex, since one may calculate the Hessian H directly: its value at a point x equals

H_{j,k}(x) = a_j a_k / (a · (x − P))²,

which is plainly positive semi-definite. Since the distance δ_Ω(x) of a point x ∈ Ω to ∂Ω is the infimum of its distances to the supporting hyperplanes,

− log δ_Ω(x) = sup_{a,P} μ_{a,P}(x)

is a convex function (a supremum of convex functions) that blows up at ∂Ω. Now set

λ(x) = − log δ_Ω(x).

This is a continuous, convex function that blows up at the boundary. So it is the convex exhaustion function that we seek.
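As a small sanity check of this construction (purely illustrative; the domain and the sampling are arbitrary choices), one can verify midpoint convexity of −log δ_Ω numerically on the unit disc, where δ_Ω(x) = 1 − |x|:

```python
import numpy as np

lam = lambda x: -np.log(1.0 - np.linalg.norm(x))   # -log(dist to boundary)

rng = np.random.default_rng(1)
ok = True
for _ in range(10000):
    p = rng.uniform(-0.7, 0.7, 2)      # sample points well inside the disc
    q = rng.uniform(-0.7, 0.7, 2)
    m = 0.5 * (p + q)                  # midpoint of the segment pq
    ok &= bool(lam(m) <= 0.5 * (lam(p) + lam(q)) + 1e-12)
print(ok)   # True: midpoint convexity (with continuity, this gives convexity)
```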
Lemma 10 Let F be a convex function on R^N. Then there is a sequence f_1 ≥ f_2 ≥ · · · of C^∞, strongly convex functions such that f_j → F pointwise.
Proof: Let φ be a C_c^∞ function which is nonnegative and has integral 1. We may also take φ to be supported in the unit ball, and to be radial. For ε > 0 we set φ_ε(x) = ε^{−N} φ(x/ε).
We define

F_ε(x) = F ∗ φ_ε(x) = ∫ F(x − t) φ_ε(t) dt.

We assert that each F_ε is convex. For let P, Q ∈ R^N and 0 ≤ λ ≤ 1. Then

F_ε((1 − λ)P + λQ) = ∫ F((1 − λ)(P − t) + λ(Q − t)) φ_ε(t) dt ≤ ∫ [(1 − λ)F(P − t) + λF(Q − t)] φ_ε(t) dt = (1 − λ)F_ε(P) + λF_ε(Q).

Now set f_j(x) = F_{ε_j}(x) + δ_j|x|². Certainly f_j is strongly convex, because F_ε is convex and |x|² is strongly convex. If ε_j > 0, δ_j > 0 are chosen appropriately, then we will have f_1 ≥ f_2 ≥ ... and f_j → F pointwise. That is the desired conclusion.
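The mollification step can be visualized in one dimension; the sketch below (an illustration only, using a discrete convolution in place of the integral) smooths the convex but non-differentiable function F(x) = |x| and confirms that convexity survives:

```python
import numpy as np

x = np.linspace(-2, 2, 2001)
dx = x[1] - x[0]
F = np.abs(x)                                    # convex, not smooth at 0

eps = 0.1
t = np.arange(-eps, eps + dx, dx)
phi = np.where(np.abs(t) < eps,
               np.exp(-1.0 / np.maximum(1 - (t/eps)**2, 1e-12)), 0.0)
phi /= phi.sum() * dx                            # normalize: integral 1

F_eps = np.convolve(F, phi, mode='same') * dx    # discrete F * phi_eps

# Midpoint-convexity check on the interior grid (avoiding the edge
# artifacts introduced by the truncated convolution).
i = np.arange(300, 1701)
print(bool(np.all(F_eps[i] <= 0.5*(F_eps[i-200] + F_eps[i+200]) + 1e-9)))
```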
Proposition 11 Let F : R^N → R be a continuous function. Then F is convex if and only if, for any φ ∈ C_c^∞(R^N) with φ ≥ 0, ∫φ dx = 1, and any w = (w_1, w_2, ..., w_N) ∈ R^N, it holds that

∫ F(x) Σ_{j,k=1}^N w_j w_k (∂²φ/∂x_j∂x_k)(x) dx ≥ 0.

Proof: Assume that F is convex. In the special case that F ∈ C^∞, we certainly know that

Σ_{j,k} (∂²F/∂x_j∂x_k)(x) w_j w_k ≥ 0

at every point x. Hence it follows that

∫ Σ_{j,k} (∂²F/∂x_j∂x_k)(x) w_j w_k φ(x) dx ≥ 0.

Now the result follows from integrating by parts twice (the boundary terms vanish since φ is compactly supported). The general case follows by approximating F as in the preceding lemma.

For the converse direction, we again first treat the case when F ∈ C^∞. Assume that

∫ F(x) Σ_{j,k} w_j w_k (∂²φ/∂x_j∂x_k)(x) dx ≥ 0

for all suitable φ. Then integration by parts twice gives us the inequality we want. For general F, let ψ be a nonnegative C_c^∞ function, supported in the unit ball, with integral 1. Set

F_ε(x) = F ∗ ψ_ε(x), where ψ_ε(t) = ε^{−N} ψ(t/ε).

We may integrate by parts twice in the hypothesis applied to F_ε to obtain

∫ Σ_{j,k} (∂²F_ε/∂x_j∂x_k)(x) w_j w_k φ(x) dx ≥ 0.

It follows that each F_ε is convex. Thus

F_ε((1 − λ)P + λQ) ≤ (1 − λ)F_ε(P) + λF_ε(Q)

for every P, Q, λ. Letting ε → 0⁺ yields that

F((1 − λ)P + λQ) ≤ (1 − λ)F(P) + λF(Q),

hence F is convex. That completes the proof.
For applications in the next theorem, it is useful to note the following: Proposition 12 Any convex function f is subharmonic.
Proof: To see this, let P and P ′ be distinct points in the domain of f and let X be their midpoint. Then certainly 2f (X) ≤ f (P ) + f (P ′ ) .
Let η be any special orthogonal rotation centered at X. We may write 2f (X) ≤ f (η(P )) + f (η(P ′ )) . Now integrate out over the special orthogonal group to derive the usual submean-value property for subharmonic functions.
The last topic is also treated quite elegantly in Chapter 3 of [HOR]. One may note that the condition that the Hessian be positive semi-definite is stronger than the condition that the Laplacian be nonnegative. That gives another proof of our result.
Theorem 13 A domain Ω ⊆ R^N is convex if and only if it has a C^∞, strictly convex exhaustion function.
Proof: Only the forward direction need be proved (as the converse direction is contained in the last theorem).
Corollary 14 Let Ω ⊆ R^N be any convex domain. Then we may write

Ω = ∪_{j=1}^∞ Ω_j,

where this is an increasing union and each Ω_j is strongly convex with C^∞ boundary.
Proof: Let λ be a smooth, strictly convex exhaustion function for Ω. By Sard's theorem (see [KRP1]), there is a strictly increasing sequence of values c_j → +∞ so that Ω_{c_j} = {x ∈ Ω : λ(x) < c_j} has smooth boundary. Then of course each Ω_{c_j} is strongly convex, and the Ω_{c_j} form an increasing sequence of domains whose union is Ω.
Other Characterizations of Convexity
Let Ω ⊆ R^N be a domain and let F be a family of real-valued functions on Ω (we do not assume in advance that F is closed under any algebraic operations, although often in practice it will be). Let K be a compact subset of Ω. Then the convex hull of K in Ω with respect to F is defined to be

K̂_F ≡ {x ∈ Ω : f(x) ≤ sup_{t∈K} f(t) for all f ∈ F}.

We sometimes denote this hull by K̂ when the family F is understood or when no confusion is possible. We say that Ω is convex with respect to F provided K̂_F is compact in Ω whenever K is. When the functions in F are complex-valued, then |f| replaces f in the definition of K̂_F.
Proposition 15 Let Ω ⊂⊂ R^N and let F be the family of real linear functions. Then Ω is convex with respect to F if and only if Ω is geometrically convex.
Proof: Exercise. Use the classical definition of convexity at the beginning of the paper.
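A hedged numerical illustration of Proposition 15 (the two-point set K and the random linear functionals below are demonstration choices only): the hull of K with respect to linear functions is, up to sampling, the closed segment joining the two points.

```python
import numpy as np

rng = np.random.default_rng(2)
K = np.array([[0.0, 0.0], [1.0, 1.0]])     # a compact set: two points

# Sample many linear functionals f(x) = a . x.  A point x belongs to the
# hull iff  a . x <= sup_K f  for every functional f in the family.
A = rng.normal(size=(2000, 2))
sup_on_K = (A @ K.T).max(axis=1)           # sup over K, one value per a

def in_hull(x):
    return bool(np.all(A @ x <= sup_on_K + 1e-9))

print(in_hull(np.array([0.5, 0.5])))   # True: the midpoint is in the hull
print(in_hull(np.array([0.5, 0.0])))   # False: points off the segment are not
```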
Proposition 16 Let Ω ⊂⊂ R^N be any domain, and let F be the family of continuous functions on Ω. Then Ω is convex with respect to F.
Proof: If K ⊂⊂ Ω and x ∉ K, then the function f(t) = 1/(1 + |x − t|) is continuous on Ω. Notice that f(x) = 1 and |f(k)| < 1 for all k ∈ K. Thus x ∉ K̂_F. Therefore K̂_F = K, and Ω is convex with respect to F.
We close this discussion of convexity with a geometric characterization of the property; we shall refer to this below as the "segment characterization." First, if Ω ⊆ R^N is a domain and I is a closed one-dimensional segment lying in Ω, then the boundary ∂I is the set consisting of the two endpoints of I. Now the domain Ω is convex if and only if, whenever {I_j}_{j=1}^∞ is a collection of closed segments in Ω and {∂I_j} is relatively compact in Ω, then so is {I_j}. This is little more than a restatement of the classical definition of geometric convexity. We invite the reader to supply the details.
In fact the formulation in the last paragraph admits of many variants. One of these is the following: Ω is convex if and only if, for every collection {I_j} of closed segments in Ω, the distances dist(∂I_j, ∂Ω) are bounded away from 0 exactly when the distances dist(I_j, ∂Ω) are. The following example puts these ideas in perspective. Let Ω be the annulus

Ω = {x ∈ R² : 1 < |x| < 4},

and let I_j be chords of Ω whose endpoints lie on the circle {|x| = 2} and which pass at distance 1/j from the inner boundary circle {|x| = 1}. Then it is clear that dist(∂I_j, ∂Ω) is bounded from 0, while dist(I_j, ∂Ω) is not. And of course Ω is not convex.
Convexity of Finite Order
There is a fundamental difference between the domains

B = {x = (x_1, x_2) ∈ R² : x_1² + x_2² < 1}

and

E = {x = (x_1, x_2) ∈ R² : x_1² + x_2⁴ < 1}.

Both of these domains are convex. The first of these is strongly convex and the second is not. More generally, each of the domains

E_m = {x = (x_1, x_2) ∈ R² : x_1² + x_2^{2m} < 1}

is, for m = 2, 3, ..., weakly (not strongly) convex. Somehow the intuition is that, as m increases, the domain E_m becomes more weakly convex. Put differently, the boundary points (±1, 0) are becoming flatter and flatter as m increases. We would like to have a way of quantifying, indeed of measuring, the indicated flatness. These considerations lead to a new definition. We first need a bit of terminology.
Let f be a function on an open set U ⊆ R^N and let P ∈ U. We say that f vanishes to order k at P if every derivative of f, up to and including order k, vanishes at P. Thus if f(P) = 0 but ∇f(P) ≠ 0 then we say that f vanishes to order 0. If f(P) = 0, ∇f(P) = 0, ∇²f(P) = 0, and ∇³f(P) ≠ 0, then we say that f vanishes to order 2.
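To fix ideas with a one-variable instance (an illustration we add here): for f(x) = x^3 on \mathbb{R} one has

f(0) = f'(0) = f''(0) = 0, \qquad f'''(0) = 6 \neq 0,

so f vanishes to order 2 at the origin, in agreement with the convention just stated.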
Let Ω be a domain and P ∈ ∂Ω. Suppose that ∂Ω is smooth near P. We say that the tangent plane T_P(∂Ω) has order of contact k with ∂Ω at P if the defining function ρ for Ω satisfies

|ρ(x)| ≤ C|x − P|^k for x ∈ T_P(∂Ω) near P,

and this same inequality does not hold with k replaced by k + 1.
Definition 6 Let Ω ⊆ R^N be a domain and P ∈ ∂Ω a point at which the boundary is at least C^k for k a positive integer. We say that P is convex of order k if • the point P is convex; • the tangent plane to ∂Ω at P has order of contact k with the boundary at P.
EXAMPLE 7
Notice that a point of strong convexity will be convex of order 2. The boundary point (1, 0) of the domain E_k = {x = (x_1, x_2) ∈ R² : x_1² + x_2^{2k} < 1} is convex of order 2k.
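The order-2k claim can be checked directly from the definition of order of contact; the short computation below is ours. At P = (1, 0) a defining function is \rho(x) = x_1^2 + x_2^{2k} - 1 and T_P(\partial E_k) = \{x_1 = 1\}, so on the tangent line

\rho(1, x_2) = x_2^{2k}.

Thus |\rho(x)| \le C|x - P|^{2k} on T_P(\partial E_k) near P, while no such estimate holds with exponent 2k + 1; hence the order of contact is exactly 2k.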
Proposition 17 Let Ω ⊆ R^N be a bounded domain, and let P ∈ ∂Ω be convex of finite order. Then that order is an even number.
Proof: Let m be the order of the point P. We may assume that P is the origin and that the outward normal direction at P is the x_1 direction. If ρ is a defining function for Ω near P then we may use the Taylor expansion about P to write ρ(x) = 2x_1 + φ(x), and φ will vanish to order m. If m is odd, then the domain will not lie on one side of the tangent hyperplane {x ∈ R^N : x_1 = 0}. So Ω cannot be convex.
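A two-dimensional illustration of this sign change (our example, not the source's): take \rho(x) = x_1 + x_2^3, so m = 3, and \Omega = \{\rho < 0\}. The tangent line at the origin is \{x_1 = 0\}, yet the point (\varepsilon, -1) lies in \Omega for every small \varepsilon > 0; so \Omega does not lie in the half-plane \{x_1 < 0\} and hence is not convex.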
A very important feature of convexity of finite order is its stability. We formulate that property as follows: Proposition 18 Let Ω ⊆ R^N be a smoothly bounded domain and let P ∈ ∂Ω be a point that is convex of finite order m. Then points in ∂Ω that are sufficiently near P are also convex of finite order at most m.
Proof: Let Ω = {x ∈ R N : ρ(x) < 0}, where ρ is a defining function for Ω. Then the "finite order" condition is given by the nonvanishing of a derivative of ρ at P . Of course that same derivative will be nonvanishing at nearby points, and that proves the result.
Proposition 19
Let Ω ⊆ R^N be a smoothly bounded domain. Then there will be a point P ∈ ∂Ω and a neighborhood U of P so that each point of U ∩ ∂Ω will be convex of order 2 (i.e., strongly convex).
Proof: Let D be the diameter of Ω. We may assume that Ω is distance at least 10D + 10 from the origin 0. Let P be the point of ∂Ω which is furthest (in the Euclidean metric) from 0. Then P is the point that we seek. Refer to Figure 1.
Let L be the distance of 0 to P . Then we see that the sphere with center 0 and radius L externally osculates ∂Ω at P . Of course the sphere is strongly convex at the point of contact. Hence so is ∂Ω. By the continuity of second derivatives of the defining function for Ω, the same property holds for nearby points in the boundary. That completes the proof.
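For completeness (a computation we add), the strong convexity of the osculating sphere can be verified directly: \sigma(x) = |x|^2 - L^2 is a defining function for the ball B(0, L), and \nabla^2 \sigma \equiv 2I is positive definite at every point; in particular it is positive definite on the tangent space at the point of contact P, which is the strong convexity used in the proof.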
EXAMPLE 8 Consider the domain Ω = {x = (x_1, x_2, x_3) ∈ R³ : x_1⁴ + x_2² + x_3² < 1}.
The boundary points of the form (0, a, b) are convex of order 4. All others are convex of order 2 (i.e., strongly convex).
It is straightforward to check that Euclidean isometries preserve convexity, preserve strong convexity, and preserve convexity of finite order. Diffeomorphisms do not. In fact we have: Proposition 20 Let Ω_1, Ω_2 be smoothly bounded domains in R^N, let P_1 ∈ ∂Ω_1 and P_2 ∈ ∂Ω_2. Let Φ be a diffeomorphism from Ω_1 to Ω_2 and assume that Φ(P_1) = P_2. Further suppose that the Jacobian matrix of Φ at P_1 is an orthogonal linear mapping. Then we have: • If P_1 is a convex boundary point then P_2 is a convex boundary point; • If P_1 is a strongly convex boundary point then P_2 is a strongly convex boundary point; • If P_1 is a boundary point that is convex of order 2k then P_2 is a boundary point that is convex of order 2k.
Proof: We consider the first assertion. Let ρ be a defining function for Ω_1. Then ρ ∘ Φ^{−1} will be a defining function for Ω_2. Of course we know that the Hessian of ρ at P_1 is positive semi-definite. It is straightforward to calculate the Hessian of ρ′ ≡ ρ ∘ Φ^{−1} and to see that it is just the Hessian of ρ applied to vectors transformed under the Jacobian of Φ^{−1}. So of course ρ′ will have positive semi-definite Hessian. The other two results are verified using the same calculation.
Proposition 21
Let Ω be a smoothly bounded domain in R N . Let L be an invertible linear map on R N . Define Ω ′ = L(Ω). Then • Each convex boundary point of Ω is mapped to a convex boundary point of Ω ′ .
• Each strongly convex boundary point of Ω is mapped to a strongly convex boundary point of Ω ′ .
• Each boundary point of Ω that is convex of order 2k is mapped to a boundary point of Ω ′ that is convex of order 2k.
Maps which are not invertible tend to decrease the order of a convex point. An example will illustrate this idea. Let Ω′ = {x = (x_1, x_2) ∈ R² : x_1² + x_2⁴ < 1}, let Ω = {x : x_1² + x_2² < 1}, and let Φ(x_1, x_2) = (x_1, x_2|x_2|); the Jacobian of Φ is singular along {x_2 = 0}, and Φ maps Ω′ onto Ω. And we see that Ω is strongly convex (i.e., convex of order 2 at each boundary point) while Ω′ has boundary points that are convex of order 4. The points of order 4 are mapped by Φ to points of order 2.
Extreme Points
A point P ∈ ∂Ω is called an extreme point if, whenever a, b ∈ ∂Ω and P = (1 − λ)a + λb for some 0 ≤ λ ≤ 1 then a = b = P . It is easy to see that, on a convex domain, a point of strong convexity must be extreme, and a point that is convex of order 2k must be extreme. But convex points in general are not extreme.
EXAMPLE 10 Let Ω = {x = (x_1, x_2) ∈ R² : |x_1| < 1, |x_2| < 1}.
Then Ω is clearly convex. But any boundary point at which |x_1| and |x_2| are not both equal to 1 is not extreme.
Support Functions
Let Ω ⊆ R^N be a bounded, convex domain with C² boundary. If P ∈ ∂Ω then let T_P(∂Ω) be the tangent hyperplane to ∂Ω at P. We may take the outward unit normal at P to be the positive x_1 direction. Then the function L(x) = x_1 − P_1 is a linear function that is negative on Ω and positive on the other side of T_P(∂Ω). The function L is called a support function for Ω at P. Note that if we take the supremum of all support functions over all P ∈ ∂Ω then we obtain a defining function for Ω. The support function of course takes the value 0 at P. It may take the value 0 at other boundary points as well (for instance in the case of the domain {(x_1, x_2) : |x_1| < 1, |x_2| < 1}). But if Ω is convex and P ∈ ∂Ω is a point of convexity of finite order 2k then the support function will vanish on ∂Ω only at the point P. The same assertion holds when P is an extreme point of the boundary.
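A simple instance (our addition): for the unit ball \Omega = \{x : |x| < 1\} and P = (1, 0, \dots, 0), the support function is L(x) = x_1 - 1, which is negative on \Omega and vanishes on \partial\Omega only at P. Taking the supremum over all boundary points gives

\sup_{P \in \partial\Omega} \bigl(\langle x, \nu_P \rangle - 1\bigr) = |x| - 1,

a defining function for \Omega, consistent with the remark above.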
Bumping
One of the features that distinguishes a convex point of finite order from a convex point of infinite order is stability. The next example illustrates the point.
EXAMPLE 12
Let Ω = {x = (x_1, x_2) ∈ R² : |x_1| < 1, |x_2| < 1}. Let P be the boundary point (1/2, 1). Let U be a small open disc about P. Then there is no open domain Ω̃ such that (a) Ω̃ ⊇ Ω and Ω̃ ∋ P; (b) Ω̃ is convex; (c) Ω̃ \ U = Ω \ U. To see this assertion, assume not. Let x be a point of Ω̃ \ Ω̄. Let y be the point (0.9, 0.9) ∈ Ω. Then the segment connecting x with y will not lie in Ω̃.
The example shows that a flat point in the boundary of a convex domain cannot be perturbed while preserving convexity. But a point of finite order can be perturbed: Proposition 22 Let Ω ⊆ R^N be a bounded, convex domain with C^k boundary. Let P ∈ ∂Ω be a convex point of finite order m. Write Ω = {x ∈ R^N : ρ(x) < 0}. Let ε > 0. Then there is a perturbed domain Ω̃ = {x ∈ R^N : ρ̃(x) < 0} with C^k boundary such that (a) Ω̃ ⊇ Ω; (b) Ω̃ ∋ P; (c) ∂Ω̃ \ ∂Ω consists of points of finite order m; (d) the Hausdorff distance between ∂Ω̃ and ∂Ω is less than ε.
Lemma Let a > 0, let k be a positive integer, and let real numbers α_0, α_1, . . . , α_k, β_0, β_1, . . . , β_k, and γ_0 > max{α_0, β_0} be given. Then there is a concave-down polynomial p on [−a, a] with p(0) = γ_0, p^{(j)}(−a) = α_j, and p^{(j)}(a) = β_j for j = 0, 1, . . . , k. Here the exponents in parentheses are derivatives.
Proof of the Lemma: Define g_j(x) = x^{2j} and let h_j^{θ_j} be the function obtained from g_j by rotating the coordinates (x, y) through an angle of θ_j. Define p(x) = Σ_{j=1}^{q} c_j h_j^{θ_j}(x) for some positive integer q. If q is large enough (at least k + 1), then there will be more free parameters in the definition of p than there are constants α_j, β_j, and γ_0. So we may solve for the c_j and θ_j and thereby define p.
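The interpolation step can be made concrete numerically. The sketch below is our illustration, not the authors' construction: instead of the rotated even powers it simply solves the Hermite-type linear system for a polynomial matching the derivative data at ±a and the value γ_0 at 0, and it does not enforce the concavity that the lemma arranges; all data values are hypothetical.

import numpy as np

a, k = 1.0, 2
alpha = [-0.5, 1.0, -2.0]    # hypothetical phi^{(j)}(-a), j = 0..k
beta  = [-0.5, -1.0, -2.0]   # hypothetical phi^{(j)}(+a), j = 0..k
gamma0 = 0.1                 # desired bump height at x = 0

d = 2 * k + 2                # degree: 2k+3 conditions, 2k+3 coefficients

def deriv_row(x, j, d):
    # Row of the linear system: j-th derivative of sum_i c_i x^i at x.
    row = np.zeros(d + 1)
    for i in range(j, d + 1):
        coef = np.prod(np.arange(i, i - j, -1))   # i*(i-1)*...*(i-j+1)
        row[i] = coef * x ** (i - j)
    return row

rows, rhs = [], []
for j in range(k + 1):
    rows.append(deriv_row(-a, j, d)); rhs.append(alpha[j])
    rows.append(deriv_row(+a, j, d)); rhs.append(beta[j])
rows.append(deriv_row(0.0, 0, d)); rhs.append(gamma0)

c = np.linalg.solve(np.array(rows), np.array(rhs))
p = np.polynomial.Polynomial(c)
print(p(-a), p(a), p(0.0))   # reproduces alpha[0], beta[0], gamma0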
Proof of the Proposition: First let us consider the case N = 2. Fix P ∈ ∂Ω as given in the statement of the proposition. We may assume without loss of generality that P is the origin and the tangent line to ∂Ω at P is the x-axis.
We may further assume that Ω is so oriented that the boundary ∂Ω of Ω near P is the graph of a concave-down function ϕ.
Let δ > 0 be small and let x and y be the two boundary points at horizontal distance δ from P (situated, respectively, to the left and to the right of P). If δ is sufficiently small, then the angle between the tangent lines at x and at y will be less than π/6. Now we think of P = (0, 0), γ_0 = ε > 0, x = (−a, α_0), and y = (a, β_0). Further, we set α_j = φ^{(j)}(−a), j = 1, . . . , k, and β_j = φ^{(j)}(a), j = 1, . . . , k.
Then we may apply the lemma to obtain a concave-down polynomial p which agrees with φ to order k at the points of contact x and y. Thus the domain Ω̃ which has boundary given by y = p(x) for x ∈ [−a, a] and boundary coinciding with ∂Ω elsewhere (we are simply replacing the portion of ∂Ω which lies between x and y by the graph of p) will be a convex domain that bumps Ω, provided that the degree of p does not exceed the finite order of convexity m of ∂Ω near P. When the degree of p exceeds m, the graph y = p(x) may intersect ∂Ω between x and y, and therefore fail to provide a geometrically valid bump.
For higher dimensions, we proceed by slicing. Let P ∈ ∂Ω be of finite order m. Let T_P(∂Ω) be the tangent hyperplane to ∂Ω at P as usual. If v is a unit vector in T_P(∂Ω) and ν_P the unit outward normal vector to ∂Ω at P, then consider the 2-dimensional plane P_v spanned by v and ν_P. Then Ω_v ≡ P_v ∩ Ω is a 2-dimensional convex domain which is convex of order m at P ∈ ∂Ω_v. We may apply the two-dimensional perturbation result to this domain. We do so for each unit tangent vector v ∈ T_P(∂Ω), noting that the construction varies smoothly with the data vector v. The result is a smooth, perturbed domain Ω̃ as desired.
It is worth noting that the proof shows that, when we bump a piece of boundary that is convex of order m, we may take the bump to be convex of order 2 or 4 or any even degree up to and including m (which of course is even).
It is fortunate that the matter of bumping may be treated more or less heuristically in the present context. In several complex variables, bumping is a more profound and considerably more complicated matter (see, for instance [BHS]).
Concluding Remarks
We have attempted here to provide the analytic tools so that convexity can be used in works of geometric analysis. There are many other byways to be explored in this vein, and we hope to treat them at another time.
Anterior versus posterior approach laparoscopic radical cystectomy: a retrospective analysis
Objective To investigate the mortality, operation time, cystectomy time, and complications of anterior approach laparoscopic radical cystectomy (ALRC) in Asian males in comparison with posterior approach laparoscopic radical cystectomy (PLRC). Materials and methods One hundred forty-seven male patients with bladder cancer (cT2-3NxM0) who underwent laparoscopic radical cystectomy in our hospital from May 2011 to January 2018 were studied, including 68 patients in the PLRC group and 79 patients in the ALRC group. Baseline patient characteristics, operative and postoperative characteristics, and postoperative complications were retrospectively collected and compared between the two groups. Results Patients in the two groups exhibited similar baseline characteristics (p > 0.05). Compared with the PLRC group, the ALRC group required similar operation time (317.3 ± 40.9 vs 321.9 ± 37.5) and cystectomy time (64.8 ± 8.7 vs 65.6 ± 14.0). The ALRC group required less cystectomy time (67.8 ± 10.1 vs 77.4 ± 14.9) when patients' BMI was > 24 or when patients had a large total tumor and blood clot volume (> 160 cm³). Also, estimated blood loss (EBL) in the ALRC group was significantly less than in the PLRC group (477.8 ± 97.4 vs 550.4 ± 99.9). There were no significant differences between the PLRC and ALRC groups in postoperative characteristics and complications. Conclusion This study revealed that ALRC required less cystectomy time for patients with higher BMI or larger tumors, with less blood loss and similar perioperative complications. ALRC is recommended for male patients whose BMI is > 24 or whose total tumor and blood clot volume is > 160 cm³.
Introduction
Bladder cancer (BCa), characterized by a high risk of recurrence and mortality, is a serious health risk worldwide, with an estimated 430,000 new cases in 2012 [1]. BCa is common in older individuals and strongly associated with smoking exposure [2,3]. The incidence of BCa has been rising in China as the population ages, and BCa ranks first among urinary malignant tumors, with an incidence of 69.7/100,000 [4]. Accordingly, the treatment of bladder cancer is of great importance.
Radical cystectomy (RC) is the standard management of non-metastatic, muscle-invasive BCa [5]. Though open radical cystectomy (ORC) is a universally accepted gold standard for treating muscle-invasive, organ-confined BCa, minimally invasive surgical techniques, such as laparoscopic radical cystectomy (LRC) and robot-assisted laparoscopic radical cystectomy (RALRC), have been rapidly adopted for treating BCa. Compared with ORC, these new techniques with small incisions are considered to cause less blood loss and shorter hospital stay, while yielding equivalent oncologic and functional outcomes [6][7][8]. However, LRC has not become widespread because of the technical challenges of the procedure. The process of LRC is long and complicated, requiring detailed knowledge of pelvic anatomy. LRC is particularly challenging in male patients with higher BMI and larger tumors, even for an experienced surgical team, because the working space in the narrow pelvis is smaller and laparoscopic visualization is limited.
LRC techniques for obese male patients or male patients with a larger tumor and blood clot have rarely been studied. To achieve better laparoscopic visualization and a larger working space, the posterior approach laparoscopic radical cystectomy (PLRC) was modified in 2011, with bladder suspension performed during the laparoscopic radical cystectomy. To compare PLRC with anterior approach laparoscopic radical cystectomy (ALRC), we retrospectively analyzed 147 male patients who underwent PLRC or ALRC in our hospital from May 2011 to January 2018, and we report our experience with ALRC.
Patients
This study was approved by the ethics board of Xiangya Hospital, Central South University, in Changsha, China. All participants were informed on admission that their clinical information might be used in later clinical studies, and their written informed consent was given. Between May 2011 and January 2018, there were 157 male patients with high-grade bladder urothelial carcinoma in our hospital, of whom 72 and 85 were in the PLRC and ALRC groups respectively; we failed to follow up 4/72 patients in the PLRC group and 6/85 patients in the ALRC group. These patients were excluded, and the medical records of the remaining 147 patients were retrospectively collected and analyzed from our bladder cancer database. The two techniques were rotated from month to month. All patients had high-grade bladder urothelial carcinoma (cT2-3NxM0) in line with the TNM classification of the Union for International Cancer Control [9]. All patients had primary tumors, and none of the patients had received bladder surgery, radiotherapy, or chemotherapy before.
Patients were evaluated in accordance with the American Society of Anesthesiologists (ASA) physical status classification system. Each patient underwent preoperative examination including routine laboratory tests, echocardiography, chest radiography, lung function tests, and computerized tomography or magnetic resonance imaging of the urinary system. Common comorbidities were recorded, including diabetes mellitus, coronary artery disease, hypertension, chronic obstructive pulmonary disease, and other chronic diseases. Patients began a semi-liquid diet 48 h before surgery and a liquid diet 24 h before surgery, and underwent bowel preparation with an enema and oral sodium phosphate solution 12 h before surgery. All patients received a broad-spectrum systemic antibiotic intravenously during the induction of anesthesia. The same laparoscopic instruments and devices from KARL STORZ were used in both groups. All the operations and perioperative management were performed by the same experienced surgical team. Three surgeons (MC, LQ, and XZ) performed the operations: MC performed 30 and 35, LQ performed 22 and 24, and XZ performed 16 and 20 in the PLRC and ALRC groups respectively. All the surgeons are very experienced and perform more than 25 laparoscopic radical cystectomies annually. Follow-up studies were conducted with ultrasonography and CT scan at 3 and 9 months postoperatively.
Parameters and endpoint
The baseline patient characteristics included age, BMI, hemoglobin (Hb), total tumor and blood clot volume, comorbid conditions, and ASA score. Total tumor and blood clot volume was measured with CT scan. Operative characteristics included operation time, cystectomy time, estimated blood loss (EBL), and transfusion needed, while postoperative characteristics included Hb, time to liquid intake, time to exsufflation, and hospital stay after surgery. Operation time was defined as the time from the beginning to the end of anesthesia, and cystectomy time was defined as the time from the start of mobilization to complete removal of the bladder. Intraoperative complications included rectal injury and external iliac vein injury. Complications that occurred within 90 days after surgery were considered early complications, which included ileus, deep vein thrombosis, pyelonephritis, infection, obturator nerve paresis, and wound dehiscence [10]. Complications occurring 90 days or later after surgery were defined as late complications, including ileus, ureteral stricture, and incisional hernia [10].
Statistical analysis
Chi-square test and Student's t test were performed to compare categorical and parametric data respectively between the two groups. Parametric data are denoted as mean ± SD, and categorical data are presented as frequency (%). Linear regression analysis was conducted to assess the relationship between cystectomy time and total tumor and blood clot volume. Differences were considered significant when p < 0.05. Statistical analysis was conducted with the Statistical Package for the Social Sciences, version 22.0 (IBM Inc).
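A minimal sketch of these comparisons in Python with SciPy is given below; it is our illustration, not the authors' analysis code, and all numbers are made-up stand-ins since the study's raw data are not published.

from scipy import stats

# Hypothetical estimated-blood-loss values (mL) standing in for the two groups.
plrc_ebl = [550, 560, 540, 565, 530]
alrc_ebl = [480, 470, 485, 465, 490]
t_stat, p_val = stats.ttest_ind(plrc_ebl, alrc_ebl)   # Student's t test

# Hypothetical transfusion yes/no counts for a chi-square comparison.
transfusion = [[10, 58],    # PLRC: transfused / not transfused
               [4, 75]]     # ALRC: transfused / not transfused
chi2, p2, dof, expected = stats.chi2_contingency(transfusion)

print(p_val, p2)   # differences considered significant when p < 0.05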
Laparoscopic radical cystectomy
The basic laparoscopic radical cystectomy procedures were performed according to Campbell-Walsh Urology [11]. A five-port fan-shaped trans-peritoneal approach was employed for LRC. The camera port was placed just below the umbilicus. The remaining four ports were placed under endoscopic control after the establishment of the pneumoperitoneum. A dorsal supine position with a 20-25° Trendelenburg tilt was used to keep the bowel away from the pelvis during surgery. Surgery was performed by visualizing the pelvis and releasing adhesions of the sigmoid colon to find the anatomical landmarks in the pelvis, such as the obliterated umbilical arteries, obliterated urachus, spermatic cord, and vas deferens. Standard lymphadenectomy was performed in all cases [12]. After laparoscopic cystectomy, urinary diversion (Bricker operation or orthotopic neobladder) was performed according to patients' preferences.
Posterior and anterior approach laparoscopic radical cystectomy
The posterior approach laparoscopic radical cystectomy is the conventional LRC: the posterior plane was developed first, and the whole operation was performed according to Campbell-Walsh Urology [11]. The anterior approach laparoscopic radical cystectomy was performed as follows. The anterior plane was first developed distally towards the prostate. Both puboprostatic ligaments were mobilized and dissected, and the Santorini venous plexus was ligated. Subsequently, the bladder was suspended towards the abdominal wall with a 2-0 synthetic non-absorbable suture (Fig. 1a). Under laparoscopic observation, nearly half of the needle was punctured into the abdominal wall, 3 cm above the pubic symphysis and about 2 cm to the right of the midline (Fig. 1b). The needle was punctured through the right side of the bladder wall, then intermittently through the posterior and left sides, puncturing out at the symmetrical position of the abdominal wall (Fig. 1c-f). The needle should be kept from penetrating the bladder wall, and the suture should be limited to the superficial peritoneum. After the needle was brought out through the abdominal wall (Fig. 1g), the end of the suture was knotted with tension. The vesicorectal fossa could extend to the fullest after bladder suspension was completed (Fig. 1h). After bladder suspension, the posterior plane was developed. Next, the procedures for handling the posterior and lateral planes were the same as those of PLRC. After the posterior and lateral bladder attachments were released, the bladder was released from the anterior abdominal wall. The attachments of the prostatic apex to the pelvic floor were released, and the urethral catheter was removed. After the dissection of the Santorini venous plexus, the urethra was dissected.
Baseline patient characteristics
In total, 147 patients were included, comprising 68 patients who underwent PLRC and 79 who underwent ALRC. Patients in the PLRC and ALRC groups had comparable baseline characteristics, including age, BMI, Hb, total tumor and blood clot volume, comorbid conditions, and ASA score (Table 1).
Operative and postoperative characteristics
One patient in the PLRC group was converted to open surgery, and no patients in the ALRC group were converted to open surgery. Operative and postoperative characteristics of the two groups are shown in Table 2. Estimated blood loss (EBL) in the ALRC group was significantly lower than in the PLRC group (477.8 ± 97.4 vs 550.4 ± 99.9). Also, fewer patients in the ALRC group needed transfusion (Table 2). Hb after surgery in the ALRC group was a little higher than in the PLRC group (102.1 ± 11.6 vs 98.1 ± 12.5). There were no significant differences between the two groups in operation time, cystectomy time, time to liquid intake, time to exsufflation, and hospital stay after surgery. When patients were sub-grouped based on BMI, cystectomy time differed between sub-groups even though the overall operation time (317.3 ± 40.9 vs 321.9 ± 37.5) and cystectomy time (64.8 ± 8.7 vs 65.6 ± 14.0) were similar between the two groups. When BMI was less than 18.5, the PLRC group took less cystectomy time than the ALRC group (52.5 ± 6.5 vs 60.3 ± 6.2). When BMI ranged from 18.5 to 24, cystectomy time was similar between the two groups. It took longer cystectomy time for the PLRC group when BMI was greater than 24 (77.4 ± 14.9 vs 67.8 ± 10.1) (Table 3). Linear regression analysis was conducted to assess the relationship between cystectomy time and total tumor and blood clot volume. In both PLRC (y = 0.1787x + 34.66, r² = 0.8225, p < 0.0001) and ALRC (y = 0.0851x + 49.29, r² = 0.4459, p < 0.0001), cystectomy time was correlated significantly and positively with total tumor and blood clot volume. Both PLRC and ALRC required similar cystectomy time when total tumor and blood clot volume was about 160 cm³. Compared with ALRC, PLRC required less cystectomy time and operation time when total tumor and blood clot volume was less than 160 cm³, and PLRC required more cystectomy time and operation time when total tumor and blood clot volume was larger than 160 cm³ (Table 2, Fig. 2).
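As a quick arithmetic check (ours, not part of the paper), the crossover volume at which the two fitted lines meet follows directly from the reported coefficients:

# Intersection of the reported regression lines of cystectomy time (min)
# on total tumor and blood clot volume (cm^3):
#   PLRC: y = 0.1787x + 34.66    ALRC: y = 0.0851x + 49.29
x_cross = (49.29 - 34.66) / (0.1787 - 0.0851)
print(round(x_cross, 1))   # -> 156.3, consistent with the ~160 cm^3 cited above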
Intraoperative and postoperative complications
There were no deaths in either group during the perioperative period. Intraoperative and postoperative complications of the two groups are listed in Table 4. Two patients sustained rectal injury and another patient sustained external iliac vein injury in the PLRC group during surgery, and one patient sustained external iliac vein injury in the ALRC group. Ileus was the most common early postoperative complication, and there was no significant difference between the PLRC and ALRC groups. Also, there were no significant differences between the two groups in the incidence of other early complications, including deep vein thrombosis, pyelonephritis, infection, obturator nerve paresis, and wound dehiscence. Likewise, the incidence of late complications, including ileus, ureteral stricture, and incisional hernia, was comparable.
Discussion
With the rapid development of laparoscopic techniques, LRC has become an alternative to open radical cystectomy [13]. Besides, LRC is associated with less blood loss and shorter hospital stay, with no significant differences in oncologic outcomes [6][7][8]. The procedure of LRC is complicated, time-consuming, difficult to master, and challenging for laparoscopy-naive surgeons [14,15]. A large number of studies have focused on technical improvements of LRC, such as modifications of pelvic lymph node dissection or urinary diversion [16,17]. The difficulty of laparoscopic cystectomy increases when the working space and visual field are very limited in the pelvis. The present study assessed a technical modification of laparoscopic cystectomy designed to enlarge the laparoscopic visualization and working space. Compared with females, the pelvis of males is relatively narrower and longer [18], and the skeletons of Asians are generally smaller than those of Westerners [19]. These factors make it even more difficult for surgeons to perform LRC in Asian male patients. In particular, dissection of the seminal vesicle and the posterior surface of the prostate becomes very difficult owing to the limited vision and working space. To expose the vesicorectal fossa, the posterior plane is usually developed distally towards the prostate first, with the assistant's devices pulling the bladder upwards and stretching the sigmoid colon towards the head. Besides, a catheter is routinely indwelled preoperatively to keep the bladder empty and so maximize the working space. For Asian males, especially those with a large tumor and blood clot, however, these methods can still be insufficient. The bladder maintains a certain degree of filling, making it hard to produce an adequate working space and rendering the cystectomy more difficult.
In the ALRC procedure, the anterior plane was developed first, distally towards the prostate. Then the bladder was suspended using a 2-0 polyamide stitch towards the abdominal wall (Fig. 1) to reveal the vesicorectal fossa. Even in cases with a large tumor and blood clot, the visualization and working space were excellent. Therefore, the Denonvilliers' fascia was easily encountered at the level of the prostate-vesicular junction, and this fascia could be incised under direct vision, allowing the posterior plane to be developed further distally in a safer way compared with PLRC. The rectum was less likely to be injured when dissecting the tissue between the prostate and the anterior rectal surface under direct vision. Moreover, for those younger male patients who wanted to retain erectile function, laparoscopic nerve-sparing radical cystectomy was performed with the same principles as the open procedure, using an ultrasound scalpel to dissect tissue within the prostatic fascia to preserve the nerve fibers. In the ALRC group, the bladder suspension technique provided better surgical exposure and clearer anatomical landmarks, which resulted in much smoother operation procedures. According to the results, though the average cystectomy time was similar for all patients between the two groups (64.8 ± 8.7 vs 65.6 ± 14.0), the ALRC group required less time for cystectomy when patients' BMI was greater than 24 (67.8 ± 10.1 vs 77.4 ± 14.9). Also, when total tumor and blood clot volume was larger than 160 cm³, it took less time for the ALRC group (Fig. 2). EBL in the ALRC group was significantly lower than in the PLRC group (477.8 ± 97.4 vs 550.4 ± 99.9). Better visualization and a larger working space played important roles with regard to blood loss. Besides, rectal injury is a common complication, and it happens primarily because of the bloody working field between the prostate and the anterior rectal surface. Bladder suspension helped to expose the vesicorectal fossa fully and avoid rectal injury. No rectal injury occurred in the ALRC group (Table 4), while there were two cases in the PLRC group.
LRC should be limited to experienced and skilled laparoscopic urologists. The working space in the pelvis and proper laparoscopic visualization are vital for LRC. The bladder lies deep in the pelvis with a limited space. For female patients, the operative field may be fully revealed by adjusting the surgical posture to a hyperextension lithotomy position, while it remains hard to achieve good visualization for male patients. Meanwhile, no effective method has been reported in the literature to achieve better visualization other than traction with the assistants' devices. Here, a safe and effective alternative technique to achieve better visualization in laparoscopic cystectomy for male patients has been reported in detail.
Conclusion
To sum up, the technique of bladder suspension applied in ALRC is simple and effective in providing better visualization and a larger working space. Compared with PLRC, laparoscopic cystectomy takes less time for patients with greater BMI or larger total tumor and blood clot volume, and patients also have less blood loss. The technique of bladder suspension has proved suitable for Asian male BCa patients, especially those with greater BMI or a larger tumor and blood clot within the bladder.
Description of the HKU Chinese Word Segmentation System for Sighan Bakeoff 2005
In this paper, we describe in brief our system for the Second International Chinese Word Segmentation Bakeoff sponsored by the ACL-SIGHAN. We participated in all tracks at the bakeoff. The evaluation results show our system can achieve an F measure of 0.940-0.967 for different testing corpora.
Introduction
Word segmentation is very important for Chinese text processing, which aims at recognizing the implicit word boundaries in plain Chinese text. Over the past decades, great progress has been made in Chinese word segmentation technology. However, two difficulties still face us in developing a practical segmentation system for large open applications, i.e. the resolution of ambiguous segmentation and the identification of unknown or out-of-vocabulary (OOV) words.
In order to resolve the above two problems, we developed a purely statistical Chinese word segmentation system using a two-stage strategy. We participated in eight tracks at the Second International Chinese Word Segmentation Bakeoff sponsored by the ACL-SIGHAN, and tested our system on different testing corpora. The scored results show that our system is effective for most ambiguous segmentations and unknown words in Chinese text. In this paper, we summarize this work and give some analysis of the results. The rest of this paper is organized as follows: First, in Section 2, we describe in brief a two-stage strategy for Chinese word segmentation. Then, in Section 3, we give details about the settings or configuration of our system for different testing tracks, particularly the training data and the dictionaries used in our system. Finally, we report the results of our system at this bakeoff in Section 4, and give our conclusions on this work in Section 5.
Overview of the System
In practice, our system works in two major steps as follows: The first step is a process of known word segmentation, which aims to segment an input sequence of Chinese characters into a sequence of known words that are listed in the system dictionary. In our current system, we apply a known word bigram model to perform this task (Fu and Luke, 2003; Fu and Luke, 2005).
Actually, known word segmentation is a process of disambiguation. Given a Chinese character string C = c_1c_2...c_m, the bigram model selects, from among all the sequences of known words that cover C, the most likely word sequence Ŵ = argmax_W ∏_i P(w_i | w_{i−1}). The second step is actually a tagging task on the sequence of known words acquired in the first step, which intends to detect unknown words or out-of-vocabulary (OOV) words in the input. In this process, each known word yielded in the first step will be further assigned a proper tag that indicates whether the known word is an independent segmented word by itself or a beginning/middle/ending component of an OOV word. In order to improve our system, part-of-speech information is also introduced in some tracks such as the PKU open test and the AS open test. Furthermore, a lexicalized HMM tagger is developed to perform this task.
Given a sequence of known words W = w_1w_2...w_n, the lexicalized HMM tagger attempts to find the most appropriate sequence of tags T̂ = argmax_T P(T | W).
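To make the first-stage search concrete, here is a toy sketch of dictionary-based bigram segmentation by dynamic programming. It is our illustration, not the authors' code: the dictionary, the uniform bigram score, and the Latin-letter stand-ins for Chinese characters are all placeholders.

import math

DICT = {"A", "B", "C", "AB", "BC"}      # placeholder word list
P_BIGRAM = lambda w, prev: 0.5          # placeholder for P(w | prev)

def segment(chars, max_len=4):
    n = len(chars)
    # best[i] holds (log-probability, word list) for the best
    # segmentation of the prefix chars[:i].
    best = {0: (0.0, [])}
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            w = chars[j:i]
            if w in DICT and j in best:
                prev_lp, prev_words = best[j]
                prev_w = prev_words[-1] if prev_words else "<s>"
                lp = prev_lp + math.log(P_BIGRAM(w, prev_w))
                if i not in best or lp > best[i][0]:
                    best[i] = (lp, prev_words + [w])
    return best.get(n, (float("-inf"), None))[1]

print(segment("ABC"))   # -> ['A', 'BC'] under these uniform placeholder scores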
Settings for Different Tracks
Table 2 shows all the dictionaries used in our system for different tracks. In the closed test, the system dictionaries are derived automatically from the relevant training corpora for this bakeoff using the following three criteria: (1) Each character in the training corpus is taken as an independent entry and collected into the relevant system dictionary. (2) A standard Chinese word in the training corpus enters the relevant dictionary if it has four or fewer Chinese characters and, at the same time, its count of occurrences in the corpus is larger than a threshold. In our current system, the threshold is set to 10 for the AS closed test and 5 for the other closed tests. (3) Non-standard Chinese words such as numeral expressions, English words and punctuation are not included in the system dictionary if they consist of multiple characters.
As for the open test, some other dictionaries are applied. As can be seen from Table 2, part-of-speech information is involved in the PKU open test and the AS open test. Therefore, the training corpora for the two tests are tagged with part-of-speech, and entries in the relevant dictionaries are defined with their potential part-of-speech categories.
The Scored Results
In Bakeoff 2005, six measures are employed to score the performance of a word segmentation system, namely recall (R), precision (P), the evenly-weighted F-measure (F), the out-of-vocabulary (OOV) rate for the test corpus, the recall with respect to OOV words (R_OOV), and the recall with respect to in-vocabulary words (R_IV).
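For reference, the evenly-weighted F-measure combines precision and recall in the standard way (the formula below is the usual convention; it is not spelled out in the text):

F = \frac{2PR}{P + R}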
In order to achieve a consistent evaluation of our system in both the closed test and the open test, OOV is defined in this paper as the set of words in the test corpus that occur in neither the training corpus nor the system dictionary. Furthermore, two additional rates, OOV-C and OOV-D, are used to denote the out-of-vocabulary rate with respect to the training corpus and the out-of-vocabulary rate with respect to the system dictionary, respectively. At the same time, the precision with regard to in-vocabulary words (P_IV) and OOV words (P_OOV) is also computed in this paper to give a more complete evaluation of our system in unknown word identification.

Table 4. Scores for different tracks

The OOV rates and scores of our system are summarized in Table 3 and Table 4, respectively. The results show that our system can achieve an F-measure of 0.940-0.967 for different testing corpora, while the relevant OOV rates are from 0.023 to 0.074.
Although our system has achieved promising performance, there is still much to be done to improve it. Firstly, our system is purely statistics-based; it cannot yield correct segmentations for all non-standard words (NSWs), such as numeral expressions and English strings, in Chinese text. Secondly, known word segmentation and unknown word identification are taken as two independent stages in our system. This strategy is obviously simple and more easily applicable (Fu and Luke, 2003). Although the known word bigram model can partly resolve this problem, it is not always effective for some complicated strings that contain a mixture of ambiguities and unknown words, such as certain date expressions and sentence fragments.
Conclusions
This paper presents a two-stage statistical word segmentation system for Chinese. We participated in all testing tracks at the second Sighan bakeoff. The scored results show that our system can achieve an F-measure of 0.940-0.967 as a whole for different corpora. This indicates that the proposed system is effective for most ambiguous segmentations and unknown words in Chinese text. For future work, we hope to improve our system by incorporating some pattern rules to handle complicated ambiguous fragments and non-standard words in Chinese text.
Factors influencing truth-telling by healthcare providers to terminally ill cancer patients at Ocean Road Cancer Institute in Dar-es-Salaam, Tanzania
Background. Widely accepted international clinical ethics standards and Tanzania's national cancer treatment guidelines mandate that healthcare providers give patients the information they want or need in an understandable way. However, adherence to these standards is low, particularly in developing countries. In Tanzania, little is known regarding the views and practice of disclosure of diagnosis and prognosis to cancer patients. Objective. To examine the factors influencing truth-telling by healthcare providers to terminally ill cancer patients at Ocean Road Cancer Institute (ORCI) in Dar-es-Salaam, Tanzania. Methods. We conducted semi-structured interviews with a purposive sample of healthcare providers (n=13) and terminally ill cancer patients (n=8) in English and Swahili. The interviews were recorded, transcribed verbatim and translated into English. Results. The study findings show an unsatisfactory experience of truth-telling according to international clinical standards. Many healthcare providers emphasised barriers including: limited communication skills; limited time; a high volume of patients; and lack of privacy for communication. Patients' preferences and readiness to receive and accept information, as well as family collusion, were identified as barriers, whereas communalism was identified as a facilitator to truth-telling. Conclusion. We identified several barriers and facilitators of truth-telling that may be targeted in order to promote high-quality communication at ORCI. Our results support advocacy for improvements such as: training for clinical communication skills and information sharing; space designated to enhance privacy; deliberate efforts to employ enough providers to care for the increasing volume of patients; and counselling sessions with patients and family members prior to clinical information disclosure.
Introduction

Healthcare providers worldwide face difficult decisions about whether to tell terminally ill cancer patients the truth or not. [15] In Tanzania, little is known about clinical information disclosure to cancer patients, owing to the limited evidence available regarding factors influencing the telling or not telling of the truth. However, the culture of Tanzania does not allow such information to be given directly to patients, to protect cancer patients from losing hope and experiencing depression. [16] The aim of this study was to explore the factors influencing truth-telling by healthcare providers to terminally ill cancer patients at the Ocean Road Cancer Institute in Dar-es-Salaam, Tanzania. Specifically, the study aimed to examine: (i) healthcare provider-related factors; (ii) patient-related factors; and (iii) sociocultural factors influencing truth-telling in cancer treatment settings.
Methods
This study was conducted at the Ocean Road Cancer Institute (ORCI) in Dar-es-Salaam, Tanzania. The hospital was established in 1996 as a national referral centre for cancer treatment, and provides free care to 5 400 new patients per year, including services such as radiotherapy and chemotherapy. The authors purposively selected the area because it is the only specialised facility for cancer treatment in Tanzania. Given its nature, the ORCI serves many cancer patients and hosts a number of specialised cancer healthcare providers. It is located in Dar es Salaam, the largest city in Tanzania. The ORCI provides cancer care services for all types of cancer to both adult outpatients and inpatients. Approximately 257 patients are treated on a daily basis. It has a total number of 19 specialists, of whom 11 are females and 8 are males.
Data were extracted from two purposively sampled groups of key informants. [17] Healthcare providers included oncologists, palliative care specialists and nurses (n=13). All healthcare providers who participated in this study were purposively selected because they were knowledgeable, highly experienced with at least 2 years of work experience in the oncology field, and directly involved in treating terminally ill cancer patients. Their names and contact details were obtained from the head of the research and training unit. Some of them were contacted in person, and others were telephoned to ask for their participation. Those who agreed were free to decide on the time, date and place of the interview.
We selected patients who were at a terminal stage, with a life expectancy of ≤6 months, admitted to a special room and receiving only palliative treatment, but who could still communicate (n=8). The reason for including patients in this study was to find out exactly what they need to know about their illness, to what extent (how much), and when. During patient selection, a list of names was obtained from the palliative care specialists and nurses in charge, with permission from the heads of the wards. On each day of the interviews, patients were approached by the researcher or an assistant and were asked to participate voluntarily in the study. Each group was assigned its own interview guide. The interview guides were pilot tested at Muhimbili National Hospital (MNH) with healthcare providers and patients, and questions were rephrased in line with the challenges encountered during the piloting phase. Each interview guide contained open-ended questions, with subsequent probing questions. The questions were tailored to answer the three study objectives regarding factors that influence truth-telling by healthcare providers to terminally ill cancer patients at the ORCI. Interviews were conducted by the researchers with help from two research assistants adequately qualified in caring for terminally ill cancer patients. The interviews were conducted in a separate, isolated room so as to maintain the privacy of key informants and the confidentiality of the information they shared. Moreover, the researcher ensured that questions were fully exhausted by participants, and data collection stopped when all the themes became saturated. With permission from key informants, interviews were audio-recorded. Furthermore, field notes were taken for each interview to facilitate data analysis.
Data analysis
Data transcription and translation were done by the researchers with help from two research assistants. Thematic analysis using an inductive approach was used to analyse the collected information. The use of an inductive approach aims to ensure that the emerging themes are strongly linked to the data themselves, and that they are not imposed by the researcher. The first author coded the data, and all authors participated in analysing the data manually through reading and re-reading the transcripts to ensure a clear and general understanding of the emerging concepts. Themes were then sought out from the general lists of the created codes ( Table 1). The whole process of analysis was iterative. Further scrutiny was carried out by returning to the interview transcripts to identify their similarities and differences, to ensure that the identified themes formed coherent patterns. Finally, we clustered the subthemes and the themes, and presented them with supporting quotations that describe the meaning underpinning each theme.
Ethical considerations
We sought and were granted ethical clearance from Muhimbili University of Health and Allied Sciences (MUHAS) Ethical Review Committee to conduct the study (ref. no. DA.287/298/01A/07). Permission to conduct the research was also requested from the ORCI, where the study participants work. Written consent was obtained from participants after explaining to them the study objectives, methodology and benefits. Participants were also assured of the confidentiality of all the information they disclosed to the researchers.
Results
Upon analysis of the study findings, eight subthemes were generated and grouped into three main themes, namely healthcare provider-related, patient-related and sociocultural factors influencing truth-telling. The subthemes generated under healthcare providers were: limited communication skills; limited time; lack of privacy for communication; and high volume of patients. The subthemes under patients were: patients' preferences; and readiness to receive and accept information. Communalism and family collusion were the subthemes under the sociocultural theme.
Limited communication skills
Truth-telling to patients was generally perceived by most healthcare providers as the key role in the oncology field. The main issue was how to start communicating the bad news to patients. The task tends to place healthcare providers in a dilemma about whether to communicate the crucial information or not. It was further reported that truth-telling should never be reduced to merely telling patients that they have cancer. Participants said that it should go beyond this, as a quotation from one interviewee illustrates: 'Here we have I think a big gap in truth-telling due to so many aspects like communication skills. We may be able to provide symptoms managements to a certain good number of patients, but what is happening afterward is a big question mark that should be addressed. ' MD 010, oncologist
Limited time
Limited time was one of the system-level barriers to information disclosure. The healthcare providers interviewed for this study reported that they have minimal time available to afford a thorough discussion with a patient. Time is always not a doctors' friend, as claimed by one of the participants, who cited it as one of the reasons why healthcare providers sometimes receive patients who are not aware of their diagnosis: 'A doctor may not have enough time to tell the truth to a patient due to limited time. You find yourself with so many patients to attend to in a day such that you give just a fraction of a minute to each patient of which it is not enough to digest all of the information. ' MD 008, oncologist
Lack of privacy for communication
It was reported by healthcare providers that providing complete information to patients requires a conducive setting where each party has the freedom of sharing and listening. The existing settings of the hospital do not provide rooms that guarantee adequate privacy to facilitate effective disclosure of medical information. The participants reported being ethically failed by a lack of hospital infrastructure such as private offices, rooms for private counselling and a suitable place for information sharing: 'First of all, the issue of breaking cancer information is crucial. You need to have a physical support like a private office, room and a good place to talk whereby a patient would feel comfortable and able to hear and cope with the news. ' MD 006, oncologist
High volume of patients
A high volume of cancer patients who come to seek health services on a daily basis was cited as a barrier to truth-telling at the hospital. It was reported by participants that the ever-increasing number of cancer patients relative to the availability of healthcare providers tends to limit adequate time available for the comprehensive disclosure of medical information to patients: 'Here patients are so many to the extent in the sense that those in charge of giving the information do not have enough time to sit with a patient and share the information wholly. ' MD 009, oncologist
Patients' preferences about information
Key findings further showed that patient's preferences about access to information also influence the truth-telling process. This was cited as a barrier to truth-telling because imposing information on an unprepared and unwilling patient is not an act of respect for autonomy. Both healthcare providers and patients interviewed for this study reported the preferences of patients regarding medical information as an important factor that featured in the truth-telling process in oncology, as one participant made clear: 'From my experience, patients also influence the truth-telling process in oncology. This is because some of them do not want to hear certain kinds of information, especially that which bears a negative element in it. Some prefer to hear information that brings healing hope. ' MD 011, oncologist This was also confirmed by some patients who were interviewed about the type of information that they would want to receive from healthcare providers. One of the participants reported that he would like to receive good news on how his condition was improving: 'I need to receive good news, that I am recovering, that my condition is improving, that I am getting better and going home tomorrow. This is the kind of information I would like to hear from healthcare providers. I do not like to hear the news that break my heart, news which makes me become afraid of and put me into fear. ' PT 06, patient
Patients' readiness to receive information
Truth-telling was said to be influenced by the psychological status and ability of the patient to handle the information. Some of the study participants interviewed for this study said that patients are sometimes not ready to receive medical information. One of the healthcare providers said : 'Truth-telling depends on the readiness of a patient. You have to look at the patient's facial expression and conclude that sometimes patients cannot handle information comprehensively. In this regard as an oncologist, you are obliged to break the news slowly, little by little, while monitoring the psychological readiness of your patient. ' MD 008, oncologist Another patient interviewed for this study acknowledged the influence of his psychological state in receiving medical information, saying: 'Sometimes I feel not ready for my medical information, because I believe there is bad news and I do not want to hear such news that break my heart into pieces. I am ready to hear the information that ARTICLE give me hope, encourage my faith, that the medications will cure me. ' P02, patient
Communalism
The findings of this study revealed the influence of cultural communication style as a barrier to truth-telling by healthcare providers to terminally ill cancer patients. It was reported that when the issue of discussing medical information arises, in most cases, many African cultures, including Tanzanian, emphasise communalism rather than the interests of the individual patient. This encourages withholding information from the patient. One participant said: 'We usually discuss the information with family members in my culture, so as to protect the patients, because if the patient knows it is likely that he or she will lose hope, experience depression, fear, and eventually die. ' MD 010, oncologist
Family collusion
The majority of healthcare providers who were interviewed for this study reported the influence of family collusion in providing medical information to their cancer patients. The key findings were further that family members often prevent healthcare providers from sharing information with patients. Most patients interviewed also cited the role of their families in receiving medical information. One of the healthcare provider participants said: 'Family members can interfere and prevent you from communicating the news directly to the patients. Many times, I get pressure from family members who would approach me asking not to disclose the information to their patients. We have to respect family members in disclosing the information. ' MD 010, oncologist
Discussion
Healthcare provider participants revealed several factors that influence their disclosure of medical information to terminally ill cancer patients. This discussion is based on the objectives of the study, which aimed to examine the healthcare provider-related, patient-related and sociocultural factors influencing truth-telling to terminally ill cancer patients. The discussion that follows is based on the findings obtained from the 21 participants of this study (13 healthcare providers and 8 terminally ill cancer patients).
Provider-level factors
Many participants emphasised the barriers discussed in this section.
Limited communication skills
The findings revealed limited communication skills among healthcare providers in discussing medical information with cancer patients. The findings show how information about cancer diagnosis and treatment overwhelms healthcare providers in terms of their own feelings and emotions as they communicate bad news to patients. The fear of a patient's reaction to the news is one factor that limits and weighs down a provider's decision as to whether to discuss clinical information. It puts healthcare providers into a dilemma regarding whether to communicate the information or not because, as has been discussed, telling a patient that his or her cancer is at a late stage is a difficult task, since it tends to cause feelings of terror. [18] This is due to health practitioners' limited communication skills and information-sharing methods, as well as the complex nature of the disease itself. [19] This implies that, as a result of reflecting on patients' reactions to cancer information, healthcare providers find that they have little that they can discuss with patients, consequently making truth-telling an even harder task for providers to undertake. Healthcare providers' limited communication skills in discussing clinical information with their patients in the oncology field have been found to compromise the entire truth-telling and general information-sharing process with their patients. [20]

Limited time

This study showed that having insufficient time hinders truth-telling to patients. The interviewed healthcare providers reported themselves as having minimal time available to afford a thorough discussion with a patient. [21] Each patient is given a very short amount of time, which is never enough to digest all the information one may receive. This implies that truth-telling to patients requires sufficient time to discuss and share medical information. The impact of time in healthcare has also been documented elsewhere, and is said to greatly affect communication. [22]
Lack of privacy for communication
Our study shows that a lack of adequate facilities, such as offices or private rooms, hinders information sharing among healthcare providers and patients. It was reported by most of the interviewed healthcare providers that disclosing complete information to patients requires a conducive setting, where each party involved in the truth-telling process has freedom to share and to listen carefully. This is not the case in the existing settings, which do not provide enough room for privacy to allow for the effective disclosure of medical information. Participants admitted being ethically failed by the lack of infrastructure conducive to information sharing. This implies that deciding to disclose information in inappropriate settings violates privacy, which discourages practitioners from sharing information honestly with patients. A previous study has shown that protection of patient information is part of core professional ethics, and it is clearly understood that such protection is only possible if there are adequate facilities to ensure privacy at health facilities. [23]
High volume of patients
This study showed that the high volume of cancer patients attended to at the ORCI compromises the truth-telling process. Most healthcare providers reported that they are obliged to attend to as many patients as possible on a daily basis, because of the ever-increasing number of patients reporting at the hospital. The low ratio of healthcare providers to patients tends to limit adequate disclosure of clinical information, due to the time pressure exerted on the providers. The lower the number of patients, the more time would be available for information disclosure. Healthcare providers furthermore reported that it would be unethical to spend more time on one patient than on others while so many more, some having travelled a long distance, are waiting for the service. This implies that an increase in the number of patients on a waiting list results in a decrease in the amount of time available to providers to impart information fully, hence the information is only partially shared. Another study documented the fact that a high increase in the number of new patient cases per month contributed to the majority of healthcare providers becoming reluctant with regard to truth-telling, owing to their heavy workload. [24] In addition, healthcare providers know the importance of truth-telling to patients: that it is a basic moral rule in the healthcare profession. Therefore, they understand that not telling the truth may jeopardise existing staff-patient trust, and lead to a failure of health professionals to respect cancer patients as autonomous individuals. This undermines the patient's capacity for autonomy, and deprives terminally ill patients of their right to a 'good death'. [25]
Patient-level factors
Patients' preferences and readiness to receive information were identified as both barriers and facilitators to truth-telling, as discussed in the following section.
Patients' preferences about information
It was found that some patients are willing to hear information that brings them hope of healing, while others would prefer not to hear any information about their illness. These preferences affect the truth-telling process. Healthcare providers reported that they respect these preferences owing to the widely recognised and fundamental principle of medical autonomy regarding information, which allows patients to choose the type of information they prefer to access. It is worth noting that imposing information on an unprepared and unwilling patient is not an act of respect for autonomy. [26] This implies that, in the effort to impart medical information, healthcare providers have a role to play in educating patients, especially those who tend toward resistance and refuse to receive details about their ailments and treatment options. A study [22] documented patients' preferences regarding information as a barrier to truth-telling in the oncology field, as they contribute to the variability of attitudes in cancer patients.
Patients' readiness to receive information
This study revealed that the readiness of a patient to receive medical information influences truth-telling. Most participants interviewed for the research said that truth-telling depends on the psychological readiness of a cancer patient. Healthcare providers are not willing to share too much medical information with patients if they feel that the patient would not be able to handle it. Opting to share information with patients who are psychologically unstable and disturbed may lead such patients to lose hope. This implies that a healthcare provider's decision whether to tell the truth or not is sometimes determined by the psychological condition of the patient they are attending to. This observation was also supported by one of the interviewed patients, who acknowledged his unwillingness to receive any bad news. The patient admitted to only waiting for information that gives hope, encouragement and the promise of an absolute cure for his ailment. Another study found that under grave circumstances, healthcare providers may be forced to deceive patients so as to instill hope in them, adhering to the ethical principle of beneficence. [27] However, a further study recommends that deception generally should not be utilised in everyday medical practice because it disrespects the autonomy of the patient. [28] In medical practice, providers are morally obliged to follow the doctrines of autonomy and beneficence. However, some providers may mistakenly use benevolent deception because it honours the principle of beneficence.
They may lie to the patient allegedly for his or her benefit, especially when believing that telling the actual truth would cause more harm than benefit. [7] It has been mentioned above that while the majority of providers in both developed and developing countries tell the truth about a patient's health condition, the assumption that truth-telling is always beneficial to patients is sometimes criticised. [29] Truth-telling can have adverse effects on a patient's health, including psychological effects, as some patients may be overwhelmed by the information that they are in the final moments of their lives, [30] meaning that the decision to accept the truth can hinge on the degree of personal responsibility that health professionals delegate to patients. It is argued that truth-telling can amount to the avoidance of responsibility by providers, who are expected to involve and guide cancer patients in making decisions about their health. Some physicians present the available options to cancer patients and expect them to choose, because this avoids blame from patients and their relatives if something goes wrong. [29]
Sociocultural factors
Communalism and family collusion were identified as a facilitator and barrier to truth-telling, respectively, as discussed below.
Communalism
Cultural communication style was identified as an influence, as Tanzanian culture embraces communal concern rather than individual interests, something that may encourage the withholding of information from a patient by a provider. In this study, it was found that the culture does not allow much information to be given directly to patients, out of the desire to protect patients from depression and loss of hope, which could lead to premature death. The practice can influence coping outcomes, including the ability to absorb the shocks of a cancer diagnosis and prognosis. This implies that society, through communication style, is capable of influencing therapeutic outcomes, and this can result in large differences in medical care services, as well as in the ability of the society to absorb shocks in terms of cancer information. [31] The influence of cultural communication style, which can shape people's perceptions of and attitudes towards cancer treatments and information sharing, including truth-telling, has also been acknowledged in another study. [32]

Family collusion

Our study shows that family members normally request that providers do not share information with their patients. It was revealed that patients allow this interference, as well as the involvement of family members in the truth-telling and decision-making process on their behalf. This shows that family members may play a major role in imparting information and in making medical decisions for patients. It also entails that family members can influence the medical relationship, including truth-telling, between a provider and a patient. [18] Other studies have also reported the influence of family members on information sharing, including truth-telling, in the field of oncology. It has been reported that healthcare providers often find themselves in a quandary as to whom, when and how to tell. [8,33] Such a dilemma is complex in the sense that opting to tell the family members amounts to violating the ethical principle of patient autonomy, while lying is wrong and immoral, and disrespecting the person's autonomy is not ethical. [34] Despite family-related barriers to truth-telling, it is worth noting that one of the key values of being truthful is associated with respect for the patient as a person who is able to make the correct decisions that benefit their overall health. Furthermore, if patients do not understand the truth, it may lead to a failure of healthcare professionals to respect cancer patients as autonomous individuals. [34]
Study limitations and strengths
The lack of inter-hospital comparison, and the selection of only hospital-based respondents without a household-based selection of respondents, may mean that the views of dissatisfied patients who have stopped seeking medical care in public hospitals such as the ORCI were missed. The possibility of social desirability bias in the in-depth interviews with healthcare providers may also limit this study. However, the adequate sample size and the use of respondents from multiple categories (oncologists, nurses and patients) are the major strengths of this qualitative study. We believe that this study provides valuable insights into the nature of truth-telling to terminally ill cancer patients by healthcare providers.
Conclusions
The truth-telling process in the oncology field goes beyond merely providing information related to the patient's diagnosis and prognosis.
The data in this study support advocacy for improvements in aspects such as training in clinical communication skills and information sharing, space designated to enhance privacy, deliberate efforts to employ enough healthcare providers to care for the increasing volume of patients, and the need to hold counselling sessions with patients and family members prior to clinical information disclosure. Since this study was conducted in just one hospital, further studies could be conducted involving other hospitals and healthcare institutions in an effort to generate a better understanding of the influencing factors of truth-telling nationwide in Tanzania.
An Engineering Approach to the Scientific Method
Part of the primary motto of this journal is to provide engineering approaches to enhance the power of the scientific method. The engineering approach is essentially the engineering design process. The scientific method is similar, but is hypothesis driven rather than design driven. In the scientific method a hypothesis is generated as a potential answer to an important question. An experiment is conducted to test, and thereby prove or disprove, the hypothesis. The results of the experiment are discussed in the context of the original question and whether the hypothesis was proven or not. The results are also compared to other studies looking at the same or similar questions. The study is therefore assessed on whether it follows from previous studies, what the ramifications of the results are for the original question, and what needs to be examined next: further testing of the hypothesis, or testing of a different one.
Again, the design process is similar, but different in some important ways. Instead of a question there is a problem or need. Instead of a hypothesis there are design constraints specifying what the solution has to be able to do. Experiments are done to determine whether the solution meets the design constraint(s), which is similar to testing a hypothesis; in fact a design constraint can be written as a hypothesis. The big difference is that design constraints are quantitative and hypotheses typically are not. Hypotheses are typically proven or not proven based on statistics and whether something leads to a statistically significant difference. However, a statistical difference does not mean the difference is significant enough to matter; hypotheses do not normally quantify how big a difference is required.
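To make the distinction concrete, the sketch below (ours, not from the editorial; all numbers and thresholds are invented for illustration) shows a comparison that passes a conventional significance test yet fails a quantitative design constraint:

```python
# Illustrative only: statistical significance vs. a quantitative constraint.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=100.0, scale=5.0, size=500)  # current treatment, load in N
treated = rng.normal(loc=101.0, scale=5.0, size=500)  # new treatment, load in N

t, p = stats.ttest_ind(treated, control)
print(f"p-value = {p:.4f}")  # well below 0.05: "statistically significant"

# A design constraint is quantitative, e.g. the repaired tissue must carry
# at least 120 N (a hypothetical value, including a factor of safety).
REQUIRED_LOAD = 120.0
print(f"mean load = {treated.mean():.1f} N, "
      f"meets {REQUIRED_LOAD:.0f} N constraint: {treated.mean() >= REQUIRED_LOAD}")
```

Here the 1 N improvement is statistically detectable with a large sample, yet it leaves the solution far from the performance the clinical problem actually requires.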
In the Bioengineering world, the problem is typically a clinical medicine one. It could be that with current treatment the healing is too slow, or that the device or new tissue is not strong enough, or otherwise deficient in some property. The new treatment or device either has to augment the tissue or stimulate the tissue to meet a clinical performance requirement (e.g. handle a certain load, which will be used as the running example here to help illustrate the point). A design constraint would be quantitative and specify how much load it needs to handle (plus normally some factor of safety). A hypothesis, by contrast, would be: does this new treatment lead to a significant increase in load-carrying ability compared to current treatment? It would not assure that the treatment meets the necessary clinical performance requirement.
Implicit in the problem is that it is a significant problem. Further, implicit in the design constraint is that meeting it will have a significant impact on the problem. This clinical significance is the major difference between the scientific method and the design process, and is implied by the first word in the journal title (significances). Currently, medical schools have determined that one of the most important skills a clinician should have is the ability to apply math and science to physiological systems (to clinical medicine once they graduate), which is the definition of bioengineering. If asked, they would describe this skill as an understanding of the scientific method, used to choose the best treatment for a given situation. The skill they are actually looking for, however, is the ability to use the design process (at least the first few steps) in order to choose a treatment that meets the required clinical performance constraints (or comes as close as possible). They want clinicians to "think like engineers" but would never admit that (I have tried).
Clinicians should not choose one procedure over another just because it produces statistically significant improvements. First, a significant problem has to be established. Speeding up the healing or increasing the strength of something that is already clinically adequate is not beneficial (unless it significantly reduces cost or gives a significant additional benefit). Then procedures are chosen if they can meet the clinical performance design constraint(s) that current treatments cannot, or at least make a clinically significant improvement, getting closer to the desired clinical performance.
So a research paper that uses bioengineering approaches to enhance the scientific method requires discussion of clinical impact (the significance of the bioengineering advancement). There are multiple ways this can be looked at: the desired clinical performance improvement, the potential effect of a solution on clinical performance, or the benefit (cost, time, resources) to critical stakeholders (patients or healthcare providers). All three are potential design constraints, with the third also a commercializability concern. It is of course not necessary to discuss the commercializability of the studied treatment; however, it will not have any clinical impact without making it to the marketplace. At the least, a paper should place the research in the continuum of steps toward the development of a marketable product.
This is important for justification; justification of the need for the study, the approach used, and the significance of the results. It does depend, however, to a degree on the type of study and where in the design process it fits. An applied paper should be design driven, even if it is written as hypothesis driven. There should therefore be design constraints. The study should explain where it fits in meeting these design constraints. Design constraints can be broken down into different types. There are "have to(s)" and "would like to(s)", which can be clinical performance design constraints as well as pre-clinical design constraints.
Each study should also have its own design constraint(s): what it is trying to show relative to the design constraint(s), with emphasis on the clinical performance design constraint(s). The significance of the study then becomes how the limitations of the methodology affect the ability to meet these study design constraints, and how this relates to the clinical performance design constraints necessary to solve the problem (or at least make a significant clinical impact).
Too often, even in applied journals, papers fall short with respect to the engineering design process, which affects their ability to justify the study, the approach, and/or the significance of the study, because they are hypothesis driven. The first shortfall is in establishing a problem: both how far short of the needed clinical parameters current treatments fall, and how significant a problem this is. Most papers will describe the clinical problem, but not what is lacking with current treatments and what significant clinical issue this deficit causes.
The next place is in specifying the design constraint(s), so any proposed solution can be judged on whether it meets all the "have to" design constraints as well as any "would like to" design constraints. The comparison of solutions should be based on the clinical significance of meeting each of their "would like to" design constraints, as well as their associated costs and risks, since any solution not meeting a "have to" design constraint can be eliminated. Many of the "would like to" design constraints amount to meeting the "have to" design constraints above the minimal level. Assessing these improvement "would like to" design constraints again requires determining how big an additional difference in clinical performance would actually matter, as well as how big an improvement on the "have to" design constraint would lead to that level of difference in clinical performance. For example, the desired clinical performance may not be a single "have to" design constraint, but may be broken into a "have to" design constraint plus one or more "would like to" design constraints. The "have to" could be (going back to the load-carrying example) the minimum improvement in load-carrying ability that makes a significant clinical impact, with the "would like to" design constraints one or more further improvements in load-carrying ability that also make a significant clinical impact. Part of the engineering design process is optimization: making improvements to meet more "would like to" design constraints.
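As a minimal illustration of this selection logic (our sketch; the constraint names and numbers are hypothetical), "have to" constraints can be encoded as hard filters and "would like to" constraints as a ranking criterion:

```python
# Hypothetical sketch of the selection logic described above: eliminate
# candidates that fail any "have to" constraint, then rank the survivors
# by how many "would like to" constraints they meet.
have_to = {"load_120N": lambda s: s["load_N"] >= 120.0}          # invented threshold
would_like = {
    "load_150N": lambda s: s["load_N"] >= 150.0,                 # invented
    "low_cost":  lambda s: s["cost_usd"] <= 5000.0,              # invented
}

candidates = [
    {"name": "treatment A", "load_N": 125.0, "cost_usd": 4000.0},
    {"name": "treatment B", "load_N": 110.0, "cost_usd": 1000.0},  # fails "have to"
    {"name": "treatment C", "load_N": 155.0, "cost_usd": 8000.0},
]

viable = [s for s in candidates if all(c(s) for c in have_to.values())]
ranked = sorted(viable,
                key=lambda s: sum(c(s) for c in would_like.values()),
                reverse=True)
for s in ranked:
    met = [name for name, c in would_like.items() if c(s)]
    print(s["name"], "meets would-like constraints:", met)
```

In practice the ranking would also weigh the clinical significance, cost and risk of each "would like to" constraint, as the text notes, rather than a simple count.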
Determining and quantifying design constraints is normally an iterative process. The pre-clinical constraints are what we believe the design needs to be or do in order to meet the clinical performance design constraints, which we are most likely not testing, unless this is a clinical study. So the desired pre-clinical performance design constraints cannot actually be determined until the relationship between pre-clinical performance and clinical performance is known. A study may look at just feasibility of meeting the pre-clinical design constraint(s), which is believed to be necessary to meet the "have to" clinical performance design constraint(s).
So although it is unlikely that the design process is complete, where the study fits into the process has to be justified. As a minimum, the specific improvement in clinical performance should be specified (as quantitatively as possible), as well as the believed relationship between the pre-clinical performance design constraint(s) the study is focusing on and the clinical performance design constraints (as quantitatively as possible). In the discussion, what the study showed relative to the design process should be explained, as well as, at least in general, what future studies are needed to determine whether the proposed solution could meet the clinical performance design constraint(s). Too often a paper will claim it showed the potential of the solution to be used in a clinical situation without identifying the problem with current solutions, the improvement in clinical performance desired, or what additional studies would be needed to show the solution could actually meet the clinical performance design constraint(s).
Transfer of magnetic anisotropy in epitaxial Co/NiO/Fe trilayers
The magnetic properties of a Co(10 Å)/NiO(40 Å)/Fe trilayer epitaxially grown on a W(110) substrate were investigated using x-ray magnetic linear dichroism (XMLD) and x-ray magnetic circular dichroism (XMCD). We showed that the magnetic anisotropy of the Fe film, which can be controlled by a thickness-driven spin reorientation transition, is transferred via interfacial exchange coupling not only to the NiO layer but further to the ferromagnetic Co overlayer as well. Similarly, a temperature-driven spin reorientation of the Fe sublayer induces a reorientation of the NiO spin orientation and a simultaneous switching of the Co magnetization direction. Finally, by element-specific XMCD and XMLD magnetic hysteresis loop measurements we proved that the external-magnetic-field-driven reorientations of the Fe and Co magnetizations, as well as of the NiO Néel vector, are strictly correlated, and that the magnetic anisotropy fields of the Fe and Co sublayers are identical despite the different crystal structures.
Magnetic anisotropy (MA) is a feature of magnetic materials that plays a crucial role in technological applications. In thin films MA can be determined by magnetocrystalline, dipolar or magneto-elastic effects [1]. Additional contributions to MA appear in thin-film heterostructures via so-called magnetic proximity effects, for which the presence of neighboring magnetic and/or non-magnetic layers can induce interface anisotropy [2], interlayer exchange coupling [3-5] or exchange anisotropy [6]. The latter describes the interaction between two magnetically ordered layers, in particular the coupling at the interface between a ferromagnet (FM) and an antiferromagnet (AFM), which has been widely utilized in spin valves [7] and magnetic tunnel junctions [8]. Exchange coupling at the AFM/FM interface can be used to manipulate the magnetic properties of both the FM and AFM layers. For the fully compensated (001) surface of NiO, an exchange interaction at the Fe/NiO interface caused the AFM domain structure of NiO to follow the FM domains of Fe [9]. Furthermore, for Py/IrMn and CoO/Fe it was shown that a nonuniform magnetization state in the FM layer can modify the spin structure of the AFM [10,11].
Among the most intensively studied AFM/FM interfaces are those which contain an antiferromagnetic NiO layer. Bulk NiO crystallizes in a cubic NaCl structure. Below its Néel temperature (T N) of 523 K, the magnetic moments of Ni2+ ions align ferromagnetically within the (111) planes, while the adjacent (111) planes are coupled antiferromagnetically. Recent demonstrations of long-distance spin transport [12], spin Hall magnetoresistance [13,14] and current-induced switching in NiO/Pt [15,16] revealed the potential of NiO as an active element in spintronic devices. Experimental research on FM/NiO structures does not unambiguously establish the relative orientation of the magnetic moments of the FM and AFM layers. Although a perpendicular orientation of the FM and AFM easy axes was predicted for the ideal FM/AFM interface [17], both collinear and non-collinear coupling have so far been reported for FM/NiO with [001] orientation of NiO [18-21]. A collinear coupling at the FM/AFM interface is often associated with an exchange bias effect, which manifests itself as a horizontal shift of the magnetic hysteresis loop and an enhanced coercivity of the FM layer [22]. More complicated magnetic structures can be formed in FM/AFM/FM trilayers, in which, for a metallic AFM spacer, exchange bias coupling can coexist with a long-range Ruderman-Kittel-Kasuya-Yosida (RKKY) coupling [3,23-27]. For comparable magnitudes of exchange bias and interlayer exchange coupling, a complex magnetization switching process was demonstrated in Fe/Mn/Co [23]. In trilayers with an insulating AFM (iAFM) interlayer, interaction between the FM layers via the conduction electrons of the spacer is excluded. Interlayer exchange coupling arising from the spin-dependent electron-tunneling process causes a monotonic decay of the indirect coupling strength with increasing thickness of the insulating spacer in the regime of ultrathin spacer layers [5,28]. Simultaneously, in FM/iAFM/FM structures, magnetostatic interactions, the spin structure of the iAFM layer, as well as the interfacial coupling between the FM and iAFM should be considered [29-33].
In this work we focused on the magnetic properties of a Co/NiO/Fe trilayer epitaxially grown on a W(110) substrate. As we showed previously for NiO/Fe/W(110) [34,35], the MA and the orientation of the NiO spins within the (111) plane can be tuned by the magnetic properties of the underlying Fe layer. In particular, the direction of the Fe spontaneous magnetization, which can be controlled by a thickness- and temperature-driven spin reorientation transition, was directly imprinted into the NiO spin structure. Here we proved that the easy-axis direction and the strength of the anisotropy field of the Fe layer are transferred through NiO to a distant Co film, which reveals the existence of considerable exchange coupling at both the NiO/Fe and Co/NiO interfaces. As a consequence, modulation of the magnetic state and anisotropy of the Co layer can be triggered by variation of the Fe thickness or the temperature.
Experimental
The samples were prepared in an ultra-high vacuum (UHV) chamber equipped with molecular beam epitaxy (MBE) evaporators. A bcc W(110) single crystal was used as the substrate. A standard cleaning procedure was performed to remove carbon impurities from the W surface [36]. Cleanliness of the W substrate surface was confirmed by low energy electron diffraction (LEED) (Fig. 1b). The Fe(110) layers were grown by MBE on W(110) at room temperature and annealed at 675 K for 15 min. A motorized shutter was moved in front of the sample throughout the Fe film growth. As a result, distinct 1 mm wide strips with varying Fe thicknesses (d Fe) ranging from 92 Å to 120 Å were formed on the sample. In the studied Fe thickness range, the spin reorientation transition (SRT) from [110] to [001] occurs as a function of increasing Fe thickness [37,38] or decreasing temperature. The LEED pattern of Fe after annealing confirmed the existence of an unreconstructed bcc Fe(110) surface (Fig. 1c). The same LEED pattern was visible on all of the regions with different Fe thicknesses. The 40 Å thick NiO layer was grown on top of Fe via reactive deposition of Ni in a molecular oxygen atmosphere at a partial pressure of 1 × 10−6 mbar and a substrate temperature of 300 K. The LEED pattern collected after the deposition of NiO shows sixfold symmetry, confirming that the NiO(111) surface structure was formed regardless of the Fe underlayer thickness (Fig. 1d). Similarly to previously reported results [35], we obtained the following relation between the crystallographic in-plane directions of Fe and NiO: Fe[001]‖NiO[011], Fe[110]‖NiO[211] (Fig. 1f). Growth of a 10 Å thick Co layer at room temperature followed the NiO deposition. Figure 1a shows a schematic drawing of the sample. The LEED pattern collected on the Co surface shows the six-fold symmetry expected from the (0001) surface orientation of hexagonal Co (Fig. 1e). To compare the magnetic properties of NiO with thicknesses of 40 Å and 8 Å, a similar sample, but with an 8 Å thick NiO layer, was prepared following the same methodology.
Magnetic properties of the Co/NiO/Fe trilayer were characterized by x-ray magnetic circular dichroism (XMCD) and x-ray magnetic linear dichroism (XMLD) [20,39]. X-ray absorption (XA) spectra were collected at the PIRX beamline [40] of the National Synchrotron Radiation Center SOLARIS [41]. The XA spectra were measured in the total-electron-yield (TEY) detection mode by measuring the sample drain current. As the TEY mode is surface sensitive, we probe only the few top-most nm of Fe in the XAS measurements. To probe the magnetic properties of NiO, XA spectra were collected using a linearly polarized x-ray beam with a photon energy corresponding to the Ni L 2 edge [42]. The measurements were performed in normal and grazing incidence geometry of the x-rays, with the linear polarization direction in the sample plane. The spin orientation of the Fe and Co sublayers was studied by collecting the XA spectra with the photon energy tuned to the Fe and Co L 2,3 absorption edges, using two circular polarizations with opposite helicity. To ensure sampling from a region of uniform Fe thickness, the incident radiation spot size was restricted to 200 µm for all of the XAS measurements. Visualization of the magnetic domain structure in the FM and AFM layers was performed using x-ray photoemission electron microscopy (XPEEM). Prior to the XPEEM measurements the sample was capped with 1 nm of Au to prevent oxidation of the Co layer. We proved that the capping layer does not influence the magnetic properties of the stack (Fig. S4, Supplemental Materials). XPEEM images were collected at the CIRCE beamline of the ALBA Synchrotron Light Facility [43] and the Nanospectroscopy beamline of the Elettra synchrotron [44]. In both setups, the x-rays were incident on the sample at a grazing angle of 16° from the surface plane. The differential XMCD-PEEM images of the Fe and Co layers were obtained at the respective L 3 absorption edges by taking the difference between images collected with the two opposite circular polarizations. The NiO XMLD-PEEM image was obtained by calculating the asymmetry from images taken at two absorption energies corresponding to the peaks within the Ni L 2 edge (see Supplemental Materials for details).
Results and discussion
Figure 2a presents the XA spectra collected at 80 K for photon energies scanned across the Fe L 2,3 edges with right- and left-handed circular polarizations (σ+ and σ−, respectively) (see Fig. S2(a) in Supplemental Materials for details of the measurements). The spectra, after subtraction of the background, were normalized to the highest intensity value of the average of the two spectra. Prior to the XAS measurements, an external magnetic field of 140 mT was applied along the Fe[110] in-plane direction; thus the following XA spectra were measured in the remanent magnetization state of the ferromagnets. For the trilayer with an Fe thickness of 96 Å we noted a strong polarization dependence of the spectra for the photon beam propagation direction along Fe[110] (Fig. 2a, upper spectra) and a clear XMCD signal (Fig. 2a, upper, dotted line). In contrast, for 112 Å-thick Fe there was no detectable XMCD signal for the same measurement geometry (Fig. 2a, lower, dotted line). This indicates that the magnetization M in thin Fe is aligned along the x-ray incidence vector k (the Fe[110] direction), while for thicker Fe layers M is parallel to the Fe[001] direction and perpendicular to k. The thickness-induced spin reorientation transition in the Fe layer was confirmed by element-specific measurements of hysteresis loops shown in the last part of this paper (see below). Ni L 2 spectra collected at normal (θ = 0°) and grazing (θ = 60°) incidence of the x-rays, from the sample regions with Fe thicknesses of 96 Å and 112 Å, are shown in Fig. 2b. The spectra were recorded for linear polarization of the x-ray beam with the projection of the electric field vector E parallel to the Fe[110] in-plane direction (see Fig. S1(b) in Supplemental Materials for details of the measurement geometry). Similarly to the XA spectra collected for circular polarization, the spectra were normalized to the highest intensity value of the average of the two spectra. The spectra reveal a twin-peak feature which is typical for NiO XAS. The L 2 ratio (RL 2), defined as the intensity ratio of the higher-energy peak (871.4 eV) to the lower-energy peak (870.3 eV), can be employed to probe the orientation of the magnetic moments in NiO [45]. In our study we noted RL 2 = 0.8 for d Fe = 96 Å, while RL 2 = 0.73 was registered for the Fe thickness of 112 Å. As shown previously [34,35], such a difference in RL 2 is provoked by a change of the orientation of the NiO spins from the Fe[110] (NiO[211]) to the Fe[001] (NiO[011]) direction due to the exchange coupling between the FM and AFM layers. At first glance one would expect a stronger dependence of the L 2 ratio on the angle θ for the NiO film grown on the thinner Fe layer. However, we observed the opposite behaviour, i.e. the XA spectrum collected for NiO grown on d Fe = 112 Å is much more sensitive to the change of incident angle than the spectrum recorded for NiO/Fe(d Fe = 96 Å). This result is a consequence of the fact that the XMLD asymmetry in NiO depends not only on the relative orientation of the electric field E and the AFM spins, but also on the orientation with respect to the crystallographic axes. This conclusion is consistent with previous reports [35,46]. As we did not note a polarization dependence of the spectra collected at the Ni L 2 edge for the two opposite circular helicities (Fig. S2, Supplemental Materials), we do not expect ferromagnetic Ni in the sample. To investigate whether the changes in the spin directions in NiO and Fe affect the orientation of the magnetic moments within the top Co layer in Co/NiO/Fe, we measured XA spectra at the L 2,3 absorption edges of Co (Fig. 2c). Similarly to the Fe spectra, we noted a noticeable XMCD signal on the part of the sample with thinner Fe (Fig. 2c, upper), whereas no XMCD was detected on the sample region with thicker Fe (Fig. 2c, lower).
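As an aside, the two dichroic quantities used throughout this section reduce to simple operations on the measured spectra. The sketch below is our illustration only (the function names are ours, and the synthetic Gaussian doublet merely stands in for real data; the 870.3 eV and 871.4 eV peak positions are taken from the text): it computes the XMCD as the normalized difference of the two circular-polarization spectra, and the L 2 ratio RL 2 as the quoted peak-intensity ratio.

```python
import numpy as np

def xmcd(sigma_plus, sigma_minus):
    # normalize to the peak of the average spectrum, as described in the text
    avg = 0.5 * (sigma_plus + sigma_minus)
    return (sigma_plus - sigma_minus) / avg.max()

def l2_ratio(energy, xas, e_low=870.3, e_high=871.4):
    # RL2 = I(higher-energy L2 peak) / I(lower-energy L2 peak)
    i_low = xas[np.argmin(np.abs(energy - e_low))]
    i_high = xas[np.argmin(np.abs(energy - e_high))]
    return i_high / i_low

# toy spectra standing in for measured data: a Gaussian doublet at the Ni L2 edge
energy = np.linspace(868.0, 874.0, 601)
def peak(e0, amp):
    return amp * np.exp(-((energy - e0) / 0.4) ** 2)

xas = peak(870.3, 1.0) + peak(871.4, 0.8)
sigma_plus, sigma_minus = 1.05 * xas, 0.95 * xas   # fake helicity contrast
print(f"RL2      = {l2_ratio(energy, xas):.2f}")   # ~0.8, cf. d Fe = 96 Å
print(f"max XMCD = {xmcd(sigma_plus, sigma_minus).max():.2f}")
```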
Transfer of magnetic properties from the Fe film through NiO to the top-most Co layer is also reflected in the domain structure of the sublayers. The right column in Fig. 2 shows room temperature XMCD-PEEM (Fig. 2d and f) and XMLD-PEEM (Fig. 2e) images collected at the boundary between the Fe thicknesses of 104 Å and 108 Å, with the photon beam direction k parallel to the Fe[110] axis. The XMCD and XMLD contrasts were obtained as a result of the digital image processing described in the Supplemental Materials. The characteristic zig-zag pattern indicates a 90° in-plane magnetization rotation [47-49] at the boundary between the d Fe = 104 Å and d Fe = 108 Å regions, which indicates that the critical thickness (d crit) of Fe at which the spin reorientation transition occurs is 104 Å < d crit < 108 Å (see Fig. S3 in Supplemental Materials). The antiferromagnetic domain structure of NiO was imaged using XMLD-PEEM at the Ni L 2 edge. The linear polarization of the incident x-ray beam was oriented in the plane of the sample along the Fe[110] (NiO[211]) direction. The domain pattern of the underlying Fe layer is directly reflected in the antiferromagnetic domains of the proximate NiO (Fig. 2e). Moreover, we noted the same zig-zag pattern in the XMCD-PEEM picture collected at the Co L 3 edge, which confirms the imprinting of the Fe domain structure through the whole stack.
Systematic XAS measurements of Co/NiO/Fe performed as a function of d Fe revealed that the critical thickness of Fe defines the change of spin orientation not only in the Fe layer but also in the NiO and Co components. Figure 3a and c show the evolution of the XMCD asymmetry determined from the XAS intensity for the two circular polarizations, measured at the Fe L 3 (Fig. 3a) and Co L 3 (Fig. 3c) absorption edges as a function of d Fe. The signals were normalized. A possible alternative explanation for the correlated behavior of the Co and Fe signals could be the existence of the "orange peel" effect [50] in our sample. A way to prove that the change of the XMCD signal of Co is a consequence of exchange coupling at the interfaces, and is not related to the magnetostatic coupling, is to heat the sample above T N. For T > T N, the antiferromagnetic order vanishes; thus, if the effect is related to the exchange coupling at the interfaces, the XMCD of Co should not be sensitive to the SRT in Fe. For the trilayer with a NiO thickness of 40 Å the relatively high T N > 400 K cannot be reached without intermixing in the multilayer structure or a reduction of the NiO layer. However, we proved that the changes in the magnetic properties of Co are induced by interfacial exchange coupling for a specially prepared sample with a thinner NiO sublayer, for which magnetic size effects reduce the Néel temperature to below 300 K. For such a Co/NiO(8 Å)/Fe sample, at 80 K, similarly to the results obtained for 40 Å of NiO, we note that the RL 2 (d Fe) dependence (Fig. 3e, blue) follows the changes in the XMCD of Fe (Fig. 3d, blue). As the NiO layer is magnetic at low temperatures, the rotation of the spins in Fe and NiO is accompanied by a variation of the spin direction in Co (Fig. 3f, blue). At room temperature, RL 2 is constant and remains insensitive to the SRT in Fe, proving that the NiO layer is in a paramagnetic state. In this case, no change of XMCD(d Fe) in Co is observed (Fig. 3f, red). Thus, the changes in the XMCD of Co are correlated with the magnetic properties of the NiO layer and do not originate from magnetostatic interactions between the ferromagnets.
As the strength of the "orange-peel" coupling decays with increasing interlayer thickness [50], a contribution from magnetostatic coupling in the Co/NiO(40 Å)/Fe sample can be excluded. At a temperature of 300 K (Fig. 3a and d, red) we noted a shift in d crit towards thicker Fe films in comparison with the measurements performed at 80 K (Fig. 3a and d, blue). In the Co/NiO(40 Å)/Fe sample for d Fe = 104 Å the magnetic moments of Fe, NiO and Co are parallel to the Fe[110] direction, in contrast to a temperature of 80 K at which the spins of the FM and AFM layers were aligned along Fe[001]. An enhancement of the critical thickness at elevated temperature is a consequence of the increase in the [110] in-plane MA of Fe(110) [51].
As the surface and volume contributions to the MA of Fe follow different temperature dependencies [51], for Fe layers with thicknesses close to d crit, which are characterized by a small MA, it is possible to induce the SRT also by changing the temperature. As predicted theoretically [52] and confirmed in experiments [34,53], the temperature-induced SRT reveals hysteresis. Thus, one of two possible orthogonal ferromagnetic states (with M||[110] and M||[001]) can be stabilized over a range of temperatures. Recently we demonstrated that the thermal hysteresis of the SRT in Fe is reproduced in the neighboring NiO layer in NiO/Fe/W(110) [34]. Consequently, depending on the history of the sample, it is possible to stabilize one of two orthogonal states in NiO at a given temperature. In the current study we show that, due to the strong exchange coupling at the top Co/NiO interface, the temperature-assisted hysteresis in Fe is transferred not only to the proximate NiO layer but to the Co cover layer as well. Figure 4 shows the temperature dependence of the XMCD and the L 2 ratio determined for the Co/NiO/Fe heterostructure with d Fe = 104 Å. The temperature evolution of the XMCD and RL 2 was obtained from XAS measurements performed during cooling-heating thermal cycling. Within the temperature range (200-240) K it is possible to stabilize two orthogonal magnetization states. Similarly to the results obtained for NiO/Fe [35], we noted that the thermal hysteresis of the SRT in Fe is mimicked by the neighboring NiO layer. A pronounced change in RL 2 is visible at T = 180 K and T = 260 K in the cooling (Fig. 4, blue stars) and heating branches (Fig. 4, blue hexagons), which proves the coupling between the Fe and NiO sublayers. Moreover, for Co we noticed a similar hysteresis of the XMCD signal, e.g. a sudden change in the XMCD signal was observed at the same temperatures at which the SRT occurred in both the Fe and NiO layers (Fig. 4, red). This indicates that the exchange coupling at the FM/AFM and AFM/FM interfaces enables the stabilization of two orthogonal magnetization states in all three sublayers within the (200-240) K temperature range, driven only by a change of the temperature. The results of the XAS measurements performed as a function of Fe thickness and, for a given d Fe, as a function of temperature revealed that the 90-degree SRT in Fe is transferred through the NiO layer to the top Co. In addition, element-specific XMCD and XMLD measurements collected as a function of the external magnetic field demonstrate that not only the direction of M but also the strength of the anisotropy field is transferred to the top Co layer.
Figure 5 presents the element-sensitive magnetic hysteresis loops measured utilizing XMCD for the ferromagnetic sublayers and XMLD for the antiferromagnetic NiO interlayer. The XMCD measurements were carried out in grazing geometry with the x-ray beam illuminating the sample at θ = 60°, where θ is the angle between the incident beam and the sample surface normal (as shown in Fig. S1(a), Supplemental Materials). During the measurements the in-plane component of the external magnetic field was applied along the Fe[110] direction. Note that although a non-zero out-of-plane component of the magnetic field exists for such a measurement geometry, significantly higher magnitudes of external fields would be necessary to rotate the magnetic moments of Fe out of the (110) sample plane. Thus, the out-of-plane component of the magnetic field has no effect on the Fe magnetization and can be omitted. On the part of the sample with d Fe = 96 Å (Fig. 5a, left), we obtained a square hysteresis loop at the Fe L 3 absorption edge, while for the Fe thickness of 112 Å a typical hard-axis loop with almost zero remanent magnetization and an anisotropy field of 10 mT was registered (Fig. 5a, right). This confirms that the magnetization of the Fe layer with a thickness of 96 Å is aligned along Fe[110], whereas M Fe of the layer with d Fe = 112 Å is parallel to the Fe[001] direction. As the XMLD is insensitive to a 180° reversal of the magnetic moments, we do not observe a significant dependence of the XMLD signal on the magnetic field for Co/NiO/Fe(96 Å) (Fig. 5b, left). On the contrary, for Co/NiO/Fe(112 Å) an in-plane 90-degree switching of the NiO spins is visible in the magnetic field evolution of the XMLD signal (Fig. 5b, right). This can be understood if we consider that the AFM spins follow the reversal of the magnetization of the Fe layer, for which the magnetic field drags M Fe towards the hard-axis direction [35]. Please note that, independently of the Fe thickness, the measured hysteresis loops are fully symmetric with respect to the zero-field axis, i.e. they do not exhibit an exchange bias field. This means that the antiferromagnetic NiO spins are rotatable and follow any reorientation of the Fe magnetization, in contrast to the frozen AFM spins [11] and the large exchange bias in the recently reported isostructural CoO(111)/Fe(110) system [54]. The shape of the normalized hysteresis loops registered at the Co L 3 edge (Fig. 5c) is identical to that of the loops collected at the Fe L 3 edge. In particular, for the hard-axis loops we noted the same saturation field for the Co and Fe layers (compare the black and red loops in the right column of Fig. 5). This shows that not only the direction of the MA but also the anisotropy field strength is transferred from the Fe layer through NiO to the Co film. The magnetic proximity effect in the Co/NiO/Fe trilayer is responsible for the creation of an artificial two-fold magnetic anisotropy in a hexagonal cobalt film with six-fold crystalline symmetry.
Conclusions
In summary, we showed that the exchange coupling at the NiO/Fe and Co/NiO interfaces, along with the well-defined and controllable magnetic anisotropy of the Fe layer, determines the magnetic properties of both the NiO and Co layers in the Co/NiO/Fe trilayer structure. XMCD and XMLD measurements performed for Co/NiO/Fe epitaxially grown on W(110) showed that the thickness-driven spin reorientation transition in Fe is transferred not only to the NiO layer but to the ferromagnetic Co layer as well. We proved that the magnetic anisotropy of the Co overlayer can be precisely controlled by tuning the properties of the bottom Fe layer. Specifically, reorientation processes in Co can be triggered by changing the Fe thickness or by variation of the temperature. The measurements of element-specific magnetic hysteresis loops revealed that, besides the MA, the magnetic anisotropy field is transferred from the Fe to the Co layer. This result shows that the Co film can be treated as a probe layer of the magnetic properties of the buried NiO and Fe layers, which is valuable for surface-sensitive techniques, e.g. spin-polarized scanning tunneling microscopy (SP-STM) or spin-polarized low-energy electron microscopy (SPLEEM). Use of a conducting Co probe layer that mimics the magnetic state of the buried films enables visualization of the spin structure in buried NiO layers with laboratory methods. Moreover, our results are important for the process of imprinting a spin structure between neighboring magnetic systems with different types of magnetic order. In particular, in the case of AFM-FM systems, previous studies concerning continuous films have reported the imprinting of the AFM domain structure onto the FM component [55-57]. Our results demonstrate a step further, as the spin structure of the bottom Fe layer is not only transferred across the AFM NiO layer to the top Co overlayer, but the response of the spins of all magnetic components of the system to an external magnetic field is also unified. Such a result can be exploited in grafting more complex spin textures, such as vortices [11] or skyrmions, from a ferromagnet to an antiferromagnet and further to another ferromagnet. Finally, we showed that such strong interaction between ferromagnetic layers, mediated by an antiferromagnetic spacer, can be turned on or off by antiferromagnetic size effects.
Figure 1. (a) Schematic drawing of the sample; (b-e), from the bottom: LEED patterns collected on the W(110) substrate (b), and after the deposition of Fe (c), NiO (d) and Co (e). For each pattern the electron energy at which the image was collected is given. (f) Schema of the hexagonal diffraction patterns of NiO and Co (grey dots). The relative orientations of the Fe and NiO in-plane directions within the Fe(110) and NiO(111) planes are marked by brown and blue arrows, respectively.
Figure 2. Left column: XA spectra collected at the Fe L 2,3 (a) and Co L 2,3 (c) absorption edges for Co(10 Å)/NiO(40 Å)/Fe(96 Å) (upper) and Co(10 Å)/NiO(40 Å)/Fe(112 Å) (lower). Spectra were registered at 80 K for left- and right-handed circular polarization (σ− and σ+, respectively). Spectra measured on the two sample regions are offset for clarity. Black dashed curves represent the calculated XMCD for the respective pairs of circular polarization spectra; the XMCD values are multiplied to improve visibility. (b) Ni L 2 XA spectra collected at 80 K on the thin and thick Fe parts of the sample at two grazing angles θ, for x-ray linear polarization with E || Fe[110] in-plane direction. Curves under the spectra present the calculated XMLD; values were multiplied for clarity. Right column: element-specific XPEEM images taken at the boundary between the Fe thicknesses of 104 Å and 108 Å: Fe L 3 and Co L 3 XMCD-PEEM images (d and f, respectively); (e) Ni L 2 XMLD-PEEM image acquired at the same boundary area with the x-ray linear polarization vector aligned along the Fe[110] direction. The field of view of the presented XPEEM images is 8 × 10 µm².
Figure 3. Fe thickness dependence of the normalized XMCD determined from XAS measurements at the Fe L 3 (a, d) and Co L 3 (c, f) absorption edges at 80 K (blue) and 300 K (red); (b, e) the dependence of the NiO RL 2 on the Fe thickness at 80 K (blue) and 300 K (red). The left and right columns show results obtained for Co/NiO(40 Å)/Fe and Co/NiO(8 Å)/Fe, respectively.
Figure 4. Temperature dependence of the XMCD (black and red for Fe and Co, respectively) and of the Ni L 2 ratio (blue) in the Co(10 Å)/NiO(40 Å)/Fe(104 Å) trilayer. The temperature-controlled bistability of the magnetic state is revealed by thermal hysteresis for all layers in the stack.
Figure 5. Element-specific XMCD (a, c) and XMLD (expressed by the L 2 ratio) (b) magnetic hysteresis loops of Fe, Co and NiO, respectively. The left and right columns represent measurements performed for Co/NiO/Fe(96 Å) and Co/NiO/Fe(112 Å), respectively. During the measurements an external magnetic field was applied parallel to the Fe[110] direction.
Quantum Monte Carlo calculations of neutron matter with non-local chiral interactions
We present fully non-perturbative quantum Monte Carlo calculations with non-local chiral effective field theory (EFT) interactions for the ground state properties of neutron matter. The equation of state, the nucleon chemical potentials and the momentum distribution in pure neutron matter up to one and a half times the nuclear saturation density are computed with a newly optimized chiral EFT interaction at next-to-next-to-leading order. This work opens the way to systematic order by order benchmarking of chiral EFT interactions, and ab initio prediction of nuclear properties while respecting the symmetries of quantum chromodynamics.
Introduction.-The accurate prediction of the dynamics of a supernova explosion and of the structural properties of compact stars is tightly related to the correct understanding of the properties of dense matter, and in particular of its equation of state (EoS). The conditions of temperature and density in the core of a neutron star are such that a perturbative approach based on the fundamental theory of strong interactions, i.e. quantum chromodynamics (QCD), is not possible. Non-perturbative calculations could be carried out in the framework of lattice QCD. However, at present, fully non-perturbative lattice QCD calculations of many-nucleon systems are unfeasible, although the extraction of nuclear forces from lattice QCD is a topic of intensive current research, and impressive progress has been made in recent times [1].
An alternate bridge from QCD to low energy nuclear physics is provided by the use of nucleons as basic nonrelativistic degrees of freedom, with their mutual interactions determined by means of chiral effective field theory (EFT). Chiral EFT gives a systematic expansion for the nuclear forces at low energies based on the symmetries and the symmetry breakings of QCD [2]. Chiral interactions have already been employed in calculations of nuclear structure and reactions of light and medium-mass nuclei [3], and nucleonic matter [4-7].
For any given Hamiltonian, quantum Monte Carlo (QMC) methods have proven to be most accurate for computing ground state properties [8]. Some of the most accurate calculations for light nuclei and neutron matter were indeed performed using continuum diffusion-based QMC methods [9], in conjunction with the semi-phenomenological local Argonne-Urbana family of nuclear forces [10].
In general, the interactions obtained from chiral EFT are non-local, i.e., explicitly dependent on the relative momenta of the particles. It is difficult to incorporate non-local interactions in standard continuum QMC methods. Recently, an interesting approach was proposed in Ref. 6, where all the non-localities up to next-to-next-to-leading order (NNLO) were traded for additional spin-isospin operator dependence. This local chiral NNLO interaction was then included in a conventional auxiliary-field diffusion Monte Carlo (AFDMC) calculation. In this scheme, the residual non-localities would have to be treated perturbatively (see also Ref. 11).
In this letter, we introduce an alternative and complementary approach, viz. performing fully non-perturbative QMC calculations with the full non-local chiral interactions with the help of the newly developed configuration interaction Monte Carlo (CIMC) method introduced in Refs. [12,13]. The CIMC method is similar to continuum QMC, in that the ground state wave function is obtained by applying the power method stochastically with the help of a random walk in the space of relevant configurations. However, in contrast to continuum QMC, in CIMC the random walk is performed in Fock space, i.e., in the occupation number basis. As a result, non-local interactions can be easily incorporated in CIMC.
In this letter, we report extensive calculations with a non-local chiral interaction in which a proper QMC algorithm is used. We use the recently developed chiral NNLO opt interaction [14]. The scattering phase shifts obtained from this interaction fit the experimental database [15] at χ2 ∼ 1 for laboratory energies less than 125 MeV. However, the contribution from the three-nucleon forces is smaller with this parametrization than with the previous ones.
We calculate the equation of state (EoS) and the nucleon chemical potentials in pure neutron matter up to one and a half times the nuclear saturation density. In addition, we also present unbiased QMC estimates of the momentum distribution.
Method.-In CIMC, the ground state (GS) wave function, Ψ_GS, is filtered out by repeatedly applying the propagator P = exp[−τ(H − E_T)] to an initial state, Ψ_I, with a non-zero overlap with |Ψ_GS⟩. Here, H is the Hamiltonian, E_T is an energy shift used to keep the norm of the wave function approximately constant, and τ is a finite step in 'imaginary' time τ = it. This process is carried out stochastically in a many-body Hilbert space that is spanned by all Slater determinants that can be constructed from a finite set of single particle (sp) basis states. In this work, we use the eigenstates of momentum and of the z components of spin and isospin as the sp basis. The calculations are performed in a box of size L^3 = A/ρ containing A nucleons, with periodic boundary conditions. The size of the box L is fixed by the density ρ of the system. The finite size of the box requires the sp states to be restricted to a lattice in momentum space with lattice constant l = 2π/L. A finite sp basis is chosen by imposing a "basis cutoff" k_max, so that only those sp states with k^2 ≤ k_max^2 are included. A sequence of calculations with increasingly large values of k_max is performed till convergence is reached.
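For concreteness, here is a minimal sketch (ours, not the authors' code) of the single-particle basis construction just described: momentum states on a lattice of spacing 2π/L, truncated at k^2 ≤ k_max^2, with two spin projections per momentum state (pure neutron matter, so the isospin is fixed). The particle number and density match the paper's setup; the choice k_max = 3l is an arbitrary illustration.

```python
# Sketch of the momentum-lattice single-particle basis used in CIMC.
import itertools
import numpy as np

def sp_basis(A, rho, kmax_in_units_of_l, nmax=10):
    L = (A / rho) ** (1.0 / 3.0)      # box size from the density, L^3 = A/rho
    l = 2.0 * np.pi / L               # momentum-lattice constant
    kmax2 = (kmax_in_units_of_l * l) ** 2
    states = []
    for nx, ny, nz in itertools.product(range(-nmax, nmax + 1), repeat=3):
        k = l * np.array([nx, ny, nz])
        if k @ k <= kmax2:            # basis cutoff: keep only k^2 <= k_max^2
            for sz in (+1, -1):       # spin up / spin down
                states.append((nx, ny, nz, sz))
    return L, l, states

L, l, states = sp_basis(A=66, rho=0.08, kmax_in_units_of_l=3)
print(f"L = {L:.2f} fm, l = {l:.3f} fm^-1, basis size = {len(states)}")
```

In the actual method the convergence of the energy would be monitored while increasing this cutoff, as described in the text.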
Sampling of new states can be performed under the condition that the matrix elements of the propagator P are always positive semi-definite. For fermions interacting with a realistic potential this condition is never fulfilled. This gives rise to the so-called sign problem, which is usually circumvented by using a guiding wave function to constrain the random walk to a subsector of the full many-body Hilbert space in which the sampling procedure is well defined. This restriction of the random walk introduces an approximation which is similar to the fixed-node/fixed-phase approximation commonly used in continuum QMC. As explained in Refs. 12 and 13, we use coupled cluster doubles (CCD) type wave functions as the guiding wave functions. As a result, the CIMC method provides an interesting synthesis of QMC methods and CC theory.
We were able to extend the CIMC method to the case of complex Hermitian Hamiltonians in a way that preserves all the favorable properties of CIMC, viz.: (i) the ground state energy estimate is a rigorous upper bound on the true ground state energy; (ii) this upper bound is tighter than that provided by the guiding wave function; (iii) there is no bias due to the finite (imaginary) time step, τ. Note that none of the above properties (i-iii) hold for the nuclear GFMC or AFDMC methods. Details of this generalization will be provided elsewhere.
Equation of state and chemical potentials.-In Fig. 1, we show our results for the EoS (energy per particle vs density) of pure neutron matter. Energies refer to a box containing 66 neutrons with periodic boundary conditions. For periodic boundary conditions, finite size (shell) effects are minimal at the shell closures at 14 and 66 (see, e.g., Ref. 16). For comparison, we have also included the variational APR EoSs (two-body AV18 and two- plus three-body AV18+UIX interactions) [17], the AFDMC EoS (two-body AV8′ interaction) [18], and the NL3 EoS [19].
As mentioned earlier, in CIMC successive calculations with larger sp basis sizes need to be performed until convergence. In the inset of Fig. 1 we plot the energy per particle as a function of k_max at ρ = 0.08 fm⁻³ for 14 and 66 particles. We deem the CIMC calculations to have converged when the difference in the energy estimate between successive values of k_max is less than the statistical error (typically ∼ 10−25 keV at convergence). For all the densities considered in this work we observe a smooth convergence of the CIMC calculations as a function of k_max.

[Figure 1 caption, partially recovered: ... AFDMC EoS [18]; blue dashed line, APR EoS with the 2b AV18 [17]; blue solid line, APR EoS with the 2b AV18 + 3b UIX [17]; black dash-dotted line, NL3 EoS [19]. The inset shows the convergence of our energies as a function of k_max at ρ = 0.08 fm⁻³ for 14 (black squares) and 66 (blue circles) particles. The dotted lines are a guide to the eye.]

The nucleon chemical potentials in dense matter play a crucial role in determining the proton fraction at beta equilibrium, and consequently the equation of state and the cooling mechanism in neutron stars. In Fig. 2, we show the proton and the neutron chemical potentials in pure neutron matter. We calculate the neutron chemical potential, μ_n = E/N + ρ ∂(E/N)/∂ρ, by numerical differentiation of the EoS. The proton chemical potential (μ_p) is calculated from the binding energy of one extra proton in pure neutron matter. The calculations for μ_p were performed for 14 neutrons + 1 proton; however, we also checked in a few cases that the results for the 66 neutrons + 1 proton case are within 2%.
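As a minimal illustration of this numerical differentiation, the sketch below evaluates μ_n = E/N + ρ ∂(E/N)/∂ρ on a tabulated EoS; the (ρ, E/N) values are placeholders, not the results of this work.

```python
# Neutron chemical potential from a tabulated EoS by finite differences.
import numpy as np

rho = np.array([0.04, 0.06, 0.08, 0.10, 0.12, 0.16])    # fm^-3 (assumed grid)
e_per_n = np.array([6.5, 8.2, 10.1, 12.2, 14.5, 19.8])  # E/N in MeV (placeholder)

de_drho = np.gradient(e_per_n, rho)   # d(E/N)/d(rho); non-uniform grid is fine
mu_n = e_per_n + rho * de_drho        # mu_n = E/N + rho d(E/N)/d(rho), in MeV

for r, mu in zip(rho, mu_n):
    print(f"rho = {r:.2f} fm^-3  ->  mu_n = {mu:.1f} MeV")
```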
Most computer simulations of supernovae use phenomenological EoSs based typically on the liquid drop model, the most popular being the Lattimer-Swesty EoS [20], or on relativistic mean field theory [21,22]. As a prototype of such an EoS we have included the results from the NL3 EoS [21] in Figs. 1 and 2.
For μ_p all the calculations are reasonably consistent with each other. For the EoS and μ_n, however, only the calculations based on microscopic Hamiltonians fit to the scattering phase shifts are consistent (within ∼ 10%) at low densities (ρ ≲ 0.1 fm⁻³). Other many-body calculations based on microscopic Hamiltonians [4-7] are also consistent with the ones shown in the figure in this density range. The NL3 model, on the other hand, has a completely different behavior for low density neutron matter. Such a failure of most of the currently popular phenomenological EoSs to meet the constraints set by microscopic calculations was also pointed out recently, in the context of chiral EFT interactions, in Ref. 7.

Momentum distribution.-In interacting fermionic systems the momentum distribution, n(k), is modified from the ideal Fermi-Dirac distribution due to quantum correlations. In particular, the quasiparticle renormalization factor Z = n(k_F⁻) − n(k_F⁺) plays a fundamental role in Fermi liquid theory in quantifying the impact of the in-medium effective interactions [23]. In homogeneous systems, the Fourier transform of n(k) is the reduced off-diagonal single particle density matrix, which is the primary object in density-matrix functional theory [24].
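A small sketch of how Z can be read off a tabulated momentum distribution follows; the momentum grid and n(k) values are fabricated for illustration only.

```python
# Quasiparticle renormalization factor Z = n(k_F^-) - n(k_F^+),
# estimated from the points bracketing the Fermi surface.
import numpy as np

k_over_kF = np.array([0.2, 0.5, 0.8, 0.95, 1.05, 1.2, 1.5, 2.0])
n_k       = np.array([0.97, 0.96, 0.94, 0.92, 0.05, 0.03, 0.01, 0.0])

below = n_k[k_over_kF < 1.0][-1]   # last point below the Fermi surface
above = n_k[k_over_kF > 1.0][0]    # first point above it
Z = below - above
print(f"Z ~ {Z:.2f}")              # size of the discontinuity at k_F
```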
In continuum QMC methods, computing an estimate of the momentum distribution independent of the importance function (a.k.a. a pure estimator) is notoriously difficult, due to the fact that n(k) is an off-diagonal operator in real space. In CIMC, on the other hand, n(k) is a diagonal operator. We adapt the method proposed in Ref. 25 to our CIMC method to calculate the momentum distribution. In Fig. 3 we show n(k) in pure neutron matter for three different densities.
Our estimates for the occupation number at zero momentum n(0) and the renormalization factor Z are given in Table I. These results can be compared, e.g., with those in Ref. 26; the weak depletion seen in Table I is due to the softness of the NNLOopt interaction.
In the inset we compare the pure and mixed estimates of the momentum distribution at density ρ = 0.08 fm⁻³. The mixed estimator contains an additional bias. For operators Ô other than the Hamiltonian, the mixed estimator is ⟨Ψ_T|Ô|Ψ_GS⟩, where Ψ_T is the importance function used and Ψ_GS is the ground state projected in the constrained Hilbert space. The corresponding pure estimator is instead given by ⟨Ψ_GS|Ô|Ψ_GS⟩. We see that the biased mixed estimator for n(k) overestimates both the depletion at k → k_F⁻ and the growth at k → k_F⁺ by more than 50%.
Uncertainties of the calculations.-In order to make reliable ab initio predictions, it is very important to have an estimate of the theoretical uncertainty coming from all possible sources. Given the Hamiltonian and the number of particles, the uncertainties in our calculations come from two sources: (a) the inherent uncertainty in the chiral EFT Hamiltonian due to the neglect of higher orders and to the ultraviolet cutoff dependence, and (b) the uncertainty in the many-body method. In this letter we do not address the former, while noting that a significant amount of effort has already been devoted to this question by other authors. The latter, in our QMC calculations (assuming convergence in k_max), has two sources: the statistical error and the bias introduced by the fixed-phase approximation. We see in Fig. 4 that the statistical error is 1−2% of the correlation energy (measured with respect to the Hartree-Fock energy). Note that this uncertainty can be systematically reduced by simply running the simulations for a longer time. For comparison, we also show the (absolute) difference between our QMC energy and the energies obtained from CC theory (with the CCD wave function) [5] and from 2nd order perturbation theory (PT-2), all as fractions of the QMC correlation energy. For this particular interaction and the densities considered, the CCD energy estimate is, in fact, quite close to the QMC estimate, differing at most by about 3% at ρ = 0.04 fm⁻³; in PT-2, the correlation energies are overestimated by 24−36% compared to our QMC results.
The uncertainty due to the fixed-phase approximation is very difficult to assess in continuum QMC calculations because, in general, a systematic scheme to improve the guiding wave function is not available. Fortunately, in our CIMC method the energies are rigorous upper bounds, and CC theory provides a systematic scheme for constructing more general guiding wave functions. We exploit this hierarchy to provide a perturbative estimate of the leading order contribution to the bias due to the fixed-phase approximation, viz. that due to the exclusion of the irreducible triples in the guiding wave function. The difference between the CCD(T), i.e., CCD with perturbative triples, and the CCD energies [5] provides such an estimate. However, just as the correlation energy is overestimated in PT-2, we expect CCD(T), which is a similar perturbative estimate, to overestimate the residual correlation energy. Therefore, we obtain our improved estimate by multiplying this quantity by the ratio of our QMC correlation energy and the PT-2 correlation energy. Note that the correction in energy from CCD(T) is always negative, which is consistent if one considers this to be the estimated correction to our QMC energy estimate, which is a variational upper bound. This is not the case for the energies obtained from standard CC theory. We plot the above estimate, again as a fraction of the QMC correlation energy, in Fig. 4. This estimate of about 5−6% of the correlation energy (∼ 1% of the total energy) probably still overestimates the theoretical uncertainty, since in the homogeneous electron gas the CIMC method (with a CCD-type guiding wave function) was found to be accurate to within 2−3% in the moderately interacting regime [13]. In any case, the overall uncertainty in our many-body method is certainly much less than the inherent uncertainty in the Hamiltonian, and in future work we plan to reduce it further by including the irreducible triples in our guiding wave function.
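The rescaling just described amounts to a one-line computation; the sketch below spells it out with placeholder energies (all numbers are assumptions, not values from this work).

```python
# Rescaled perturbative-triples estimate of the fixed-phase bias:
# (E_CCD(T) - E_CCD) scaled by the ratio of QMC and PT-2 correlation
# energies, expressed as a fraction of the QMC correlation energy.
E_HF   = 12.00   # Hartree-Fock energy per particle (MeV, placeholder)
E_QMC  = 10.80   # CIMC energy (placeholder)
E_PT2  = 10.40   # 2nd-order perturbation theory (placeholder)
E_CCD  = 10.83   # coupled-cluster doubles (placeholder)
E_CCDT = 10.77   # CCD with perturbative triples (placeholder)

corr_QMC = E_QMC - E_HF                 # correlation energies
corr_PT2 = E_PT2 - E_HF
bias = (E_CCDT - E_CCD) * (corr_QMC / corr_PT2)   # negative, as in the text
print(f"estimated bias: {bias:.3f} MeV "
      f"({100 * bias / corr_QMC:.1f}% of the QMC correlation energy)")
```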
Conclusion.-In conclusion, we reported the first quantum Monte Carlo calculations with non-local chiral interactions. Unsurprisingly, we find that the equation of state of neutron matter at low densities is reasonably model independent as long as the interaction used is fit to the low energy scattering phase shifts. We also provided unbiased estimates of the momentum distribution and showed that the commonly used mixed estimator grossly overestimates the depletion at the Fermi energy. The quantum Monte Carlo method described in this paper is quite general and can be used for nuclear matter and finite nuclei, and with three-body forces. Work in these directions is in progress.
We are grateful to G. Hagen and T. Papenbrock for sharing their CCD code and benchmarks with us, and to A. Ekström, G. Hagen
Identification of Bile Salt Hydrolase and Bile Salt Resistance in a Probiotic Bacterium Lactobacillus gasseri JCM1131T
Lactobacillus gasseri is one of the most likely probiotic candidates among many Lactobacillus species. Although bile salt resistance has been defined as an important criterion for selection of probiotic candidates, since it allows probiotic bacteria to survive in the gut, both this capability and its related enzyme, bile salt hydrolase (BSH), in L. gasseri are still largely unknown. Here, we report that the well-known probiotic bacterium L. gasseri JCM1131T possesses BSH activity and bile salt resistance capability. Indeed, this strain showed apparent BSH activity in the plate assay and highly tolerated the primary bile salts and even a taurine-conjugated secondary bile salt. We further isolated a putative BSH enzyme (LagBSH) from strain JCM1131T and characterized its enzymatic function. The purified LagBSH protein exhibited quite high deconjugation activity for taurocholic acid and taurochenodeoxycholic acid. The lagBSH gene was constitutively expressed in strain JCM1131T, suggesting that LagBSH likely contributes to bile salt resistance of the strain and may be associated with the survival capability of strain JCM1131T within the human intestine through bile detoxification. Thus, this study is the first to demonstrate bile salt resistance and the activity of its responsible enzyme (BSH) in strain JCM1131T, which further supports the importance of this typical lactic acid bacterium as a probiotic.
Introduction
Lactobacillus species have been considered one of the major targets of probiotic research. Several Lactobacillus species provide positive impacts on human health; symptomatic improvements by probiotics have been reported in various hard-to-treat diseases, e.g., allergy [1], diarrhea [2], Helicobacter pylori infection [3], and irritable bowel syndrome [4]. These probiotic effects are generally strain-specific and differ from strain to strain even among Lactobacillus strains of the same species [5-9]. Among the several criteria for selecting candidate probiotic strains of Lactobacillus spp., bile salt resistance is one of the most important, since bile salts are well known as strong surfactants and bile exposure in the gastrointestinal tract is intensely toxic, challenging probiotic Lactobacillus species to survive and retain activity in the human intestine [10,11].
Bile salt resistance is mainly provided by bile salt hydrolase (BSH, EC 3.5.1.24), an enzyme that deconjugates glycine- and/or taurine-conjugated bile salts [12], though other resistance mechanisms (i.e., efflux pumps, stress response proteins, and cell wall modifications) have been reported [13]. Genes encoding BSH have been found in various Lactobacillus species [14], and the number of bsh gene orthologs varies with species and strain [11,14-17]. BSH enzymes play a crucial role in bile detoxification and thereby improve the colonization and survival of host probiotic bacteria in the human gastrointestinal tract [18]. In addition, BSH enzymes are known to be involved in the reduction of blood cholesterol levels, the regulation of lipid absorption, glucose metabolism, and
The heterologous gene expression and protein purification experiments were performed according to our previous study with slight modifications [24]. In brief, the constructed plasmid was transformed into E. coli BL21 (DE3) Champion™ 21 competent cells, and the E. coli strain was cultured in LB broth at 37 °C until OD600 reached 0.4-0.6. Isopropyl-β-D-thiogalactopyranoside (IPTG, Nacalai Tesque, Kyoto, Japan) was added to the culture medium at a final concentration of 100 µM. After the addition of IPTG, the E. coli cells were incubated at 20 °C overnight with shaking. The cells were harvested by centrifugation at 5800× g for 10 min, suspended in buffer (20 mM Tris, 150 mM NaCl, 5% glycerol, 5 mM imidazole, pH 7.5), and disrupted for 5 min by sonication using an ultrasonic disintegrator (Sonicator Branson Sonifier 250 (Branson, Danbury, CT, USA); output control: 5, duty cycle: 50) in an ice-water bath. The cell debris was removed by centrifugation and the resulting supernatant was mixed with Ni-NTA Agarose HP (FUJIFILM Wako Pure Chemical Corporation). The His6-tagged recombinant protein was washed and eluted according to the previous study [24]. The eluted fraction was further dialyzed against buffer (20 mM Tris, 150 mM NaCl, 5% glycerol) using a semipermeable membrane (Spectra/Por 3 membrane, MWCO: 3500, Repligen, Waltham, MA, USA) and concentrated using Amicon Ultra centrifugal filter devices (30,000 MWCO, Millipore, Billerica, MA, USA). The purified protein was treated with sample buffer (Bio-Rad, Hercules, CA, USA), heat-denatured at 95 °C for 5 min, and analyzed by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) using a 12% Mini-PROTEAN TGX precast polyacrylamide gel (Bio-Rad) [24]. The gel was stained with QC Colloidal Coomassie Stain (Bio-Rad) with gentle agitation.
Bile Salt Hydrolase Activity
The bile salt hydrolyzing activity of purified LagBSH was determined as described previously [25]. In brief, purified protein was mixed with 0.24 mg/100 µL of conjugated bile salts (glycocholic acid (GCA, Sigma-Aldrich, St. Louis, MO, USA), glycodeoxycholic acid (GDCA, Sigma-Aldrich), taurocholic acid (TCA, Nacalai Tesque), taurochenodeoxycholic acid (TCDCA, Sigma-Aldrich), and taurodeoxycholic acid (TDCA, Nacalai Tesque)) and incubated at 37 °C. The reaction was stopped by adding 15% trichloroacetic acid (FUJIFILM Wako Pure Chemical Corporation) and the resulting solution was centrifuged at 10,000× g for 15 min at 20 °C. The supernatant was then mixed with 0.3 M borate buffer containing 1% SDS (pH 9.5) and 0.3% 2,4,6-trinitrobenzenesulfonic acid solution (Tokyo Kasei Kogyo Co., Ltd., Tokyo, Japan). The resulting mixture was incubated for 30 min at room temperature in the dark, and then 0.6 mM HCl was added to stop the reaction. The absorbance at 416 nm was measured using a SPARK 10M multimode microplate reader (TECAN, Männedorf, Switzerland). The assays were performed in eight replicates. Student's t-test was used to assess statistically significant differences (α = 0.05) using the GraphPad Prism version 8.0 software program (GraphPad Software, San Diego, CA, USA). As a negative control, a bile salt solution was reacted with buffer instead of purified protein.
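For concreteness, a hedged sketch of how such absorbance readings might be converted to released taurine and tested for significance is shown below; the standard-curve parameters and A416 values are invented for illustration and do not reproduce the study's data.

```python
# Convert A416 readings (eight replicates) to released taurine via an
# assumed linear standard curve, then compare to the buffer-only control
# with Student's t-test (alpha = 0.05), mirroring the analysis described.
import numpy as np
from scipy import stats

slope, intercept = 2.1, 0.05   # assumed standard-curve fit: A416 per mM taurine

a416_sample  = np.array([0.91, 0.88, 0.93, 0.90, 0.89, 0.92, 0.94, 0.90])
a416_control = np.array([0.07, 0.06, 0.08, 0.07, 0.06, 0.07, 0.08, 0.07])

taurine_sample  = (a416_sample - intercept) / slope    # mM released
taurine_control = (a416_control - intercept) / slope

t, p = stats.ttest_ind(taurine_sample, taurine_control)
print(f"released taurine: {taurine_sample.mean():.2f} mM, p = {p:.2g}")
```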
Biochemical Characterization
The optimum pH and temperature of LagBSH were determined according to the methods described in [20] with slight modifications, as follows. The purified LagBSH protein was mixed with taurocholic acid (TCA) over selected temperature (10-90 °C, in intervals of 10 °C) and pH (pH 3.0-10.0, in intervals of 1.0) ranges, using an appropriate buffer for each pH range. After incubation for 6 h, the released taurine was detected as described above. All experiments were performed in eight replicates.
Bile Salt Tolerance Test
The bile salt tolerance ability of Lactobacillus gasseri JCM1131T was estimated and calculated from its survival rates according to the previous study [26]. In brief, a full-grown culture of strain JCM1131T was mixed with GCA, GDCA, TCA, and TDCA at final concentrations of 0.05%. Cells were anaerobically incubated at 37 °C for 6 h, and the optical densities (OD600) were measured every hour using an Ultrospec 500 Pro visible spectrophotometer (GE Healthcare Life Sciences, Buckinghamshire, UK). As a negative control, strain JCM1131T was incubated in GAM medium without bile salt. The assays were performed in triplicate.
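A minimal sketch of this survival-rate calculation, with invented OD600 readings, is shown below.

```python
# Survival rate per time point: OD600 in a bile salt condition as a
# percentage of the no-bile control (control defined as 100%).
od600 = {                      # hourly OD600 over 6 h (placeholder data)
    "control": [0.50, 0.62, 0.75, 0.88, 1.00, 1.10, 1.18],
    "TCA":     [0.50, 0.60, 0.72, 0.83, 0.93, 1.01, 1.08],
    "GDCA":    [0.50, 0.52, 0.54, 0.56, 0.58, 0.60, 0.62],
}

for salt in ("TCA", "GDCA"):
    rates = [100.0 * od / ctrl
             for od, ctrl in zip(od600[salt], od600["control"])]
    print(salt, [f"{r:.0f}%" for r in rates])
```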
Minimum inhibitory concentrations (MICs) were determined as the lowest concentration of bile salts preventing visible growth of L. gasseri JCM1131T on MRS agar. Strain JCM1131T was cultivated in MRS broth and inoculated on MRS agar plates with a selected bile salt. Plates were anaerobically incubated at 37 °C for 5 days. The tested bile salts were GCA, GDCA, TCA, and TDCA at final concentrations of 0.01%, 0.05%, 0.1%, 0.25%, and 0.5%. All experiments were carried out in triplicate.
Transcriptional Analysis
Reverse transcription polymerase chain reaction (RT-PCR) analyses of the lagBSH gene were performed as follows. Strain JCM1131T was cultured in MRS broth with or without TCA and TDCA at a final concentration of 0.05%. Total RNA samples were isolated using the RNeasy Mini Kit (Qiagen, Germantown, MD, USA), and the resulting RNA samples were treated with TURBO™ DNase (Thermo Fisher Scientific, Waltham, MA, USA) to remove contaminating genomic DNA. The presence of residual chromosomal DNA was checked by PCR analysis with the 16S rRNA gene universal PCR primers 530F and 907R using each RNA sample as template. Reverse transcription reactions were performed using SuperScript IV Reverse Transcriptase (Thermo Fisher Scientific) in a 25 µL reaction volume according to the manufacturer's instructions. The synthesized cDNA samples were used as PCR templates with the following two PCR primer sets: lagBSH-Aset (5'-TCACACCACGCAACTATCCTC-3' and 5'-GTTGCCAAGGTTAGTAAGATGCC-3', amplicon size: 467 bp) and lagBSH-Bset (5'-TTAGCTTCTTACGAAATTATGC-3' and 5'-GAATGCTATCACCTGGTAAAC-3', amplicon size: 376 bp). The PCR products were analyzed by agarose gel electrophoresis in 2.0% agarose and stained with GelRed (Fujifilm Wako Pure Chemical Corporation).
Identification of BSH Activity and Bile Salt Resistance of Lactobacillus gasseri JCM1131 T
In the present study, we first investigated whether L. gasseri JCM1131T shows BSH activity using the standard plate assay method. We observed a visible halo surrounding colonies and white precipitates within colonies when strain JCM1131T was cultured on an MRS agar plate supplemented with taurodeoxycholic acid (TDCA), one of the major conjugated bile salts in the human gastrointestinal tract (Figure 1). These characteristics (i.e., halo and white precipitates) are well-known indicators of BSH activity [19,30], clearly suggesting that strain JCM1131T exhibits BSH activity, even though a previous study reported that L. gasseri ATCC33323T (=JCM1131T) showed no significant BSH activity [31]. Importantly, Allain et al. demonstrated that L. gasseri strain CNCM I-4884, which has strong BSH activity, exhibited significant antiparasitic ability antagonizing growth of the most common waterborne parasite (Giardia) [32], and further revealed that the antiparasitic effects of Lactobacillus spp. correlated well with the expression of BSH activities [32]. Based on this, we expect that L. gasseri JCM1131T with BSH activity may also exhibit antiparasitic activity, like strain CNCM I-4884, though future studies are needed to clarify this point.
We further determined the survivability of L. gasseri JCM1131T against four different conjugated bile salts (GCA, GDCA, TCA, and TDCA) at a final concentration of 0.05%. As shown in Figure 2, strain JCM1131T showed high survivability toward the primary bile salts (TCA and GCA), and its survival rates against TCA and GCA remained above 90% after exposure to the bile salts for 6 h. This strain exhibited moderate and low survivability toward TDCA and GDCA (secondary bile salts), with survival rates above 70% and below 60%, respectively (Figure 2).
Figure 2. Bile salt tolerance activity in Lactobacillus gasseri JCM1131T. A full-grown culture of strain JCM1131T was mixed with GCA, GDCA, TCA, and TDCA at final concentrations of 0.05% and incubated anaerobically at 37 °C. The optical density (OD600) was measured every hour and survival rates were calculated as described previously [26]. The survival rate of the control (without bile salt) was defined as 100%. Results indicate mean ± SD obtained in triplicate experiments.
We further investigated the bile salt tolerance capacity of L. gasseri JCM1131T by determining the minimum inhibitory concentrations (MICs). As shown in Table 1, this strain showed a low MIC value (0.05%) for GDCA, indicating that GDCA is toxic to strain JCM1131T. De Smet et al. suggested that the high toxicity of GDCA is caused by its weak acid properties (TDCA, by contrast, is a strong acid) [33]. They further hypothesized that the protonated form of bile salts exhibits toxicity by importing protons into the cell [33]. This hypothesis seems reasonable, since weak acids are much more readily protonated than strong acids. However, strain JCM1131T displayed higher resistance (MICs > 0.5%) to a secondary bile salt (TDCA) as well as to the primary bile salts (TCA and GCA) (Table 1), despite the fact that secondary bile salts are known to be more toxic than primary bile salts [34]. Since the average bile concentration in the human intestine has been estimated to be 0.3% w/v [35], our findings suggest that strain JCM1131T has bile salt tolerance toward TCA, GCA, and TDCA. Previous studies reported that other strains of L. gasseri (BGHO89, 4M13, and FR4) showed high bile salt resistance (toward more than 0.3% bile salts) [36-38], suggesting that gut-derived L. gasseri strains generally have the bile salt resistance needed to survive and colonize the mammalian digestive tract. Interestingly, these L. gasseri strains (BGHO89, 4M13, and FR4) with high bile salt resistance further exhibited probiotic functions including acid tolerance, bacteriocin production, antioxidation, and cholesterol-lowering activity [36-38]. Although some other strains of L. gasseri have been reported to show bile salt tolerance [21,36,37], the correlation between their bile salt tolerance and BSH activity has not been well demonstrated. In the present study, we first revealed both the bile salt tolerance capability and the related key enzymatic function (BSH activity) in L. gasseri JCM1131T, and these findings provide additional insights into the probiotic function of a well-known representative probiotic lactic acid bacterium.
Sequence and Phylogenetic Analyses of a Putative BSH Gene
Since both the BSH activity and bile salt resistance capability of L. gasseri JCM1131T were revealed, we then performed cloning and heterologous expression of the gene candidates associated with BSH activity. We found a putative bile salt hydrolase gene (designated lagBSH) in the L. gasseri JCM1131T genome (CP000413) based on sequence analyses and homology searches (Figure 3A). The putative lagBSH gene comprises 951 bp. The deduced amino acid sequence of LagBSH (316 amino acids) is related to the cholylglycine hydrolase family of the Ntn-hydrolase superfamily based on domain and sequence comparisons. Multiple amino acid sequence alignments revealed that the LagBSH protein shares five active-site residues (Cys, Arg, Asp, Asn, and Arg) with previously identified BSHs from Lactobacillus species (Figure 3B). Three-dimensional superposition analyses further revealed that the overall structure of LagBSH is composed of the well-known αββα-sandwich fold of cholylglycine hydrolase proteins (Supplementary Figure S1A), similar to the structure of CpBSH, the BSH from Clostridium perfringens 13 [29]. The putative LagBSH also conserves the catalytic active-site structure identified in CpBSH (Supplementary Figure S1B). Previous studies have reported that the N-terminal cysteine residue (Cys-2) plays a critical role in BSH activity as the catalytic nucleophile [29], and thus the putative protein would function as a BSH enzyme.
Amino acid sequence comparison analyses using the standard BLASTP protein-protein BLAST search revealed that LagBSH exhibits high similarity (up to 93.99% amino acid sequence identity) to known BSHs, especially BSHs from L. johnsonii strain 100-100 (93.99%), strain NCC533 (93.67%), and strain PF01 (93.35%) (Supplementary Table S1). LagBSH exhibited significantly lower similarity to LgBSH from L. gasseri FR4 (39.94%) [20], despite the fact that both BSH enzymes are derived from L. gasseri. The phylogenetic analysis demonstrated that the cholylglycine hydrolase family proteins are subdivided into several groups (Figure 4), and we found that LagBSH falls into the L. johnsonii BSH subgroup (Figure 4). LgBSH, by contrast, is categorized into the L. acidophilus/johnsonii BSH subgroup, indicating that LagBSH is phylogenetically distinct from LgBSH. These sequence, structural, and phylogenetic analyses further suggest that LagBSH would have BSH activity, as do known BSHs from other Lactobacillus species.
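As a toy illustration of the percent-identity figures quoted above, the following sketch computes identity over the non-gap columns of a pairwise alignment; the short sequences are fabricated stand-ins, not LagBSH.

```python
# Percent amino acid identity between two pre-aligned sequences,
# counting only columns where neither sequence has a gap.
def percent_identity(aln1: str, aln2: str) -> float:
    assert len(aln1) == len(aln2), "alignments must be the same length"
    pairs = [(a, b) for a, b in zip(aln1, aln2) if a != "-" and b != "-"]
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

print(f"{percent_identity('MCTGLALET-KDGLH', 'MCTGVALETIKDGLH'):.2f}%")
```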
[Figure 4 caption, partially recovered: phylogenetic tree of cholylglycine hydrolase family proteins [39]. Bootstrap values greater than 50% are shown by circle symbols whose size correlates with the bootstrap values. CpBSH, BSH from Clostridium perfringens 13, was used as an outgroup.]
Heterologous Expression of the Putative BSH Gene in E. coli
To obtain the recombinant protein, the lagBSH gene was commercially synthesized with codon optimization for heterologous expression in Escherichia coli and subcloned into the NdeI and EcoRI sites of the pET28-b expression vector. The gene was overexpressed in E. coli BL21 (DE3) Champion™ 21 and the recombinant protein was purified by Ni-affinity chromatography. Based on sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) analysis, the molecular weight of the purified His6-LagBSH protein was approximately 35.0 kDa (Supplementary Figure S2), nearly identical to the theoretical molecular weight based on its amino acid sequence.
Bile Salt Hydrolase Assay
The bile salt hydrolyzing activity of the purified LagBSH was determined by detecting the released glycine or taurine from conjugated bile salts as described previously [25]. We selected five major mammalian conjugated bile salts (TCA, TDCA, TCDCA, GCA, and GDCA) as substrates. The recombinant LagBSH clearly exhibited significant BSH activity toward all substrates tested. In particular, as shown in Figure 5A, LagBSH showed high activity toward TCA and TCDCA. The BSH activities toward the other three substrates (TDCA, GCA, and GDCA) were relatively low, suggesting that LagBSH is a functional BSH enzyme with high specificity for taurine-conjugated bile salts (TCA and TCDCA). Such substrate specificity of LagBSH is consistent with previous studies. In fact, the previously identified BSH from L. johnsonii PF01, which shares high homology with LagBSH (93.35%), exhibited deconjugation activity against taurine-conjugated bile salts but not glycine-conjugated bile salts [40], though most BSHs from lactic acid bacteria are more likely to deconjugate glycine-conjugated bile salts than taurine-conjugated bile salts [40]. LgBSH from L. gasseri FR4 showed higher BSH activity toward glycine-conjugated bile salts than taurine-conjugated ones [20], indicating that the substrate specificities are quite different between LagBSH and LgBSH, even though both enzymes are derived from the same species, L. gasseri. These enzymatic characteristics agree well with the previous report that the substrate specificity of BSH enzymes may be strain-specific [41]. Further structural and site-directed mutagenesis analyses of LagBSH would perhaps lead to a better understanding of its substrate preference. In total, LagBSH has apparent BSH activity, and this functional enzyme would confer bile detoxification on the host microorganism L. gasseri JCM1131T.
Biochemical Characterization of LagBSH
The optimum temperature and pH of LagBSH were determined. The purified LagBSH protein was mixed with taurocholic acid (TCA) over selected temperature (10-90 °C, in intervals of 10 °C) and pH (pH 3.0-10.0, in intervals of 1.0) ranges. The maximum BSH activity was observed at 37 °C (Figure 5B). LagBSH exhibited high BSH activity over a wide temperature range (10-50 °C), retaining above 80% of its original activity, whereas the enzyme activity declined significantly at higher temperatures (>60 °C) (Figure 5B). In addition, the maximum BSH activity of LagBSH was observed at pH 6.0 (Figure 5C). LagBSH exhibited stable activity, retaining above 80% of its original activity over a broad pH range (pH 3.0-8.0), whereas significant decreases in enzyme activity were observed at pH 9.0 and above (Figure 5C). The optimum temperature and pH of LagBSH (37 °C and pH 6.0) are highly consistent with the conditions of the human small intestine (around 37 °C and pH 5.0-8.0) and the growth conditions of strain JCM1131T in MRS broth (pH 6.0-6.5, 37 °C) according to the website of the Japan Collection of Microorganisms (https://jcm.brc.riken.jp/en/) (accessed on 8 June 2020). These biochemical features of LagBSH further support our hypothesis that this enzyme may contribute to bile detoxification by L. gasseri JCM1131T.
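A small sketch of how such a temperature profile can be summarized as relative activity (percent of maximum) follows; the activity values are illustrative only.

```python
# Normalize measured activities to the maximum (100%) and list the
# temperatures falling in the ">= 80% of original activity" window.
temps = [10, 20, 30, 37, 40, 50, 60, 70, 80, 90]   # degrees C
activity = [0.82, 0.88, 0.95, 1.00, 0.97, 0.84, 0.40, 0.15, 0.05, 0.02]

peak = max(activity)
relative = [100.0 * a / peak for a in activity]
stable = [t for t, r in zip(temps, relative) if r >= 80.0]
print("relative activity (%):", [f"{r:.0f}" for r in relative])
print("temperatures retaining >= 80% activity:", stable)
```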
Transcriptional Analysis of lagBSH Gene
To determine the regulation of lagBSH gene transcription, reverse transcription polymerase chain reaction (RT-PCR) analyses were conducted. We found that the lagBSH gene was constitutively expressed in L. gasseri JCM1131T (Figure 6, lane 1). In addition, lagBSH transcription was also observed in this strain after exposure to TCA (Figure 6, lane 2) and TDCA (Figure 6, lane 3), suggesting that exposure to TCA and TDCA has little effect on lagBSH transcription in strain JCM1131T. Since bile salt concentrations reach the millimolar level in the human small intestine and should be toxic to intestinal bacteria [12], strain JCM1131T seems to constantly produce the LagBSH enzyme to tolerate high concentrations of bile salts and survive in the gut.
Conclusions
In this study, we identified that Lactobacillus gasseri JCM1131T displays bile salt resistance toward primary bile salts and a taurine-conjugated secondary bile salt. The present study further demonstrated that strain JCM1131T exhibits apparent BSH activity, although this strain has so far been considered a non-BSH producer. Moreover, we clarified the correlation between bile salt resistance and BSH activity in L. gasseri, which has been rarely investigated and is poorly understood; indeed, among L. gasseri isolates, only two strains (L. gasseri FR4 and L. gasseri CNCM I-4884, isolated from chicken and a carious tooth, respectively [20,32]) have been reported to show both bile salt resistance and BSH activity through their BSH enzymes (LgBSH isolated from strain FR4 [20]). In the present study, we also found that the BSH enzyme from L. gasseri JCM1131T (LagBSH) is significantly different from LgBSH in terms of amino acid sequence homology, substrate specificity, and phylogenetic position. Since strain JCM1131T is a human-derived lactic acid bacterium that exhibits oxalate-degrading activity [23] and increases interleukin-10 production [22], this study could further expand and deepen the understanding of this beneficial probiotic bacterium.
In addition, we performed the enzymatic, transcriptional, and phylogenetic characterization of LagBSH isolated from strain JCM1131T. LagBSH functions as a BSH enzyme able to hydrolyze conjugated bile salts, especially taurocholic acid and taurochenodeoxycholic acid. We further demonstrated that the lagBSH gene is constantly transcribed in L. gasseri JCM1131T. Therefore, this functional enzyme would confer a survival advantage on strain JCM1131T within the human intestine by bile detoxification.
Because BSH activity exerts further positive effects on human health, such as weight loss and cholesterol lowering [19], future studies need to examine the probiotic effects related to the BSH activity of L. gasseri JCM1131T in in vivo animal model studies. Altogether, our findings provide additional insights into the probiotic function of the well-known probiotic lactic acid bacterium L. gasseri JCM1131T.
Light treatment for seasonal Winter depression in African-American vs Caucasian outpatients
AIM: To compare adherence, response, and remission with light treatment in African-American and Caucasian patients with Seasonal Affective Disorder. METHODS: Seventy-eight study participants, age range 18-64 (51 African-Americans and 27 Caucasians), recruited from the Greater Baltimore Metropolitan area, with diagnoses of recurrent mood disorder with seasonal pattern confirmed by a Structured Clinical Interview for the Diagnostic and Statistical Manual of Mental Disorders-Ⅳ, were enrolled in an open-label study of daily bright light treatment. The trial lasted 6 wk with flexible dosing of light, starting with 10000 lux bright light for 60 min daily in the morning. At the end of six weeks there were 65 completers. Three patients had Bipolar Ⅱ disorder and the remainder had Major depressive disorder. Outcome measures were remission (score ≤ 8) and response (50% reduction) in symptoms on the Structured Interview Guide for the Hamilton Rating Scale for Depression (SIGH-SAD), as well as symptomatic improvement on the SIGH-SAD and Beck Depression Inventory-Ⅱ. Adherence was measured using participant daily logs. Participant groups were compared using t-tests, chi square, and linear and logistic regressions. RESULTS: The study did not find any significant group difference between African-Americans and their Caucasian counterparts in adherence with light treatment or in symptomatic improvement. While symptomatic improvement and rate of treatment response were not different between the two groups, African-Americans, after adjustment for age, gender, and adherence, achieved a significantly lower remission rate (African-Americans 46.3%; Caucasians 75%; P = 0.02). CONCLUSION: This is the first study of light treatment in African-Americans, continuing our previous work reporting a similar frequency but a lower awareness of SAD and its treatment in African-Americans. Similar rates of adherence, symptomatic improvement, and treatment response suggest that light treatment is a feasible, acceptable, and beneficial treatment for SAD in African-American patients. These results should lead to intensified education initiatives to increase awareness of SAD and its treatment in African-American communities and thereby increase SAD treatment engagement. In African-American vs Caucasian SAD patients a remission gap was identified, as reported before with antidepressant medications for non-seasonal depression, demanding sustained efforts to investigate and then address its causes.
INTRODUCTION
Many species manifest behavioral and physiological changes in response to seasonal changes in day length. Today, the effect of changes in day length on humans may be smaller than in other animals because artificial lighting may blunt macroenvironmental photoperiodicity. However, a sizable proportion of individuals manifest more pronounced seasonal changes, with some having major depressive episodes in the fall and winter with spontaneous remission in spring and summer, defined as Seasonal Affective Disorder with winter pattern (SAD) [1]. Symptoms of SAD resemble seasonal changes in physiology and behavior in other photoperiodic mammals (including changes in appetite, weight, sleeping patterns, and patterns of social interaction). Light treatment is a safe and efficacious antidepressant intervention for SAD [1-4]. It may also prove to be a beneficial treatment for non-seasonal depression [4,5], as the mechanisms underlying the antidepressant action of bright light treatment overlap in part with those of antidepressant medications [6,7]. Although it appears that the prevalence of SAD in African-Americans is similar to that of the general population living at the same latitude [8], no previous study has focused on light treatment in African-Americans. Based on previously reported lower rates of adherence to antidepressants in African-American patients with non-seasonal depression [9,10], we hypothesized lower adherence to light treatment, and lower treatment response and remission, in African-American patients when compared to Caucasian patients with SAD with winter pattern.
MATERIALS AND METHODS
This was a six-week study primarily focused on prediction of remission and response following light treatment in African-American and Caucasian patients with Seasonal Affective Disorder with winter pattern. This study was an exploratory aim of a National Institutes of Health-supported study (1R34MH073797-01A2). Recruitment took place over three consecutive Fall-to-Winter intervals, beginning in Fall 2007 and ending in Winter 2010. Approval for the study protocol was obtained from the Institutional Review Board of the University of Maryland.
Participants
Participants in this study were patients with SAD with winter pattern, recruited from the greater Baltimore metropolitan area through posters, flyers, and local newspaper ads. Figure 1 is a flow chart showing the numbers of participants involved in the study from the initial telephone screening to the end. Patients self-identified as African-American or Caucasian, consistent with the methodology recommended by the National Research Council consensus document on race and ethnicity [11].
Seasonal pattern assessment questionnaire
Prescreening (screen #1) was conducted by phone using the Seasonal Pattern Assessment Questionnaire (SPAQ) [12]. This seventeen-item psychometric instrument, first designed by Rosenthal et al. [12], has been widely used as a research instrument in the study of SAD. It has been shown to be an accurate screening instrument for identifying patients with SAD [13,14]. In one study by Magnusson, the SPAQ was shown to have sensitivity, specificity, and positive predictive value of 94%, 73%, and 45%, respectively [15]. In another study, Young et al. [16] reported that the SPAQ has good psychometric properties in terms of score distribution, test-retest reliability, internal consistency, factor structure, and item-latent trait relationships. Several other studies have shown high sensitivity but low specificity for the SPAQ, making it a good screening but poor diagnostic tool. In our study, participants who received a SPAQ global seasonality score of 11 or greater, reported a winter pattern, and reported being at least moderately affected in daily functioning were considered screen-positive. These individuals were invited to attend an in-person screening and informed consent session and to be evaluated for eligibility to participate in the study. In-person screening (screen #2) was performed by trained clinicians completing the Structured Clinical Interview for the Diagnostic and Statistical Manual of Mental Disorders (DSM)-Ⅳ Axis Ⅰ Disorders-Clinician Version (SCID-Clinician Version) [17].
Inclusion and exclusion criteria
Inclusion criteria for the study were: age 18-64; history of Major Depressive Disorder or Bipolar Ⅱ Disorder with seasonal pattern specifier, by DSM-Ⅳ text revision, and/or a score of 21 or greater on the Structured Interview Guide for the Hamilton Rating Scale for Depression-SAD Version (SIGH-SAD) [18]. Eligible participants repeated the SIGH-SAD by phone interview 24 h prior to the first scheduled light session (screen #3), as well as on the morning of the first light session (screen #4). The participants had to demonstrate a consistent SIGH-SAD score of 21 or greater over 2 wk to be eligible for participation. Women of childbearing potential who were pregnant, nursing, or intending to become pregnant were excluded. Patients were also excluded if they used drugs or had a history of alcohol abuse in the past year, if they met SCID criteria for Bipolar Ⅰ or Psychotic disorders, if they reported past suicide attempts or active current suicidal ideation, or if they worked night shift. Other exclusion criteria were human immunodeficiency virus infection, systemic lupus, myocardial infarction or stroke, advanced glaucoma, and self-report of sensitivity to bright light or vision problems that are uncorrectable by glasses (e.g., if they answered negatively when queried as to whether they could distinguish colors or see stars at night). Treatment with antidepressants, mood stabilizers, or antipsychotic medications during the one month prior to screening was also a criterion for exclusion. After a complete description of the study, participants' understanding was evaluated using a research consent form booklet. Informed consent was obtained to show participants' agreement and voluntary enrolment in the study.
The methodology used in our study was similar to a protocol used in recent clinical trials comparing light treatment to cognitive-behavioral therapy in patients with SAD [22].
Measures
Depression scores were assessed using two measures: a semi-structured interview, the SIGH-SAD [18], and a self-report measure, the Beck Depression Inventory-Second Edition (BDI-Ⅱ) [23]. The primary outcome measure was SIGH-SAD remission and response status, as defined above.
Adherence measurement
Participants were asked to complete a daily log reporting the time and duration of light treatment used that day. Adherence logs were collected weekly by members of the research team and recorded. Adherence was defined as the number of days on which light treatment was used as prescribed, and non-adherence as the total number of days missed or not used as prescribed.
Statistical analysis
Descriptive measures include means and standard deviations for continuous variables and proportions and percentages for categorical variables. Comparisons of the participants by race were made using t-tests for continuous variables (change from baseline for mood measures and adherence) and χ² tests for categorical variables. Multiple linear regression analysis was used to compare race subgroups on changes in depression scores with adjustment for adherence, age, education, and gender. A sensitivity analysis for remission was performed including all participants who entered the study rather than just those who completed it. Four participants who did not receive treatment were also included. Assumptions for the sensitivity analysis were that: (1) if the SIGH-SAD score was ≤ 8 (i.e., remission) at week 6, then the patient was considered in remission at the end of the study; (2) if SIGH-SAD data were missing at week 6, but the rating at week 4 was ≤ 8, then the patient was considered in remission at the end of the study; and (3) if data for both weeks 4 and 6 were missing, then the patient was considered not in remission. For the sensitivity analysis of adherence, all patients were included. If data for one week were missing, we counted that as 7 d missed.
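These imputation rules translate directly into code; below is a minimal sketch (the function name and the use of None for missing ratings are illustrative assumptions, not the authors' software):

```python
def imputed_remission(sigh_sad_week6, sigh_sad_week4):
    """Classify end-of-study remission (SIGH-SAD <= 8) under the stated rules.

    Pass None for a missing rating.
    """
    if sigh_sad_week6 is not None:
        return sigh_sad_week6 <= 8        # rule 1: week-6 rating available
    if sigh_sad_week4 is not None:
        return sigh_sad_week4 <= 8        # rule 2: carry the week-4 rating forward
    return False                          # rule 3: both missing -> not in remission

print(imputed_remission(7, None))     # True
print(imputed_remission(None, 8))     # True
print(imputed_remission(None, None))  # False
```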
RESULTS
No differences were found between African-Americans and Caucasians in mean age, gender, marital status, or proportion of bipolar Ⅱ patients. The overall educational level was lower in African-Americans than Caucasians; specifically, a higher percentage of African-Americans had a high school education or less (52% African-Americans vs 26% Caucasians, P = 0.03), as shown in Table 1. There were no differences in frequency of side effects or adverse reactions between the two groups, and none were rated severe.
In Table 2 we present a comparison between the two racial groups on pre-treatment and post-treatment depression scores using the SIGH-SAD and BDI-Ⅱ. There was no statistical difference in average baseline and post-treatment depression scores between Caucasian and African-American patients. Table 3 presents the main outcome measures following 6 wk of light treatment. Although both race groups showed similar decreases in the SIGH-SAD and BDI-Ⅱ scores and similar post-treatment response rates, the proportion of remissions at post-treatment was significantly higher in Caucasians than in African-Americans (46% African-Americans vs 75% Caucasians, P = 0.02) (Figure 2). In a logistic regression model of remission at 6 wk adjusted for age, gender, education, and percent adherence, African-Americans were less likely to achieve remission than Caucasians (OR = 0.25; 95%CI: 0.08-0.81; P = 0.02). Table 3 also shows no group differences in adherence expressed as number of days missed or percent days adherence (equivalent measures). Adherence was 73% for African-Americans and 82% for Caucasians.
After linear regression adjustment for age, gender, education, and adherence, there were no significant differences in treatment-related changes in depression scores between African-Americans and Caucasians. This is shown in Table 4. Table 5 presents results for the intent-to-treat sample, including all those who were randomized. Based on the assumptions described in the Methods (see sensitivity analysis) for n = 78, in the logistic regression model of remission at 6 wk (adjusted for gender, percent adherence, and education), African-Americans were less likely to achieve remission than Caucasians (OR = 0.21, 95%CI: 0.07-0.66; P = 0.008). Those who were lost to follow-up at 6 wk were compared with those with 6-wk data using χ² and t-test analyses. No significant race group differences were found for gender (P = 0.36), age (P = 0.96), baseline SIGH-SAD (P = 0.68), or baseline BDI-Ⅱ (P = 0.71). An ANCOVA model including race, gender, and age did not show a significant difference between the groups in light exposure duration (F = 2.480; df = 1, 60; P = 0.12).
DISCUSSION
In this study, African-American participants with SAD had a lower remission rate than Caucasian participants following six weeks of light treatment, but there were no significant differences between the two race groups in adherence, average symptomatic improvement, and response rates. To our knowledge, this is the first study investigating differences between African-Americans and Caucasians in response to light treatment in SAD. Racial differences in treatment response and remission have been previously reported with pharmacological treatment of non-seasonal depression. From the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) trial, Lesser et al [24] reported that African-Americans had lower remission rates on the 17-item Hamilton Rating Scale for Depression and the 16-item Quick Inventory of Depressive Symptomatology-Self Report (18.6% and 22%, respectively) compared to whites (30.1% and 36.1%). However, there was no statistically significant difference between races after adjusting for baseline socioeconomic, clinical, and demographic differences. In their analysis of DNA samples from the STAR*D trial, Binder et al [25] showed that a single-nucleotide polymorphism (SNP; rs10473984) variation within the corticotrophin-releasing hormone binding protein locus appeared contributory to poor response and lower remission with citalopram treatment in African-American and Hispanic patients. In an earlier report from the STAR*D trial, McMahon et al [26] reported that the A allele of the SNP marker in HTR2A, which encodes the serotonin 2A receptor, was over six times more frequent in white than in black participants and was responsible for better response to citalopram in white participants compared to blacks. Among the many factors that may explain why certain outcomes such as adherence and response rates were similar in African-Americans and Caucasians in our study, in contrast to the significant racial differences reported in previous pharmacological trials, African-Americans may perceive light therapy as more acceptable and less stigmatizing than medications. This hypothesis was not formally tested.
In general, the response rate of 81% and remission rate of 52% reported in this study are somewhat more favorable compared to other studies on light treatment for SAD [3,20,21,27] . For instance, Lam et al [28] compared light treatment using 10000 lux bright light for 30 min daily with fluoxetine 20 mg orally daily for 8 wk.
Their study showed no significant difference in clinical response rate, which was 67% for both groups, while remission rates were 50% for light therapy and 54% for fluoxetine. We also note that there was no significant difference in adherence to daily light treatment between African-Americans and their Caucasian counterparts in our study. The adherence of 73% in African-Americans and 82% in Caucasians in our study is similar to the 83.3% found by Michalak et al [29], and it compares favorably with adherence to antidepressants [30]. When compared to antidepressants, light treatment has more benign side effects [28], potentially leading to higher adherence. Other side effects, such as precipitating hypomania and mania, are similar in light treatment and antidepressants [31]. Additionally, more rapid improvement with light treatment [28,32] may contribute to better adherence with this non-pharmacological treatment.
The previously reported higher melanin content of the pupil and retinal pigment epithelium in African-Americans [33,34] may reduce retinal illuminance in African-American SAD patients during light treatment. However, the magnitude of this effect is unclear, as is the effect of such reduced illuminance on remission.
Study limitations
Limitations of this study include an open-label design. We also did not collect information on the number of lifetime depressive episodes or the duration of the current episode. The flexible dosing of treatment in our study, although representative of real-world clinical practice, may have affected the outcomes. In addition, our generalizability may be limited, as our inclusion and exclusion criteria may have led to selectively including a less severe subset of patients, having excluded patients with a history of suicide attempts, current suicidal ideation, Bipolar Ⅰ disorder, and comorbid substance abuse, as well as patients on often-used psychotropic medications. Including patients on psychotropic medications in the study, although increasing its generalizability, would have required an unaffordably larger sample size and budget to permit appropriate statistical adjustments. This is the first study directly comparing light treatment outcome in two groups, African-American and Caucasian patients. A similar average degree of symptomatic improvement and adherence in African-Americans, combined with a previously reported lower awareness of SAD in this minority group [8], justify increasing psychoeducation outreach efforts regarding SAD and light treatment in that subsample of the population. However, there is a need to better understand the lower remission rate in African-Americans, similar to that seen with antidepressant medications. In the subset of patients who respond but do not achieve remission with light treatment, combination with antidepressant medication or cognitive behavioral therapy is currently advocated. Considering the differences in pupil and retinal pigmentation in African-Americans, it is possible that specific protocols with distinct light intensity and wavelength could reduce the reported racial remission gap. This is particularly important, as complete remission is the target for all treatment modalities for mood disorders, considering the implications for health, functioning, and quality of life.
ACKNOWLEDGMENTS
The authors thank Monika Acharya, MD, Muhammad Tariq, MD, Humaira Siddiqi, MD, and Partam Manalai, MD for their help in screening and rating patients. The authors declare no conflict of interest within the past 5 years.
Background
The authors' prior research has suggested that Seasonal Affective Disorder (SAD) prevalence in African-Americans is similar to that of the general population living at a similar latitude, and that awareness of the disorder and its treatment is low among African-Americans. This study is the first to investigate the effects of light treatment, generally considered a first-line treatment for SAD, in African-Americans as compared to Caucasian patients.
Research frontiers
Research in this area is moving towards identification of biological markers of prediction of response to light therapy, as well as combination of treatments and individualizing modalities and sequences of treatment.
Innovations and breakthroughs
Multiple areas of investigation have included ways of administering light treatment (e.g., dawn simulators), the wavelength of treatment (lower wavelengths stimulating novel photoreceptors, e.g., melanopsin receptors in the retinal ganglion cells), and the use of psychotherapy and combination treatments to prevent future episodes of SAD rather than just treating the current episode. African-American patients may require either a modified protocol of light treatment or an early combination of different treatment modalities.
Applications
SAD can be quite disabling. Light treatment is safe and effective for SAD and appears attractive especially to patients concerned about the side effects of antidepressant medications, or who do not have time for psychotherapy. Furthermore, for patients who require combination with antidepressants, the dose of medication could be kept lower, thus minimizing side effects. African-American SAD patients manifested a remission gap in comparison to Caucasian patients, and additional research will be needed to uncover what leads to it in order to design alternative treatment regimens to improve outcomes.
LOW PERFORMANCE-RELATED PHYSICAL FITNESS LEVELS ARE ASSOCIATED WITH CLUSTERED CARDIOMETABOLIC RISK SCORE IN SCHOOLCHILDREN: A CROSS-SECTIONAL STUDY
Purpose. The study aim was to verify if there were associations between performance-related physical fitness levels and the clustered cardiometabolic risk score among children and adolescents. Methods. The cross-sectional study involved 1200 (655 females) children and adolescents aged 7–17 years. Performance-related physical fitness levels (upper limb strength [ULS], lower limb strength [LLS], agility, speed, and cardiorespiratory fitness [CRF]) were evaluated and categorized as healthy or unhealthy levels. Waist circumference, systolic blood pressure, glucose, and blood lipoproteins (triglycerides, total cholesterol, and high-density lipoprotein cholesterol) were measured. The clustered cardiometabolic risk score constituted the sum of internationally-derived standardized values (z-scores) for each risk factor divided by 5. Associations between performance-related physical fitness levels and the clustered cardiometabolic risk score were determined with linear regression models. Results. Participants with healthy ULS levels exhibited a less favourable clustered cardiometabolic risk score, whereas healthy levels of LLS, agility (only in girls), and CRF (only in boys) were related with a more favourable clustered cardiometabolic risk score. ULS (β: –0.091 [95% CI: –0.120; –0.062]), LLS (β: –0.272 [95% CI: –0.368; –0.177]), and CRF (β: –0.218 [95% CI: –0.324; –0.112]) were inversely associated with the clustered cardiometabolic risk score, while agility (β: 0.112 [95% CI: 0.082; 0.142]) and speed (β: 0.079 [95% CI: 0.039; 0.119]) demonstrated a positive association with the clustered cardiometabolic risk score. Conclusions. Our results emphasize the importance of following moderate-to-vigorous physical activity guidelines to better develop physical fitness levels for the maintenance of cardiometabolic health during childhood and adolescence.
Introduction

The simultaneous increase of risk factors such as behavioural habits and metabolic disorders characterizes the metabolic syndrome (MetS) [1]. The majority of children and adolescents do not present chronic outcomes related to MetS, but they are already in the initial stages of MetS risk factor development [2]. These risk factors are metabolic disorders such as higher levels of triglycerides and total cholesterol, hypertension, body composition, insulin resistance, and lower levels of high-density lipoprotein cholesterol. Additionally, the literature includes behavioural habits such as low health-related physical fitness levels on the risk factor list [3]. It has been reported that these risk factors for MetS start early in childhood [4], and they can track into adulthood [5].
MetS among children and adolescents has been widely investigated by using a clustered cardiometabolic risk score [2]. It is considered more efficient for verifying the effective risk for cardiovascular diseases compared with traditional MetS diagnosis criteria because it comprises as much information concerning the risk factors as possible, with the consideration of the traditional risk factors for cardiovascular diseases, to construct a composite score [6].
Previous studies reported that low levels of health-related physical fitness components were associated with less favourable levels of cardiometabolic risk factors. Moreover, it is established that there is an inverse association with the clustering of cardiometabolic risk [7-9]. In agreement with those findings, active children and adolescents seem to be less likely to present an unhealthy cardiometabolic profile [10], since physical activity practice positively affects overall health; the cardiorespiratory, metabolic, and musculoskeletal systems [11]; and motor competence, which directly influences performance-related physical fitness levels [12].
Research has often demonstrated the relationship between health-related physical fitness and the clustered cardiometabolic risk; as far as we know, however, this is the first study involving a Brazilian sample to investigate performance-related physical fitness levels in relation to the clustered cardiometabolic risk score among children and youth. Furthermore, one can assume that cardiometabolic risk develops when children and adolescents exhibit low performance-related physical fitness levels. This assumption becomes important since motor competence is closely related to daily physical activities and sport practice. The present study aimed to verify if there were associations between performance-related physical fitness levels and the clustered cardiometabolic risk score among children and adolescents.
Material and methods
Participants and study design

This cross-sectional study involved a sample composed of 1200 children and adolescents (655 females) aged 7-17 years from public and private schools of Santa Cruz do Sul, Brazil. The present research is part of a larger survey named 'Schoolchildren's Health - Phase II'.
The study was carried out in 25 randomly selected schools in the Santa Cruz do Sul municipality, which has a total of 50 registered schools and a population of 20,380 students. The data collected represent the entire municipality (urban and rural areas), in view of the density of the schoolchildren population in each region (centre, north, south, east, and west). The sample power calculation was performed by using the G*Power 3.1 program (Heinrich-Heine-Universität, Düsseldorf, Germany). A minimum of 655 students was stipulated, with the consideration of a test power of (1 − β) = 0.95, a significance level of α = 0.05, and an effect size of 0.30, as suggested by Faul et al. [13] for the Poisson regression analysis (presence vs. absence of cardiometabolic risk as a dependent variable).
Body composition and socioeconomic status assessment
All evaluations were carried out in the University of Santa Cruz do Sul facilities by trained professionals. Body mass index (BMI) was calculated as the ratio between weight and height squared and classified as normal weight, overweight, or obesity [14]. The participants were categorized as having a low, medium, or high socioeconomic background, as self-reported by using the Brazilian Economic Classification Criteria questionnaire [15].
Performance-related physical fitness assessment

The performance-related physical fitness levels were determined in accordance with the standards, criteria, and tests established by the Projeto Esporte Brasil (PROESP-BR) [16]. The following components were evaluated: (1) upper limb strength, verified through the medicine ball throw test; (2) lower limb strength, verified through the horizontal jump test; (3) agility, verified through the square test; (4) speed, verified through the 20-meter run test; and (5) cardiorespiratory fitness, verified through the 9-minute running and walking test. Data were categorized by the cut-off points established by PROESP-BR (weak, reasonable, good, very good, and excellent), with the consideration of sex and age, and dichotomized as: (1) unhealthy levels (weak or reasonable); and (2) healthy levels (good, very good, or excellent).
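As a small illustration, this dichotomization can be written as a lookup (a sketch only; the English category labels follow the text above):

```python
UNHEALTHY_CATEGORIES = {"weak", "reasonable"}

def dichotomize(proesp_category: str) -> str:
    """Map a PROESP-BR cut-off category to the healthy/unhealthy dichotomy."""
    return "unhealthy" if proesp_category.lower() in UNHEALTHY_CATEGORIES else "healthy"

print(dichotomize("good"))        # healthy
print(dichotomize("reasonable"))  # unhealthy
```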
Clustered cardiometabolic risk score

The waist circumference (WC) was evaluated by using an inelastic plastic tape measure. The systolic blood pressure (SBP) was measured with the children sitting at rest, by using a sphygmomanometer (B-D®, aneroid, Germany) with a cuff suitable for the children's arm circumference and a stethoscope (Premium, Rappaport, China). The levels of fasting glucose, triglycerides, total cholesterol (TC), and high-density lipoprotein cholesterol (HDL-C) were determined through blood collection, after 12-hour fasting, carried out with Miura 200 automated equipment (I.S.E., Rome, Italy) by using commercial DiaSys kits (DiaSys Diagnostic Systems, Germany).
Before analysis, skewed variables (WC, TC/HDL-C ratio, and triglycerides) were logarithmically transformed by the natural logarithm. The risk factor variables (WC, SBP, glucose, triglycerides, and TC/HDL-C ratio) were standardized in accordance with sex- and age-specific international reference values by using the following equation, suggested by Stavnsbo et al. [17]:

z-score = (X_observed − X_predicted) / SD_reference,

where X_observed is the observed Brazilian value, X_predicted the predicted mean from the international reference, and SD_reference the international reference standard deviation. The clustered cardiometabolic risk score was derived by summing the SBP, WC, triglycerides, glucose, and TC/HDL-C ratio z-scores and dividing the value by 5.
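For illustration, the standardization and clustering steps could be computed as in the following sketch (Python; the numeric reference values are made-up placeholders, not the published sex- and age-specific values of Stavnsbo et al. [17]):

```python
import math

def z_score(observed: float, predicted_mean: float, sd_reference: float) -> float:
    """z = (observed value - predicted mean) / international reference SD [17]."""
    return (observed - predicted_mean) / sd_reference

def clustered_risk_score(z_wc: float, z_sbp: float, z_glucose: float,
                         z_tg: float, z_tc_hdl: float) -> float:
    """Clustered cardiometabolic risk score: sum of the five z-scores divided by 5."""
    return (z_wc + z_sbp + z_glucose + z_tg + z_tc_hdl) / 5

# Skewed variables (WC, triglycerides, TC/HDL-C ratio) are natural-log
# transformed first, so their reference values must also be on the log scale.
z_wc = z_score(math.log(72.0), predicted_mean=4.20, sd_reference=0.10)
z_sbp = z_score(108.0, predicted_mean=104.0, sd_reference=9.0)
z_glucose = z_score(90.0, predicted_mean=87.0, sd_reference=6.0)
z_tg = z_score(math.log(95.0), predicted_mean=4.35, sd_reference=0.35)
z_tc_hdl = z_score(math.log(3.4), predicted_mean=1.15, sd_reference=0.20)
print(round(clustered_risk_score(z_wc, z_sbp, z_glucose, z_tg, z_tc_hdl), 3))
```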
Statistical analysis
The Statistical Package for the Social Sciences (SPSS, version 23.0, IBM, Armonk, NY, USA) software was used for all statistical analyses. A descriptive analysis was performed to describe the Brazilian sample. The independent Student's t-test was applied to verify mean differences between sexes. The quantitative variables were also analysed by linear regression models considering the performance-related physical fitness levels as independent variables and the clustered cardiometabolic risk score as a dependent variable. Unstandardized and standardized coefficients (β) with 95% confidence intervals (CI) were presented. Preliminary analyses evaluated which confounders should adjust the models. The confounder variable tested should be associated with both the independent and dependent variables. The models were adjusted for BMI and socioeconomic status (more details within Table 3). Values of p < 0.05 were considered significant in all analyses.
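A minimal sketch of such a model in Python (using statsmodels rather than SPSS; the data frame and column names are assumptions, not the study's actual data) is shown below:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative data: one row per participant (all values are made up).
df = pd.DataFrame({
    "risk_score":  [0.53, -0.12, 0.88, -0.40, 0.10, 0.75, -0.25, 0.30],
    "crf_healthy": [0, 1, 0, 1, 1, 0, 1, 0],     # 1 = healthy CRF level
    "bmi":         [22.1, 19.5, 25.3, 18.9, 20.4, 24.8, 19.9, 23.0],
    "ses":         ["high", "medium", "low", "high",
                    "medium", "low", "high", "medium"],
})

# Fitness level as independent variable, clustered cardiometabolic risk score
# as dependent variable, adjusted for BMI and socioeconomic status.
model = smf.ols("risk_score ~ crf_healthy + bmi + C(ses)", data=df).fit()
print(model.params["crf_healthy"])  # adjusted (unstandardized) coefficient
```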
Ethical approval
The research related to human use has complied with all the relevant national regulations and institutional policies, has followed the tenets of the Declaration of Helsinki, and has been approved by the Committee of Ethics in Research with Human Subjects of the University of Santa Cruz do Sul, under protocol number 2959/2011.
Informed consent
Informed consent has been obtained from the parents of all individuals included in this study.
Results

Table 1 presents the descriptive characteristics of the participants. Regarding the performance-related physical fitness, boys performed better in all tests, and these differences were statistically significant (p < 0.05). Furthermore, the majority of the participants were from families with high socioeconomic status (53.8%). Lastly, 20.3% and 7.5% of the sample were overweight and obese, respectively.

Discussion

The present study demonstrated a high percentage of unhealthy levels of all physical fitness components in the sample. Overall, the high prevalence of unhealthy performance-related physical fitness levels is in line with the results obtained by Mello et al. [18] among 8750 Brazilian children and adolescents by using the PROESP-BR database. A recent tracking study conducted by True et al. [19], evaluating 9 measures of physical fitness at 5 time points throughout childhood to adolescence/young adulthood, demonstrated the importance of developing healthy physical fitness early in life, since physical fitness levels track into young adulthood for boys (r: 0.29; 0.79) and girls (r: 0.23; 0.89); the same refers to the cardiometabolic risk factors from childhood to adolescence [20] and adulthood [5]. These findings suggest a potential future problem for our sample, since being physically fit during childhood and adolescence is associated with a lower cardiometabolic risk later in life, as evidenced by a systematic review of longitudinal studies [21].
According to our results, performance-related physical fitness levels were associated with the clustered cardiometabolic risk score. Children and adolescents with healthy upper limb strength showed a less favourable clustered cardiometabolic risk score, whereas healthy levels of lower limb strength, agility (only in girls), and cardiorespiratory fitness (only in boys) implied more favourable clustered cardiometabolic risk scores. In turn, the linear regression models revealed an inverse association between upper limb strength and the clustered cardiometabolic risk score. These findings concerning the upper limb strength are not in concordance with evidence that demonstrates the preventive importance of a better muscular strength level during youth for overall and cardiometabolic health [22]. This might be attributable to methodological differences concerning muscular strength assessment: studies usually calculate z-scores from 2 or more muscular power measurements to obtain a muscular fitness variable. Also, the association between healthy upper limb strength and a less favourable clustered cardiometabolic risk score found in the present study could be explained by physical fitness differences depending on body composition. Lopes et al. [23] investigated levels of upper and lower limb strength among obese and non-obese adolescents. Their results demonstrated that obese individuals outperformed the non-obese ones in both muscular strength tests. Additionally, the higher body composition levels assessed by the BMI equation and the bioelectric impedance method were directly and significantly correlated with both upper and lower limb strength.
With respect to the other performance-related physical fitness components, the linear regression models showed that lower limb strength and cardiorespiratory fitness were inversely associated with the clustered cardiometabolic risk score for all participants, whereas agility and speed presented a positive association with the clustered cardiometabolic risk score. These results demonstrate that the better the performance-related physical fitness components are, the more favourable the clustered cardiometabolic risk score seems to be. Zaqout et al. [7], in a 2-year longitudinal study with 1635 European children aged 6-11 years, observed a similar association between all performance-related physical fitness components evaluated and the clustered cardiometabolic risk score, except for the upper limb strength (which slightly contrasts with the current study). On the basis of this evidence, children and adolescents should be encouraged to follow moderate-to-vigorous physical activity guidelines to better develop their motor competence skills and, consequently, their performance-related physical fitness levels [24,25].
The literature demonstrates that children with higher motor competence outperform those with lower motor competence in physical fitness tests [12]. This cause-and-effect relationship could be explained by the fact that children who are more physically active have more chances to develop their motor skills and, consequently, continue their participation in sports as they grow [26], which could lead to a better future cardiometabolic profile [10]. Besides, for better cardiometabolic health, it is important to present both low body composition levels and healthy performance-related physical fitness levels. An analysis of physical fitness components and cardiometabolic risk mediated by body composition variables demonstrated that higher physical fitness was associated with a more favourable cardiometabolic risk, particularly when it was also accompanied by a good body composition level [27,28].
The present study has some strengths. The first one is the representative, randomly selected sample of children and adolescents from a municipality of Southern Brazil. Secondly, a major strength is the use of common international reference values to standardize each of the cardiometabolic risk factors, as suggested by Stavnsbo et al. [17], rather than the traditional MetS diagnosis criteria. This is an accepted method for defining children's and adolescents' cardiometabolic health in the literature, more accepted than sample-specific methods to calculate z-scores. However, the present study also has some limitations that should be noted. The cross-sectional design makes it impossible to establish a cause-and-effect relationship between the performance-related physical fitness components and the clustered cardiometabolic risk. The link between physical fitness levels during childhood and adolescence and the better future cardiometabolic profile hypothesized in our discussion remains an interesting theoretical construct, not fully supported by our data, which could and should be further tested within future studies with different methodological design approaches. Finally, the results may have been influenced by other potential confounding variables that were not available in the statistical analysis (e.g., moderate-to-vigorous physical activity level and pubertal status).
Conclusions
In conclusion, the levels of performance-related physical fitness components were inversely associated with the cardiometabolic risk. Our results emphasize the importance of following moderate-to-vigorous physical activity guidelines to better develop physical fitness levels for the maintenance of cardiometabolic health during childhood and adolescence.
FIP1L1–PDGFRα-Positive Loeffler Endocarditis—A Distinct Cause of Heart Failure in a Young Male: The Role of Multimodal Diagnostic Tools
The presence of the Fip1-Like1-platelet-derived growth factor receptor alpha (FIP1L1–PDGFRα) fusion gene represents a rare cause of hypereosinophilic syndrome (HES), which is associated with organ damage. The aim of this paper is to emphasize the pivotal role of multimodal diagnostic tools in the accurate diagnosis and management of heart failure (HF) associated with HES. We present the case of a young male patient who was admitted with clinical features of congestive HF and laboratory findings of hypereosinophilia (HE). After hematological evaluation, genetic tests, and ruling out reactive causes of HE, a diagnosis of FIP1L1–PDGFRα-positive myeloid leukemia was established. Multimodal cardiac imaging identified biventricular thrombi and cardiac impairment, thereby raising suspicion of Loeffler endocarditis (LE) as the cause of HF; this was later confirmed by a pathological examination. Despite hematological improvement under corticosteroid and imatinib therapy, anticoagulation, and patient-oriented HF treatment, there was further clinical progression with subsequent multiple complications (including embolization), which led to the patient's death. HF is a severe complication that diminishes the demonstrated effectiveness of imatinib in the advanced phases of Loeffler endocarditis. Therefore, accurate identification of heart failure etiology in the absence of endomyocardial biopsy is particularly important for ensuring effective treatment.
Introduction
Hypereosinophilic syndrome (HES) represents a complex diagnosis involving a broad field of disorders in which the main role of the pathological pathway is attributed to eosinophilic cells, which are associated with hematological malignancies and reactive causes [1]. The diagnosis of HES includes the confirmation of organ damage in addition to the primary criterion of a peripheral eosinophil blood count of >1500 cells/µL, as accepted by the current guidelines and consensus [1,2].
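Expressed as a simple check (a sketch only; the function name and the boolean organ-damage flag are illustrative, not a clinical tool):

```python
def meets_hes_criteria(eosinophils_per_ul: float, organ_damage: bool) -> bool:
    """HES: peripheral eosinophils > 1500 cells/uL plus evidence of organ damage."""
    return eosinophils_per_ul > 1500 and organ_damage

print(meets_hes_criteria(2100, True))   # True
print(meets_hes_criteria(2100, False))  # False: hypereosinophilia without organ damage
```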
Organ damage in the presence of hypereosinophilic states is based on multimodal action of the molecules that are released by activated eosinophilic cells [3]. Persistent eosinophilia in the blood flow leads to infiltrative lesions affecting various territories (heart, lungs, liver, skin, etc.), which result in different degrees of dysfunction or secondary complications [3][4][5][6]. Cardiac involvement, known as Loeffler disease, alters the structure of the endothelium and is associated with a high probability of intracardiac thrombus formation.
In the revised 2022 World Health Organization (WHO) classification of eosinophilic disorders, diagnostic criteria were established relating to Fip1-Like1-platelet-derived growth factor receptor alpha (FIP1L1-PDGFRα) gene rearrangement when it is associated with hypereosinophilia (HE). Specifically, the criteria require the presence of a myeloid neoplasm with predominant eosinophilia and evidence of a FIP1L1-PDGFRα fusion gene, which can be detected by fluorescence in situ hybridization (FISH) or reverse transcription polymerase chain reaction [2]. Studies regarding this disorder reveal the prevalent distribution of this syndrome in male patients with no apparent molecular or physiopathological basis [5,8,9]. The presence of FIP1L1-PDGFRα fusion in individuals with HES helps in predicting a possible response to imatinib (a tyrosine kinase inhibitor) treatment, which is capable of suppressing the fusion gene [8][9][10].
The aim of this paper is to emphasize the importance of accurate diagnosis and the use of multimodal diagnostic tools in describing the cardiac involvement of HES when it is secondary to chronic myeloid leukemia with positive FIP1L1-PDGFRα fusion-specifically in the context of a young male patient who presented the symptoms and signs of heart failure.
Case Presentation
A 26-year-old Caucasian male patient, who was also a smoker, was admitted for exertional dyspnea, fatigue, and bilateral lower limb edema, which had progressively worsened one week prior to admission. Besides having a history of tobacco use, the patient denied any medical history or medication. Furthermore, he described his symptoms as beginning two months before presentation, with continual progression. A pneumological exam (clinical evaluation, pulmonary function tests, and thoracic computed tomography (CT)) performed in an ambulatory setting raised the suspicion of asthma, and diffuse pulmonary fibrosis was diagnosed. At admission, the clinical examination indicated a mildly elevated blood pressure (135/100 mmHg), tachycardia (heart rate of 93 beats per minute, bpm), bilateral edema of the lower limbs, and hepatomegaly.
Initial laboratory data showed a total white blood cell count with significant eosinophilia (i.e., 42.7% of the white blood cells), mild anemia (hemoglobin of 11.9 g/dL, hematocrit 36%), mild thrombocytopenia (128,000/mm³), and normal electrolyte levels as well as normal renal and liver function. Acute inflammatory reactants (erythrocyte sedimentation rate and fibrinogen) were within normal limits. The baseline N-terminal pro-B-type natriuretic peptide (NT-proBNP) value was 6541 pg/mL (cut-off value <125 pg/mL), and a lactate dehydrogenase value of 1000 U/L was also detected. The electrocardiogram illustrated sinus tachycardia at 93 bpm with a normal QRS axis; however, an incomplete right bundle branch block, as well as negative T waves in the lateral and inferior leads, were also detected (see Figure 1).
Two-dimensional transthoracic echocardiography (TTE) disclosed a mildly reduced left ventricular ejection fraction (LVEF: 45%, Simpson formula), moderate-to-severe mitral regurgitation, minimal pericardial effusion, and hyperechogenic masses that encompassed approximately 50% of both the ventricular cavities (see Figure 2). The TTE findings raised the question of Loeffler endocarditis in the presence of hypereosinophilia.
The coronary CT angiogram revealed a normal coronary anatomy with no atherosclerotic lesions or anatomic anomalies, but it did raise a suspicion of Loeffler endocarditis in the presence of biventricular thrombi; in addition, the contrast-enhanced thoraco-abdominal CT scans revealed small mediastinal adenopathic masses, multiple splenic infarctions (maximum size of 37/10 mm), and minimal ascites, results that were in accordance with those of the abdominal ultrasound.
To better characterize the ventricular mass, cardiac magnetic resonance imaging (CMR) was scheduled. An amorphous mass was identified, whereby it occupied the left ventricular apex and was found to extend into the inferior and inferolateral medio-cardiac segments, with overall dimensions of 7 cm × 5 cm × 4.7 cm. The apex of the right ventricle was found to be filled by a similar mass, which extended and covered the papillary muscles but had smaller dimensions. Neither the mass nor the adjacent myocardium showed signal changes suggestive of edema; further, since the native characterization of the mass was nonspecific, late gadolinium enhancement (LGE) imaging, conducted following contrast administration, demonstrated the lack of enhancement of the masses, with strong enhancement of the endocardial contours adjacent to the masses, thereby indicating fibrosis (see Figure 3). The CMR characteristics of the ventricular masses suggested a thrombotic structure, and an anticoagulation enoxaparin weight-adjusted dose regimen was initiated alongside high doses of corticosteroids in order to suppress the hematological disorder.
The presence of parasitic infections, allergies, or adrenal deficiency was excluded following additional assessments. The presence of an autoimmune disease as a secondary cause of eosinophilia was also ruled out (negative ANCA, ANA, normal values for the complement complex, and a negative Coombs test).
The hematological findings from peripheral blood smears were 16% eosinophilic cells, 57% neutrophils, and 18% lymphocytes on one examination, and 42% eosinophilic cells, 30% neutrophils, 24% lymphocytes, and 0% basophilic cells on another; the bone marrow biopsy revealed anisocytosis. According to the recommendations, analyses were conducted to determine the status of the JAK-2 mutation, bcr-abl, and FIP1L1-PDGFRα fusion, of which a positive result for the presence of the FIP1L1-PDGFRα fusion gene was returned via FISH. Hereafter, a diagnosis of HES, defined as myeloid neoplasm with FIP1L1-PDGFRα rearrangement with an aspect of Loeffler endocarditis complicated by heart failure (HF), was established two months after the patient's first presentation. As such, targeted therapy with a tyrosine kinase inhibitor (TKI; imatinib 400 mg/day) was initiated alongside the preexistent HF regimen, which had been administered since the first week of the index admission, thereby replacing pre-treatment with corticosteroids. Initially, the TKI therapy was well tolerated by the patient; however, one month later, there was a relapse in HF symptomatology and the patient experienced intolerable digestive symptoms (diffuse abdominal pain, nausea, and vomiting). Echocardiography revealed a slight reduction in the extent of the intraventricular masses. Still, following the hematologist's advice, the imatinib dose was reduced, and TKI administration was eventually halted due to undesirable gastrointestinal side effects. Despite extensive treatment (including parenteral anticoagulation) and careful follow-up, the cardiac impairment progressively worsened, and there were additional complications of sepsis, multiple embolizations (spleen, kidney, brain, and acute occlusion of the terminal aorta), as well as severe thrombocytopenia, which led to the patient's death.
The pathology examination revealed biventricular hypertrophy associated with endomyocardial fibrosis and extended thrombi, as well as diffuse infarcted areas of the spleen, brain, and left kidney; this was in addition to the presence of a large infrarenal aortic saddle thrombus (see Figure 4). A microscopic examination of the bone marrow indicated preserved histological structure with a subtle increase in the number of eosinophilic cells.
Discussion
Myeloid neoplasm with FIP1L1-PDGFRα gene rearrangement is an uncommon cause of HE consisting of peripheral eosinophilia, defined as an elevation of the eosinophil count above 1.5 × 10³/µL, whereby the presence of the FIP1L1-PDGFRα fusion gene is identified via the FISH technique [10]. The genotype of these patients includes a 4q12 deletion that leads to fusion between the FIP1L1 and PDGFRα genes, which in turn results in the expression of an active tyrosine kinase that is involved in the proliferation of eosinophilic cells [11]. Despite limited available data, the fusion gene is reported to be found in 10% to 60% of hypereosinophilia cases [8,12]. Moreover, hypereosinophilic syndrome is associated with myeloid proliferation and is predominantly identified in male patients in up to 80-90% of the studied cohorts [5,8,9,13]. Still, molecular or pathological arguments supporting this evident deviation towards male patients have not yet been mentioned or explained.
A comprehensive clinical profile is difficult to define, since the number of reported cases is limited and the symptoms are usually not disease specific. However, an important aspect that needs to be emphasized is that the heterogeneity of symptoms depends on the involved organ. Once the diagnosis of HES is established, it is essential to differentiate among the main etiologies in order to offer patient-targeted therapy in a timely fashion [14]. After HES is determined based on the complete blood count or in the presence of high clinical suspicion, peripheral blood and bone marrow smears are mandatory components of the diagnostic algorithm [10,15]. Tailored imaging modalities (i.e., ultrasound exams, computed tomography, or magnetic resonance imaging) are useful for identifying organ involvement and other features (secondary sites, complications, etc.). Furthermore, they provide significant details even in the early phases of structural alteration and may be able to support a diagnosis in the absence of morphopathological confirmation.
Organ involvement was noticed in 19% to 91% of cases of established hypereosinophilia, depending on the number of patients included in the studies [13,16]. In these patients, tissue damage of the heart, as one of the prevailing organs, has a reported incidence of 34% to 75%, leading to significant mortality due to the extent of the lesions and the development of consequent, progressive heart failure [8,17].
The first description of cardiac involvement in a patient with hypereosinophilia was provided by Wilhelm Loeffler in 1936. Loeffler endocarditis (LE) is considered a rare but aggressive complication of hypereosinophilia, described by most authors as consisting of three stages. First, there is eosinophilic infiltration of the endocardial tissue, where the eosinophils release mediators and cytotoxic molecules with subsequent local arteriolar necrosis, followed by thrombus formation on the exposed affected endothelium. The third stage involves fibrotic remodeling that generates restrictive cardiomyopathy [3,18]. In our case, the laboratory findings reflected systemic inflammatory states and cardiac damage. Complementing the results of the clinical presentation and cardiac imaging studies, NT-proBNP is considered essential for an exhaustive diagnosis of heart failure.
Transthoracic and transesophageal echocardiography represent appropriate noninvasive tools for the purposes of evaluating cardiac structure and function. In addition, they also uncover the alterations that are induced by the hypereosinophilic state [19]. Thrombus formation may affect the subvalvular apparatus with secondary regurgitation or transvalvular occlusions by the large emboli. Endomyocardial infiltration and fibrosis lead to impaired diastolic function, which is the main reason for the clinical features of heart failure. According to the published reports, these individuals often have a restrictive echocardiographic pattern, most likely following the fibrotic phase. Polito et al., in their state-of-the-art review, acknowledged that endomyocardial thickening of the cardiac apex and/or the presence of ventricular thrombi were the most common echocardiographic findings that suggest LE, specifically in the context of clinical suspicion of HES [20]. In our case, TTE had a pivotal role in identifying the intraventricular masses that raised suspicion of thrombi, which was later supported by the CMR evaluation. The extension of the mass appeared to involve the papillary muscles and triggered the development of mitral regurgitation.
Cardiac magnetic resonance imaging is the mainstay noninvasive modality for obtaining an accurate description of endomyocardial structure. In addition, it provides the possibility of distinguishing between myocardial edema and fibrosis [21][22][23]. In the absence of endomyocardial biopsy (EMB), echocardiography and CMR play a pivotal role in the diagnostic workup of Loeffler endocarditis, leading to the initiation of suitable treatment [24]. In patients with confirmed HES, computed tomography is used to assess multiorgan involvement or the recognition of embolization originating from cardiac thrombi.
Although noninvasive cardiovascular imaging offers valuable information, EMB represents the gold standard in the diagnostic algorithm, despite its invasiveness and the possibility of it incurring multiple potential complications [25]. The main suggestive histologic alteration is represented by predominant interstitial infiltration into eosinophilic cells, whereby edema may be exposed in the acute phase. In our case, EMB could not be performed due to the substantial risk of embolization as a consequence of the localization and expansion of the mass.
Our paper emphasizes the necessity of multimodal imaging in order to acquire a more precise and refined diagnosis, as well as the necessity of determining a comprehensive description of multiorgan involvement. The patient-directed selection of imaging techniques, alongside laboratory tests, critically contributes to the etiological diagnosis of heart failure and, thereby, establishing an appropriate drug regimen.
Differential diagnoses between HES etiologies may be challenging, and the complexity of this entity can lead to delays in the initiation of treatment. The treatment of hypereosinophilic syndrome implies a correct diagnosis, but glucocorticoids are widely used as a first option [17]. One study, including 188 subjects who were diagnosed with HES of multiple etiologies, revealed that 85% of the patients who received corticosteroid monotherapy exhibited complete or partial responses; however, this is still considered a better option for the FIP1L1-PDGFRα-negative group of patients [26]. Isolated cases of HES with the FIP1L1-PDGFRα fusion gene have been described in the literature as positively responding to prednisone [27]. As in our case, steroids (initially intravenous prednisolone, followed by prednisone) were the initial therapeutic option until molecular testing was performed to ascertain the presence of FIP1L1-PDGFRα.
Imatinib mesylate, a tyrosine kinase inhibitor, is shown to be an effective treatment option for carriers of the FIP1L1-PDGFRα gene [10]. Small studies have suggested complete hematological or molecular remissions under the administration of imatinib after months or years of treatment [8,9,28]. Furthermore, the current data indicate that for certain patients, there is a dose-related long-term response, i.e., 400 mg rather than 100-200 mg per day of imatinib resulted in a better outcome [29]. Previous reported cases of myeloid neoplasm connected to the hybrid gene highlighted a complete hematological and molecular response, even in the presence of peripheral blast cells [30].
Without accurate diagnosis and targeted treatment, the prognosis of this rare disorder is poor, leading to death caused by systemic complications involving multiple organ damage. However, early detection and exhaustive evaluation to ascertain multiple organ involvement are crucial for the patient in the era of TKI treatment [31]. Relapse after a period of reduced dose or after stopping treatment supports the idea that imatinib has a suppressive effect on the gene rather than abolishing it [6]. Following TKI treatment, a 5-year survival rate of 93.5% was reported in a single Chinese center [9]. Moreover, 2-year survival was observed in a small cohort after ceasing TKI therapy, following complete molecular remission [8].
Data regarding the response to imatinib in patients with organ involvement are insufficient; however, certain case series have suggested a positive effect on partial or complete remission after weeks of treatment [9,32,33]. Cardiac alteration and the development of eosinophilic endocarditis may be identified in later phases, when a mortality of 35% to 50% is reported [23]. A limited number of cases with cardiac involvement have been reported to undergo remission based on imaging findings [34]. Furthermore, the early detection of cardiac involvement, i.e., while the structural alterations are still reversible, is essential for initiating hematological treatment and may have positive prognostic value [35]. Intracardiac thrombus regression was reported in 42.4% of the cases included in one literature review on Loeffler endocarditis, and an additional anticoagulation regimen was considered of foremost importance for the outcome and evolution of these patients [36].
Helbig et al [28] published the results of a 12-year follow-up of patients who expressed the FIP1L1-PDGFRα rearrangement and reported positive results subsequent to treatment with imatinib. However, the study also cited one case of death due to heart failure despite complete hematological and molecular responses [28]. Considering that there was an improvement in the hematological status of our patient after the initiation of imatinib, confirmed afterwards by the microscopic examination of the bone marrow during necropsy, there may be a similarity to the case described by Helbig et al [28]. The presence of heart failure and systemic complications led to progressive worsening in the clinical state of our patient. To the best of our knowledge, the scientific literature concerning HES is scarce, this case being one of the few documented cases of HES with positive FIP1L1-PDGFRα fusion and severe cardiac impairment. The absence of extensive research on this topic, as well as the reduced number of diagnosed and reported patients with HES associated with heart complications, constitutes a barrier to comparing different treatment approaches.
Molecular studies on the effects of imatinib raise awareness of the possibility of inducing other hematological disorders by inhibiting hematopoiesis [12]. Unfortunately, our case draws attention to the complications caused by the frailty of these patients, independent of eosinophil serum levels, which were normalized under suppressive therapy. A multidisciplinary approach and follow-up are essential for the purposes of complex evaluation and informing management decisions.
Conclusions
Hypereosinophilic syndrome represents a complex condition for which a multidisciplinary approach is required to establish the etiology, and this should be followed as soon as possible by the most appropriate treatment. The FIP1L1-PDGFRα gene rearrangement associated with myeloid neoplasm is one of the foremost causes of HES, and a positive response to imatinib as the first-line treatment for HES has been noted. Nevertheless, the development of heart failure and of Loeffler endocarditis with peripheral embolization secondary to cardiac involvement represents a critical complication that diminishes the demonstrated effectiveness of imatinib in the context of adult hypereosinophilic syndrome. This case report highlights the potentially fatal cardiac complications, independent of the progression of FIP1L1-PDGFRα myeloid leukemia.
Informed Consent Statement:
Written informed consent to publish this paper was obtained from the patient's relatives.
Data Availability Statement: Additional information can be obtained from the corresponding author upon reasonable request.
Conflicts of Interest:
The authors declare no conflict of interest.
Older Workers and Affective Job Satisfaction: Gender Invariance in Spain
Older employees' affective job satisfaction is an aspect that arouses growing interest among researchers. Among the affective measures of job satisfaction, the Brief Index of Affective Job Satisfaction (BIAJS) is one of the most used in the last decade. This study is intended to test the gender invariance of the BIAJS in two samples of workers over age 40 in Spain. The first sample, of 300 participants, and the second sample, of 399 participants, were used to test the gender invariance of the BIAJS. In comparison with the original English version, the Spanish version of the BIAJS has adequate psychometric properties. The findings allow us to consider it a valid and reliable tool to assess older people's affective expressions about their work. In addition, this study provides evidence of its factorial invariance as a function of gender.
INTRODUCTION
Older employees' affective job satisfaction is an aspect that arouses growing interest among researchers (Miao et al., 2016, 2017; Davies et al., 2017). In general, the influence of the affective facets (or aspects, dimensions) of work on other outcomes, such as the employee's individual performance and the achievement of organizational goals, is acknowledged (Saber, 2014; Cantarelli et al., 2016). As a result, interest in the evaluation of affective job satisfaction is as up-to-date nowadays as it was more than 60 years ago (Brayfield and Rothe, 1951), when the first emotional indicators of job satisfaction were proposed. However, since women's presence in the workplace is now very common, the affective evaluation of job satisfaction must be carried out with instruments whose invariance across genders cannot be questioned (Collins et al., 2014; Karin Andreassi et al., 2014). Finally, population aging and the prolongation of working life make the evaluation of older workers' job satisfaction a highly topical theme (Lytle et al., 2015).
Among the affective measures of job satisfaction, the Brief Index of Affective Job Satisfaction (BIAJS) is one of the most used in the last decade (Hurtado et al., 2017; Kottwitz et al., 2017). Previous studies adequately documented its temporal stability and its cross-national and cross-population invariance, by job level and work organization, but not its invariance as a function of gender (Thompson and Phua, 2012). Given that the literature documents the existence of gender-specific patterns of emotions, the evidence about gender invariance of job-related affective measures is a topic of interest (Judge et al., 2017). As a result, this database allows us to test the gender invariance of the BIAJS in two samples of workers over age 40 in Spain.
Affective Measures vs. Cognitive Measures of Job Satisfaction
Affective job satisfaction is a global and positive emotional response to one's work as a whole (Moorman, 1993). Affective job satisfaction is often considered synonymous with general or overall satisfaction and it is evaluated through items that ask people how much they like their work. In contrast, the evaluation of the cognitive facets of job satisfaction arises from the rational comparison of work conditions with a desired, expected, or promised standard (Locke, 1969;Moorman, 1993;Spector, 1997). Accordingly, it seems clear that emotional job satisfaction is a different construct, although partially related to cognitive satisfaction.
Whereas job satisfaction is the most researched construct in the psychology of work and organizations, there is still no consensus about its structure (Judge et al., 2017). The conceptual and statistical relations between cognitive and affective job satisfaction are a source of controversy (Thompson and Phua, 2012). Whereas some argue that emotional satisfaction directly reflects all the cognitive facets of satisfaction, others insist that it only covers some specific aspects of the cognitive measures (Judge and Ilies, 2004). In this sense, it would still be problematic to establish which facets should be included and what relative weight they would have in the overall assessment (Thompson and Phua, 2012).
Many authors criticize that job satisfaction is generally defined as emotional but evaluated in its cognitive aspects (Moorman, 1993). In this sense, the development of specifically affective measures of job satisfaction, such as the BIAJS, has received a major boost in the last decade (Pellegrini et al., 2010), as has the search for evidence on the invariance of this instrument (Thompson and Phua, 2012). However, cross-population equivalence tests have been based on nationality, job level, and job type, but not on gender. On the other hand, the studies have relied almost exclusively on English-speaking samples (Pu et al., 2017), whereas studies of affective job satisfaction with older Spanish-speaking employees are much rarer (Topa and Alcover, 2015). Lastly, although the initial drafting of the BIAJS items was based on interviews of older workers, with an age mode between 45 and 49 years, gender invariance with older workers was not tested.
So, the current data set includes the assessment of the BIAJS in a first sample of workers over 40 and in a second sample of workers over 44. It contributes to existing research by providing (a) data from two different samples of older workers, (b) correlations with other theoretically related variables (job involvement), and (c) sociodemographic data. It therefore enables researchers to further explore the associations between the BIAJS and older workers' attitudes and behaviors, as well as potential moderators of these associations (education, job type, etc.).
Ethics Statement
The Bio-ethical Committee of the third author's university, the National Distance Education University (UNED), approved the project in May 2014. The ethical standards for research of the Declaration of Helsinki, revised in Fortaleza (World Medical Association, 2013), were followed.
Sample 1
The first sample, of 300 participants, was split into two groups: males (164) and females (136). The males' group was composed of 164 participants with an average age of 53.01 years (SD = 4.94) and an age range of 44-63; educational attainment: 7.3% had completed basic education; 33.5% had completed secondary level studies, and 59.1% had finished university studies; regarding the females' group, the average age was 53.27 years (SD = 4.93) with an age range of 45-65; educational attainment: 5.1% had completed basic education; 32.4% had completed secondary level studies, and 62.5% had finished university studies.
Sample 2
The second sample, of 399 participants, was also split into two groups: males (243) and females (156); regarding the males' group, the average age was 61.51 years (SD = 2.42) with an age range of 45-66; educational attainment: 34.2% had completed basic education; 30.9% had completed secondary level studies, and 35% had finished university studies. The females' group was composed of 156 participants with an average age of 60.84 years (SD = 3.18) and an age range of 45-66; educational attainment: 32.3% had completed basic education, 28.3% had completed secondary level studies, and 39.3% had finished university studies.
Procedure
We developed the Spanish version of the BIAJS scale in two steps. First, the original seven-item BIAJS in English (Thompson and Phua, 2012) was translated to the Spanish context. Various experts in work and organizational psychology drafted the items based on the original version in English. Next, back-translation was carried out by a native English speaker, who was unaware of the contents of the original scale. We then compared the outcome with the original version of the questionnaire. Secondly, we administered the BIAJS scale to the two samples that make up the present study (Sample 1: workers between 44 and 65 and Sample 2: workers between 44 and 66). This step of the study was carried out by means of questionnaires distributed in different organizations by the collaborators of the research team, who performed the task after having received precise instructions to homogenize the administration procedures of the tests. The participants were informed of the goals of the study, of the anonymity of the data collected, and they expressed their consent, after which they completed the workbook containing the diverse scales of the study.
Instruments
The BIAJS (Thompson and Phua, 2012). This brief scale is a measurement instrument of affective job satisfaction with just one factor composed of four items: "I find real enjoyment in my job," "I like my job better than the average person," "Most days, I am enthusiastic about my job," and "I feel fairly well satisfied with my job." Responses are rated on a five-point Likert scale ranging from 1 = strongly disagree to 5 = strongly agree. Moreover, to reduce priming effects and acquiescent answers, the scale includes three distracter items: "My job is unusual" (between Items 1 and 2), "My job needs me to be fit" (between Items 2 and 3), and "My job is time consuming" (between Items 3 and 4) (see Annex I).
Job Involvement Scale (Kanungo, 1982): The Job Involvement Questionnaire was used, including four items. Examples of items are "Most of my interests are focused on my job" and "I consider my job to be very central to my existence." The response scale ranged from 1 (Totally disagree) to 5 (Totally agree). This scale reached α = 0.83 in a previous study (Topa and Alcover, 2015).
PRELIMINARY STATISTICAL ANALYSIS
We performed a preliminary factor analysis on the BIAJS (results reported in Table 1) to better explain their relevance. The statistical analyses were performed with SPSS 22 and AMOS 19 (Byrne, 2016). Previous assumptions were tested to ensure the applicability of exploratory factor analysis (EFA) and confirmatory factor analysis (CFA): large sample size, Kaiser-Meyer-Olkin (KMO) index and the Bartlett test of sphericity, multivariate normality, linearity, and correlation between variables (Tabachnick and Fidell, 2013).
For the confirmatory factor analysis, the adequacy of the model was tested with absolute fit indices, estimated by weighted least squares with adjusted mean and variance criteria; this threshold-estimation approach is recommended when five or fewer response categories are used (Beauducel and Herzberg, 2006). The indices were: the chi-square statistic (χ2) (Jöreskog and Sörbom, 1979); the adjusted goodness-of-fit index (AGFI), whose acceptable reference value is over 0.90 (Hu and Bentler, 1999); and the CFI, the NFI, and the Tucker-Lewis index (TLI), whose values in all three cases range between 0 and 1, with a reference value of 0.90 (Bollen, 1989; Bentler, 1990). Within the parsimony fit indices, the RMSEA and the RMR were used: the smaller their value, the better the fit, the reference value being 0.05 (Steiger and Lind, 1980; Cheung and Rensvold, 2002). All indices were calculated for the male and female groups and for the total sample. For the multigroup analysis between males and females, the fit indices used to test factorial invariance were applied according to Vandenberg and Lance (2000).
Sample 1: the BIAJS scale showed an internal consistency (Cronbach's alpha) of 0.884 for males and 0.847 for females. For males, item homogeneity was between 0.752 and 0.771; for females, it was a bit lower, between 0.608 and 0.726.
Sample 2: Cronbach's alpha was 0.880 for males and 0.849 for females, and item homogeneity was between 0.706-0.801 and 0.627-0.757 for males and females, respectively. Table 1 presents the descriptive analysis, with means, standard deviations, and correlations for each item of the scale and for both samples.
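For readers reusing the data set, the reliability statistics above are straightforward to recompute. The following is a minimal numpy sketch with simulated 5-point responses standing in for the actual Figshare files (the `scores` matrix is hypothetical, not the published data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def item_homogeneity(items):
    """Corrected item-total correlations: each item vs. the sum of the rest."""
    items = np.asarray(items, dtype=float)
    rest = items.sum(axis=1, keepdims=True) - items
    return np.array([np.corrcoef(items[:, j], rest[:, j])[0, 1]
                     for j in range(items.shape[1])])

# Hypothetical 1-5 Likert responses for the four BIAJS items (300 "workers")
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(300, 1))
scores = np.clip(base + rng.integers(-1, 2, size=(300, 4)), 1, 5)
print(cronbach_alpha(scores))
print(item_homogeneity(scores))
```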
For Sample 1 and for the male group, Bartlett's sphericity test obtained χ 2 = 343.228 (df = 6), p < 0.001, the KMO value was 0.838; on the other hand, for the female group, Bartlett's sphericity test obtained χ 2 = 229.503 (df = 6), p < 0.001, and the KMO value was 0.789. The factorial weights for males were between 0.878 and 0.835 and for females between 0.770 and 0.858.
Confirmatory factor analysis was carried out on the data by sample and group to replicate the single-factor solution of the BIAJS. Thus, the four items of the scale were expected to load on a single factor (see Table 2).
Finally, to test the criterion validity, the BIAJS was correlated with another theoretically related latent variable (construct). Significant correlations were found between affective job satisfaction and the Job Involvement Scale, r = 0.262, p < 0.01. For each group, the correlations were also significant: r = 0.276, p < 0.01, for males, and r = 0.242, p < 0.01, for females.
For Sample 2, the correlations were also significant, although weaker than in Sample 1. For the entire sample, the correlation was r = 0.189, p < 0.01; for males it was r = 0.179, p < 0.01, and for females, r = 0.205, p < 0.05.
SUGGESTIONS OF FUTURE AVENUES OF RESEARCH USING THIS DATA SET
This data set would be tied in with research into the influence of affective job satisfaction on workers' desirable attitudes and behaviors. Meta-analytical reviews and individual studies frequently explore differences as a function of sociodemographic characteristics, such as race (Koh et al., 2016), or employment conditions, multiple versus single job holders (Kottwitz et al., 2017), or hierarchical level (Hurtado et al., 2017). However, studies that analyze factor structure invariance as a function of gender are more uncommon in job-related affective well-being measures, such as that of Laguna et al. (2017). To our knowledge, there is no study that has examined this psychometric property in relation to the BIAJS in older Spanish workers. As the literature shows that job satisfaction is a relevant predictor associated with the planning of and decision to retire or to continue working (Topa et al., 2018), this analysis seems relevant.
In comparison with the original English version, the Spanish version of the BIAJS has adequate psychometric properties. The findings allow us to consider it a valid and reliable tool to assess older people's affective expressions about their work. In addition, this set of data provides evidence of its factorial invariance as a function of gender.
In this sense, according to the internal consistency of the scale, this data set has shown, for both samples and groups, reliability similar to that of the original instrument (Thompson and Phua, 2012); likewise, the homogeneity indices were in general higher than those of Thompson and Phua (2012). Regarding the fit indices, all were excellent for both samples and groups, showing factorial stability across gender for this scale. Also, the concurrent validity with the Job Involvement Scale (Kanungo, 1982) was tested for both samples and groups, with adequate findings. Some significant correlations were a bit weak, specifically for Sample 2 and for the females.
Concerning the size and representativeness of the samples, the limitations of this data are obvious, especially those due to the sampling procedure used. Moreover, all the data proceed from self-reports, which can include a source of uncontrolled error from the common variance. However, because the BIAJS is focused on subjective evaluations of one's job, deviations from external criteria would not necessarily indicate that the BIAJS is an invalid instrument. In summary, we conclude that the available dataset could be used to expand research on job satisfaction and aging among Spanish-speaking populations (Unanue et al., 2017), and empirically support further theoretical development. In this regard, future analyses could reuse the data to develop practical interventions for subgroups of older workers who show weaker job satisfaction in order to improve their desirable job attitudes.
DATA SET DESCRIPTION
The data set, called Aged Worker's Job Satisfaction, can be found in the Figshare repository and is accessible through the following hyperlink: https://figshare.com/articles/BIAJS_Brief_Index_of_Affective_Job_Satisfaction/5806743.
The file named Sample 1 (Sample1_300.sav) contains the individual responses of the 300 participants on the seven items of the BIAJS scale and the job involvement assessment, along with demographic data (age, gender, education, type of employment contract, and organizational seniority). The file named Sample 2 contains the corresponding responses and demographic data for the 399 participants of the second sample.
L-functions of Certain Exponential Sums over Finite Fields
In this paper, we completely determine the slopes and weights of the L-functions of an important class of exponential sums arising from analytic number theory. Our main tools include Adolphson-Sperber's work on toric exponential sums and Wan's decomposition theorems. One consequence of our main result is a sharp estimate of these exponential sums; another is an explicit counterexample to Adolphson-Sperber's conjecture on the weights of toric exponential sums.
INTRODUCTION
Let $\mathbb{F}_q$ be the finite field of $q$ elements with characteristic $p$. For each positive integer $k$, let $\mathbb{F}_{q^k}$ denote the degree $k$ finite extension of $\mathbb{F}_q$ and $\mathbb{F}_{q^k}^*$ the set of non-zero elements of $\mathbb{F}_{q^k}$. Let $\psi : \mathbb{F}_p \to \mathbb{C}^*$ be a fixed nontrivial additive character over $\mathbb{F}_p$. In this paper, we are concerned with estimating the following exponential sum:
$$S_k(\vec a) = S_k(a_1, a_2, \dots, a_6) = \sum_{\substack{a_5x_1x_2 + a_6x_3x_4 = 1 \\ x_i \in \mathbb{F}_{q^k}^*}} \psi\big(\mathrm{Tr}_k(a_1x_1 + a_2x_2 + a_3x_3 + a_4x_4)\big),$$
where $\mathrm{Tr}_k : \mathbb{F}_{q^k} \to \mathbb{F}_p$ is the trace map and $a_i \in \mathbb{F}_q^*$, $i = 1, 2, \dots, 6$. In the past few decades, there has been extensive study and application of the exponential sum $S_k(\vec a)$. For instance, using Deligne's theorem on weights, Birch and Bombieri proved that if $p > c_0$, then $|S_k(\vec a)| \le c_1 q^{3k/2}$ for some absolute constants $c_0$, $c_1$ [BB85]. Their estimate is a crucial ingredient in Heath-Brown's work on the divisor function $d_3(n)$ in arithmetic progressions [HB86]. Birch and Bombieri's result also plays a vital role in Friedlander and Iwaniec's work on estimating certain averages of incomplete Kloosterman sums in application to the divisor problem of $d_3(n)$ [FI85]. Relying on Friedlander-Iwaniec's and Birch-Bombieri's results on $S_k(\vec a)$, Zhang obtained a bound for the error terms in his work on the twin prime conjecture [Zha14]. A consequence of our theorem is the sharp estimate: for all $p$, $k$ and $\vec a$,
$$|S_k(\vec a)| \le 6q^{3k/2} + q^k + 1.$$
More precisely, the nontrivial part of the $L$-function attached to $S_k(\vec a)$ factors as $\prod_{i=1}^{6}(1 - \alpha_i T)$, where $\alpha_i \in \overline{\mathbb{Q}}$ with complex absolute value $|\alpha_i| = q^{3/2}$. If we view the reciprocal roots $\alpha_i$ as $p$-adic numbers and enumerate them with respect to their $q$-adic slopes, their $p$-adic norms are determined as well.

Corollary 1.2. Notations as above. For all $p$, $k$ and $\vec a$, $|S_k(\vec a)| \le 6q^{3k/2} + q^k + 1$.
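The final bound is elementary to check numerically for a small prime. Below is a brute-force sketch (our own illustration, with the hypothetical choice $a_i = 1$, $k = 1$, $q = p$) that evaluates $S_1(\vec a)$ directly from the definition and compares its absolute value with the bound:

```python
import cmath
from itertools import product

def S_1(p, a):
    """Brute-force S_1(a) over F_p: sum of psi(a1*x1+...+a4*x4) over x_i in F_p^*
    subject to the constraint a5*x1*x2 + a6*x3*x4 = 1 (here k = 1, q = p)."""
    a1, a2, a3, a4, a5, a6 = a
    psi = lambda t: cmath.exp(2j * cmath.pi * (t % p) / p)  # additive character
    total = 0
    for x1, x2, x3, x4 in product(range(1, p), repeat=4):
        if (a5 * x1 * x2 + a6 * x3 * x4) % p == 1:
            total += psi(a1*x1 + a2*x2 + a3*x3 + a4*x4)
    return total

p = 7
s = S_1(p, (1, 1, 1, 1, 1, 1))
print(abs(s), "<=", 6 * p**1.5 + p + 1)   # the claimed sharp estimate
```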
The exponential sum $S_k(\vec a)$ is closely related to the toric exponential sum $S_k^*(g)$ of the five-variable Laurent polynomial $g = a_1x_1 + a_2x_2 + a_3x_3 + a_4x_4 + x_5(a_5x_1x_2 + a_6x_3x_4 - 1)$, studied in detail in Section 3. The following equation describes the relationship between $S_k(\vec a)$ and $S_k^*(g)$:
$$S_k^*(g) = q^k S_k(\vec a) - 1.$$
Based on this relationship, it is easy to check that
$$L(\vec a, T) = \frac{1}{1 - T/q}\, L^*(g, T/q).$$
So it suffices to evaluate the nontrivial reciprocal roots of $L^*(g,T)$, denoted by $\beta_i$ ($1 \le i \le 8$). To obtain the slopes, we use Wan's facial decomposition theorem to compute the $q$-adic Newton polygon of $L^*(g,T)$. Wan's boundary decomposition theorem, together with the slope information we obtain, leads to an explicit expression of $L^*(g,T)$ whose nontrivial complex reciprocal roots satisfy $\beta_i \in \mathbb{Z}[\zeta_p]$ and $|\beta_i| = q^{5/2}$, $i = 1, 2, \dots, 6$. Our main theorem follows from this result. For the weight computation of $L^*(g,T)$, one can also apply Denef-Loeser's weight formula obtained using intersection cohomology [DL91]. This formula becomes combinatorially complicated to use when the dimension $n \ge 5$ and the polytope is not simple at the origin, which is precisely the case for our 5-dimensional example $g$. In contrast to Wan's decomposition theorems, Denef-Loeser's formula cannot determine the two real roots. Moreover, by splitting the exponential sum $S_k^*(g)$ into a product of two Kloosterman sums, one can also obtain the real roots and weight information using Katz's calculation of the local monodromy of the Kloosterman sheaf, which is more advanced [Kat88].
We remark that there is a much simpler weight formula conjectured by Adolphson-Sperber [AS90], which is true in many interesting cases, including all low-dimensional cases n ≤ 4. This conjectural formula, however, was disproved by Denef and Loeser [DL91], who showed the existence (without construction) of a counterexample in dimension 5. We also tested our example using the Adolphson-Sperber formula and found that it disagrees with the result obtained using Wan's decomposition theorems. This means that our 5-dimensional example provides the first explicit construction of a counterexample to the Adolphson-Sperber conjecture.
This paper is organized as follows. In section 2, we review some technical methods and theorems including Adolphson-Sperber's theorems and Wan's decomposition theorems. In section 3, we prove the main results. In the appendix, we list the vertices of all faces of the polytope that are needed in both the slope computation and the weight computation. These were obtained via a simple computer calculation.
2. PRELIMINARIES

2.1. Rationality of the generating L-function. For a Laurent polynomial $f \in \mathbb{F}_q[x_1^{\pm 1}, \dots, x_n^{\pm 1}]$, the associated exponential sum is defined by
$$S_k^*(f) = \sum_{x_i \in \mathbb{F}_{q^k}^*} \psi\big(\mathrm{Tr}_k(f(x_1, \dots, x_n))\big),$$
where $\mathrm{Tr}_k : \mathbb{F}_{q^k} \to \mathbb{F}_p$ is the trace map and $\psi : \mathbb{F}_p \to \mathbb{C}^*$ is a fixed nontrivial additive character. In analytic number theory, it is a classical problem to give a good estimate for the valuations of $S_k^*(f)$. In order to directly compute the absolute values of the exponential sums, we usually study the generating $L$-function of $S_k^*(f)$, defined as
$$L^*(f, T) = \exp\Big(\sum_{k=1}^{\infty} S_k^*(f)\,\frac{T^k}{k}\Big).$$
By a theorem of Dwork-Bombieri-Grothendieck [Dwo62, Gro66], the generating $L$-function is a rational function,
$$L^*(f, T) = \frac{\prod_i (1 - \alpha_i T)}{\prod_j (1 - \beta_j T)},$$
where all the reciprocal zeros and poles are non-zero algebraic integers. After taking logarithmic derivatives, we have the formula
$$S_k^*(f) = \sum_j \beta_j^k - \sum_i \alpha_i^k.$$
This formula implies that the zeros and poles of the generating $L$-function contain the critical information about the exponential sums.
From Deligne's theorem on the Riemann hypothesis [Del80], the complex absolute values of the reciprocal zeros and poles are bounded as follows:
$$|\alpha_i| \le q^{n/2}, \qquad |\beta_j| \le q^{n/2}.$$
For non-archimedean absolute values, Deligne [Del80] proved that $|\alpha_i|_\ell = |\beta_j|_\ell = 1$ when $\ell$ is a prime and $\ell \ne p$. Depending on Deligne's integrality theorem [Del80], the $p$-adic absolute values satisfy
$$|\alpha_i|_p = q^{-r_i}, \qquad |\beta_j|_p = q^{-s_j}, \qquad r_i, s_j \in \mathbb{Q} \cap [0, n],$$
while the complex absolute values can be written as $|\alpha_i| = q^{u_i/2}$ and $|\beta_j| = q^{v_j/2}$ with integers $u_i, v_j$. The integer $u_i$ (resp. $v_j$) is called the weight of $\alpha_i$ (resp. $\beta_j$), and the rational number $r_i$ (resp. $s_j$) is called the slope of $\alpha_i$ (resp. $\beta_j$). In the past few decades, there has been tremendous interest in determining the weights and slopes of generating $L$-functions. Without any further condition on the Laurent polynomial $f$ or the prime $p$, it is even hard to determine the number of reciprocal zeros and poles. Adolphson and Sperber [AS89] proved that under a suitable smoothness condition on $f$ in $n$ variables, the associated $L$-function $L^*(f,T)^{(-1)^{n-1}}$ is a polynomial, i.e., $L^*(f,T)$ is a polynomial or the inverse of a polynomial. In this case, the Newton polygon can be used to determine the slopes of the reciprocal roots. Adolphson and Sperber [AS89] also proved that if $L^*(f,T)^{(-1)^{n-1}}$ is a polynomial, its Newton polygon has a lower bound called the Hodge polygon. The basic definitions of the Newton polygon and the Hodge polygon will be discussed in the next subsection.
2.2. Newton polygon and Hodge polygon. Write $f = \sum_{j=1}^{J} a_j x^{V_j}$ with $a_j \in \mathbb{F}_q^*$ and exponent vectors $V_j \in \mathbb{Z}^n$. The Newton polyhedron of $f$, $\Delta(f)$, is defined to be the convex closure in $\mathbb{R}^n$ generated by the origin and the lattice points $V_j$ ($1 \le j \le J$). For a face $\delta \subset \Delta(f)$, let $f_\delta = \sum_{V_j \in \delta} a_j x^{V_j}$ be the restriction of $f$ to $\delta$. We then define a certain smoothness condition for a Laurent polynomial.
Definition 2.1. A Laurent polynomial $f$ is called non-degenerate if for each closed face $\delta$ of $\Delta(f)$ of arbitrary dimension which does not contain the origin, the $n$ partial derivatives $\partial f_\delta/\partial x_1, \dots, \partial f_\delta/\partial x_n$ have no common zero with $x_1 \cdots x_n \ne 0$ over the algebraic closure of $\mathbb{F}_q$.
Generally, we are more interested in the non-degenerate Laurent polynomials whose generating L-functions are polynomials or the inverse of polynomials.
Deligne's integrality theorem [Del80] implies that the $p$-adic absolute values of the reciprocal roots are powers of $q$ with rational exponents. For simplicity, we normalize the $p$-adic absolute value so that $|q|_p = q^{-1}$. We can then compute the $q$-adic Newton polygon of $L^*(f,T)^{(-1)^{n-1}}$ to determine the $q$-adic slopes of its reciprocal roots. The $q$-adic Newton polygon is defined as follows.
Definition 2.3. Let $L(T) = \sum_{k=0}^{n} a_k T^k$ with coefficients $a_k \in \overline{\mathbb{Q}}_p$, where $\overline{\mathbb{Q}}_p$ is the algebraic closure of $\mathbb{Q}_p$. The $q$-adic Newton polygon of $L(T)$ is defined to be the lower convex closure of the set of points $\{(k, \mathrm{ord}_q(a_k)) \mid k = 0, 1, \dots, n\}$ in $\mathbb{R}^2$.

Here $\mathrm{ord}_q$ denotes the standard $q$-adic ordinal on $\overline{\mathbb{Q}}_p$, normalized by $\mathrm{ord}_q(q) = 1$. The following lemma [Kob84] describes the relationship between the shape of a $q$-adic Newton polygon and the $q$-adic valuations of the related reciprocal roots.
Lemma 2.4. In the above notation, let $L(T) = (1 - \alpha_1 T) \cdots (1 - \alpha_n T)$ be the factorization of $L(T)$ in terms of its reciprocal roots $\alpha_i \in \overline{\mathbb{Q}}_p$, and let $\lambda_i = \mathrm{ord}_q\,\alpha_i$. If $\lambda$ is the slope of a segment of the $q$-adic Newton polygon with horizontal length $l$, then precisely $l$ of the $\lambda_i$ are equal to $\lambda$.
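The lower convex closure in Definition 2.3 and the slope data of Lemma 2.4 are easy to compute mechanically. The following self-contained sketch (ours, not from [Kob84]) extracts the vertices of the $q$-adic Newton polygon from a list of points $(k, \mathrm{ord}_q(a_k))$ and reads off the slopes with their horizontal lengths:

```python
from fractions import Fraction

def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def newton_polygon(points):
    """Vertices of the lower convex hull of the points, left to right."""
    pts, hull = sorted(points), []
    for p in pts:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()   # previous vertex lies on or above the new lower edge
        hull.append(p)
    return hull

def slopes(hull):
    """(slope, horizontal length) of each edge; by Lemma 2.4 the length
    counts the reciprocal roots with that q-adic ordinal."""
    return [(Fraction(y2 - y1, x2 - x1), x2 - x1)
            for (x1, y1), (x2, y2) in zip(hull, hull[1:])]

# hypothetical ordinals ord_q(a_k) for k = 0..4
print(slopes(newton_polygon([(0, 0), (1, 0), (2, 1), (3, 3), (4, 6)])))
```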
Assume $f$ is a non-degenerate Laurent polynomial in $n$ variables, so that by Theorem 2.2 the $L$-function $L^*(f,T)^{(-1)^{n-1}}$ is a polynomial. Let NP$(f)$ denote the $q$-adic Newton polygon of $L^*(f,T)^{(-1)^{n-1}}$. It is in general hard to compute NP$(f)$ directly. Adolphson and Sperber proved that NP$(f)$ has a topological lower bound called the Hodge polygon, which is easier to calculate [AS89]. A general strategy is therefore to first compute this lower bound and then determine when the Newton polygon coincides with it.
Let $\Delta$ be an $n$-dimensional integral polytope in $\mathbb{R}^n$ containing the origin, and define $C(\Delta)$ to be the cone generated by $\Delta$ in $\mathbb{R}^n$. For any point $u \in \mathbb{R}^n$, the weight function $w(u)$ is the smallest non-negative real number $c$ such that $u \in c\Delta$; let $w(u) = \infty$ if no such $c$ exists. Assume $\delta$ is a codimension 1 face of $\Delta$ not containing the origin, and let $D(\delta)$ be the least common multiple of the denominators of the coefficients in the implicit equation of $\delta$, normalized to have constant term 1. We define the denominator of $\Delta$ to be the least common multiple
$$D = D(\Delta) = \mathrm{lcm}_{\delta}\, D(\delta),$$
where $\delta$ runs over all the codimension 1 faces of $\Delta$ that do not contain the origin. It is easy to check that $w(u) \in \frac{1}{D}\mathbb{Z}_{\ge 0} \cup \{\infty\}$ for $u \in \mathbb{Z}^n$. For a non-negative integer $k$, let
$$W_\Delta(k) = \#\{u \in \mathbb{Z}^n \mid w(u) = k/D\}$$
be the number of lattice points in $\mathbb{Z}^n$ with weight $k/D$.
Definition 2.5 (Hodge number). Let $\Delta$ be an $n$-dimensional integral polytope containing the origin in $\mathbb{R}^n$. For a non-negative integer $k$, the $k$-th Hodge number of $\Delta$ is defined to be the alternating sum
$$H_\Delta(k) = \sum_{i=0}^{n} (-1)^i \binom{n}{i} W_\Delta(k - iD).$$
It is easy to check that $H_\Delta(k) = 0$ if $k > nD$. Adolphson and Sperber [AS89] proved that $H_\Delta(k)$ coincides with the usual Hodge number in the toric hypersurface case $D = 1$. Based on the Hodge numbers, we define the Hodge polygon of a given polyhedron $\Delta \subset \mathbb{R}^n$ as follows.
Definition 2.6 (Hodge polygon). The Hodge polygon HP$(\Delta)$ of $\Delta$ is the lower convex polygon in $\mathbb{R}^2$ with vertices $(0,0)$ and
$$\Big(\sum_{i=0}^{k} H_\Delta(i),\ \frac{1}{D}\sum_{i=0}^{k} i\,H_\Delta(i)\Big), \qquad k = 0, 1, \dots, nD.$$
Here the horizontal length $H_\Delta(k)$ of the slope $k/D$ side represents the number of lattice points of weight $k/D$ in a certain fundamental domain corresponding to a basis of the $p$-adic cohomology space used to compute the $L$-function. Adolphson and Sperber constructed the Hodge polygon and proved that it is a lower bound of the corresponding Newton polygon.
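For concrete polytopes these counts can be computed mechanically. The sketch below assumes the alternating-sum form of the Hodge numbers given in Definition 2.5 and recovers the classical example $f = x + 1/x$ (the Kloosterman polytope $\Delta = [-1, 1]$, $n = D = 1$), whose Hodge polygon has slopes 0 and 1:

```python
from math import comb

def hodge_numbers(W, n, D=1):
    """H(k) = sum_i (-1)^i * C(n, i) * W(k - i*D) for 0 <= k <= n*D,
    assuming the alternating-sum definition of the Hodge numbers."""
    Wk = lambda k: W(k) if k >= 0 else 0
    return [sum((-1)**i * comb(n, i) * Wk(k - i*D) for i in range(n + 1))
            for k in range(n * D + 1)]

# Lattice points of weight k in the cone C([-1, 1]) = R are u = +-k, so:
W_kloosterman = lambda k: 1 if k == 0 else 2
print(hodge_numbers(W_kloosterman, n=1))   # [1, 1]: degree 2, slopes 0 and 1
```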
In order to study the ordinary property, we will apply Wan's decomposition theorems [Wan93]. Evidently, the ordinary property of a Laurent polynomial depends on its Newton polyhedron $\Delta$; Wan's theorems decompose the polyhedron $\Delta$ into small pieces which are much easier to deal with [Wan93]. We give a brief introduction to Wan's facial decomposition theorem and boundary decomposition theorem in Subsection 2.3.
Wan's decomposition theorems.
2.3.1. Facial decomposition theorem. In this paper, we use the facial decomposition theorem to cut the polyhedron into small simplices. For each simplex, we can apply criteria to determine non-degeneracy and ordinarity.
Theorem 2.9 (Facial decomposition theorem [Wan93]). Let f be a non-degenerate Laurent polynomial over F q . Assume ∆ = ∆(f ) is n-dimensional and δ 1 , . . . , δ h are all the codimension 1 faces of ∆ which don't contain the origin. Let f δ i denote the restriction of f to δ i . Then f is ordinary if and only if f δ i is ordinary for 1 ≤ i ≤ h.
In order to describe the boundary decomposition, we first express the L-function in terms of the Fredholm determinant of an infinite Frobenius matrix.
2.3.2. Dwork's trace formula. Let $p$ be a prime and $q = p^a$ for some positive integer $a$. Let $\mathbb{Q}_p$ denote the field of $p$-adic numbers and $\Omega$ the completion of the algebraic closure of $\mathbb{Q}_p$. Pick a fixed primitive $p$-th root of unity $\zeta_p$ in $\Omega$. In $\mathbb{Q}_p(\zeta_p)$, choose a fixed element $\pi$ satisfying
$$\sum_{m=0}^{\infty} \frac{\pi^{p^m}}{p^m} = 0 \qquad \text{and} \qquad \mathrm{ord}_p\,\pi = \frac{1}{p-1}.$$
By Krasner's lemma, it is easy to check that $\mathbb{Q}_p(\pi) = \mathbb{Q}_p(\zeta_p)$. Let $K$ be the unramified extension of $\mathbb{Q}_p$ of degree $a$, and let $\Omega_a$ be the compositum of $\mathbb{Q}_p(\zeta_p)$ and $K$.
Define the Frobenius automorphism $\tau \in \mathrm{Gal}(\Omega_a/\mathbb{Q}_p(\pi))$ by lifting the Frobenius automorphism of the residue field. In Dwork's terminology, the splitting function is
$$\theta(t) = E_p(\pi t) = \sum_{m=0}^{\infty} \lambda_m \pi^m t^m,$$
where $\lambda_m$ is the $m$-th coefficient of the Artin-Hasse exponential series $E_p(t)$; when $t = 1$, $\theta(1)$ can be identified with $\zeta_p$ in $\Omega$. Consider
$$f = \sum_{j=1}^{J} \bar a_j x^{V_j},$$
where $V_j \in \mathbb{Z}^n$ and $\bar a_j \in \mathbb{F}_q^*$. Let $a_j$ be the Teichmüller lifting of $\bar a_j$ in $\Omega$, satisfying $a_j^q = a_j$. Let
$$F(f, x) = \prod_{j=1}^{J} \theta(a_j x^{V_j}) = \sum_r F_r(f)\,x^r.$$
The coefficients of $F(f, x)$ are given by
$$F_r(f) = \sum \prod_{j=1}^{J} \lambda_{u_j}\,(\pi a_j)^{u_j},$$
where the sum is over all the solutions of the linear system $\sum_{j=1}^{J} u_j V_j = r$ with $u_j \in \mathbb{Z}_{\ge 0}$.
Assume $\Delta = \Delta(f)$. Let $L(\Delta) = \mathbb{Z}^n \cap C(\Delta)$ be the set of lattice points in the closed cone generated by the origin and $\Delta$; for a point $r \in \mathbb{R}^n$, the weight function $w(r)$ is as defined in Subsection 2.2. In Dwork's terminology, the infinite semilinear Frobenius matrix $A_1(f) = (a_{r,s}(f))$, with $r, s \in L(\Delta)$, is the matrix whose rows and columns are indexed by the lattice points of $L(\Delta)$, ordered with respect to their weights, with entries built from the coefficients $F_{ps-r}(f)$. Based on the fact that $\mathrm{ord}_p F_r(f) \ge \frac{w(r)}{p-1}$, the entries of $A_1(f)$ satisfy the corresponding weight estimates. Let $\xi$ be an element of $\Omega$ satisfying $\xi^D = \pi^{p-1}$. Then $A_1(f)$ can be written in block form according to weight, and the $q$-adic Newton polygon of $\det(I - T A_a(f))$, where $A_a(f) = A_1(f)\,A_1(f)^{\tau} \cdots A_1(f)^{\tau^{a-1}}$ is the corresponding linear Frobenius matrix, has a natural lower bound $P(\Delta)$, which can be identified with the chain-level version of the Hodge polygon.
By Dwork's trace formula [Dwo60], we can identify the associated $L$-function with a product of powers of the Fredholm determinant [Wan04]:
$$L^*(f, T)^{(-1)^{n-1}} = \prod_{i=0}^{n} \det\big(I - T q^i A_a(f)\big)^{(-1)^i \binom{n}{i}}.$$

Proposition 2.12 ([Wan04]). Notations as above. Assume $f$ is non-degenerate with $\Delta = \Delta(f)$. Then NP$(f)$ = HP$(\Delta)$ if and only if the $q$-adic Newton polygon of $\det(I - T A_a(f))$ coincides with its lower bound $P(\Delta)$.
2.3.3. Boundary decomposition theorem. The boundary decomposition $B(\Delta)$ is the decomposition of the cone $C(\Delta)$ into the open cones generated by the open faces of $\Delta$, where $\Delta$ is an $n$-dimensional integral convex polyhedron in $\mathbb{R}^n$ containing the origin and $C(\Delta)$ is the cone generated by $\Delta$ in $\mathbb{R}^n$. If the origin is a vertex of $\Delta$, then it is the unique 0-dimensional open cone in $B(\Delta)$. Recall that $A_1(f) = (a_{r,s}(f))$ is the infinite semilinear Frobenius matrix whose rows and columns are indexed by the lattice points in $L(\Delta)$. For $\Sigma \in B(\Delta)$, we define $A_1(\Sigma, f)$ to be the submatrix of $A_1(f)$ with $r, s \in \Sigma$. Let $f_\Sigma$ be the restriction of $f$ to the closure of $\Sigma$.
After a suitable permutation of the lattice points, the infinite semilinear Frobenius matrix can be written in block triangular form
$$A_1(f) = \big(B_{ij}\big)_{0 \le i, j \le h}, \qquad B_{ij} = 0 \ \text{for } i > j,$$
with diagonal blocks indexed by the cones of $B(\Delta)$. Then $\det(I - T A_1(f)) = \prod_{i=0}^{h} \det(I - T B_{ii})$, and we have the boundary decomposition theorem.
Consider the boundary decomposition $B(\Delta)$, and let $\Sigma_0$ denote its cone of smallest dimension. If the origin is a vertex of $\Delta$, then $\Sigma_0$ is the origin and coincides with its own closure. The restriction polynomial is then the constant $f_{\Sigma_0} = c$, where $c$ is the constant term of $f$; by the boundary decomposition theorem, the corresponding trivial factor of the Fredholm determinant is easily isolated. Expressing the $k$-th exponential sum through the reciprocal roots, where $\mathrm{Tr}_k$ is the trace map from $\mathbb{F}_{q^k}$ to $\mathbb{F}_p$, the corresponding $L$-function factors accordingly. Combining this with Theorem 2.2, we obtain $\alpha_i \in \mathbb{Z}[\zeta_p]$ with $|\alpha_i| \le q^{n/2}$, and the corollary then follows.
2.4. Diagonal local theory. In this subsection, we give non-degeneracy and ordinariness criteria for diagonal Laurent polynomials, whose Newton polyhedra are simplices. A Laurent polynomial $f \in \mathbb{F}_q[x_1^{\pm 1}, \dots, x_n^{\pm 1}]$ is called diagonal if $f$ has exactly $n$ non-constant terms and $\Delta(f)$ is an $n$-dimensional simplex in $\mathbb{R}^n$.
Let $f = \sum_{j=1}^{n} a_j x^{V_j}$ be a diagonal Laurent polynomial over $\mathbb{F}_q$, where $a_j \in \mathbb{F}_q^*$ and $V_j = (v_{1j}, \dots, v_{nj}) \in \mathbb{Z}^n$, $j = 1, 2, \dots, n$. Let $\Delta = \Delta(f)$. Then the vertex matrix of $\Delta$ is defined to be
$$M(\Delta) = (V_1, \dots, V_n),$$
where the $i$-th column is the $i$-th exponent vector of $f$. If $f$ is diagonal, $M(\Delta)$ is invertible.
Let $S(\Delta)$ be the solution set of the linear system
$$M(\Delta)\,u \equiv 0 \pmod{\mathbb{Z}^n}, \qquad u \in (\mathbb{Q} \cap [0, 1))^n.$$
It is easy to prove that $S(\Delta)$ is an abelian group and that its order is given by $|S(\Delta)| = |\det M(\Delta)|$. By the fundamental structure theorem of finite abelian groups, we decompose $S(\Delta)$ into a direct product of invariant factors,
$$S(\Delta) \cong \mathbb{Z}/d_1\mathbb{Z} \times \cdots \times \mathbb{Z}/d_n\mathbb{Z},$$
where $d_i \mid d_{i+1}$ for $i = 1, 2, \dots, n-1$. By the Stickelberger theorem for Gauss sums, we have the following ordinariness criterion for a non-degenerate Laurent polynomial [Wan04].
Theorem 2.18 ([Wan04]). Suppose $f$ is a non-degenerate diagonal Laurent polynomial with $\Delta = \Delta(f)$, and let $d_n$ be the largest invariant factor of $S(\Delta)$. If $p \equiv 1 \pmod{d_n}$, then $f$ is ordinary at $p$.
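In practice, the invariant factors $d_1 \mid \cdots \mid d_n$ are the diagonal entries of the Smith normal form of the vertex matrix $M(\Delta)$, so the criterion is easy to apply. A sketch using sympy (the diagonal polynomial $f = a_1x_1x_2 + a_2x_1x_2^3$ is a hypothetical illustration; its non-degeneracy still has to be checked separately):

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Columns are the exponent vectors V_1 = (1,1), V_2 = (1,3) of f.
M = Matrix([[1, 1],
            [1, 3]])

S = smith_normal_form(M, domain=ZZ)
print(S)                    # diag(d_1, ..., d_n) with d_i | d_{i+1}
print(abs(M.det()))         # |S(Delta)| = |det M(Delta)| = 2

d_n = abs(S[-1, -1])        # largest invariant factor
print([p for p in (3, 5, 7, 11, 13) if p % d_n == 1])
# every prime p = 1 (mod d_n) makes f ordinary, by the criterion above
```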
PROOF OF THE MAIN THEOREM
We prove the main theorem in this section. Recall that the exponential sum $S_k(\vec a)$ has the expression
$$S_k(\vec a) = \sum_{\substack{a_5x_1x_2 + a_6x_3x_4 = 1 \\ x_i \in \mathbb{F}_{q^k}^*}} \psi\big(\mathrm{Tr}_k(a_1x_1 + a_2x_2 + a_3x_3 + a_4x_4)\big),$$
where $\psi : \mathbb{F}_p \to \mathbb{C}^*$ is a nontrivial additive character, $\mathrm{Tr}_k : \mathbb{F}_{q^k} \to \mathbb{F}_p$ is the trace map and $a_j \in \mathbb{F}_q^*$. Our main purpose is to determine the weights and slopes of the reciprocal roots of the generating $L$-function corresponding to $S_k(\vec a)$ using Wan's decomposition theorems. Let $g \in \mathbb{F}_q[x_1^{\pm 1}, \dots, x_5^{\pm 1}]$ be the Laurent polynomial defined by
$$g = a_1x_1 + a_2x_2 + a_3x_3 + a_4x_4 + x_5\,(a_5x_1x_2 + a_6x_3x_4 - 1).$$
The associated exponential sum of $g$ is
$$S_k^*(g) = \sum_{x_i \in \mathbb{F}_{q^k}^*} \psi\Big(\mathrm{Tr}_k\big(a_1x_1 + a_2x_2 + a_3x_3 + a_4x_4 + x_5(a_5x_1x_2 + a_6x_3x_4 - 1)\big)\Big).$$
Let $L(\vec a, T)$ be the generating $L$-function of $S_k(\vec a)$ and $L^*(g, T)$ the generating $L$-function of $S_k^*(g)$. Summing over $x_5 \in \mathbb{F}_{q^k}^*$, we have the following relationship between $S_k(\vec a)$ and $S_k^*(g)$:
$$S_k^*(g) = q^k S_k(\vec a) - 1.$$
So to estimate the reciprocal roots of $L(\vec a, T)$, it is sufficient to evaluate $L^*(g, T)$. In order to obtain the weights and slopes of the reciprocal roots of $L(\vec a, T)$, we consider the Newton polygon of $L(\vec a, T)$.
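The relationship $S_k^*(g) = q^k S_k(\vec a) - 1$ is easy to verify numerically for a small prime. The sketch below (our own check, with the illustrative choice $a_i = 1$, $k = 1$, $q = p$) confirms it up to floating-point error:

```python
import cmath
from itertools import product

p = 5
psi = lambda t: cmath.exp(2j * cmath.pi * (t % p) / p)
a1 = a2 = a3 = a4 = a5 = a6 = 1

# S*_1(g): all five variables range over F_p^*
S_star = sum(psi(a1*x1 + a2*x2 + a3*x3 + a4*x4
                 + x5*(a5*x1*x2 + a6*x3*x4 - 1))
             for x1, x2, x3, x4, x5 in product(range(1, p), repeat=5))

# S_1(a): the constrained four-variable sum
S = sum(psi(a1*x1 + a2*x2 + a3*x3 + a4*x4)
        for x1, x2, x3, x4 in product(range(1, p), repeat=4)
        if (a5*x1*x2 + a6*x3*x4) % p == 1)

print(abs(S_star - (p * S - 1)))   # ~0: the relation holds
```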
3.1. Slopes of $L(\vec a, T)$.
Hereinafter, let $\Delta = \Delta(g)$ denote the Newton polyhedron corresponding to the Laurent polynomial $g$. We claim that $\dim \Delta = 5$ and that $\Delta$ has 8 vertices.
We now turn to the faces of ∆. Let δ be a face of ∆ not containing the origin. Note that dim δ = rank of the vertex matrix M(δ). As a face of ∆, δ satisfies the criterion in Proposition 3.1.
Proposition 3.1. Let $\Delta$ be an $n$-dimensional convex polyhedron in $\mathbb{R}^n$ and let $\delta$ be a codimension 1 face of $\Delta$ not containing the origin. Let $h(x) = \sum_{i=1}^{n} e_i x_i = 1$ be the equation of $\delta$, where the $e_i$ are uniquely determined rational numbers, not all zero. Then for any vertex $V$ of $\Delta$, we have $h(V) \le 1$.
Restricted by Proposition 3.1, $\Delta$ has 9 codimension 1 faces not containing the origin, denoted by $\delta_i$ ($1 \le i \le 9$). The equations of the $\delta_i$ and the corresponding vertices (see the appendix) can be obtained directly by a computer calculation.
Recall that the restriction of $g$ to $\delta_i$ is the sub-polynomial $g_{\delta_i}$ consisting of the terms of $g$ supported on $\delta_i$. It is explicit from the equations of the $\delta_i$ that the denominator is $D = D(\Delta) = 1$. For $1 \le i \le 9$, each $|\det M(\delta_i)| = 1$ and each Laurent polynomial $g_{\delta_i}$ is diagonal. According to Propositions 2.17 and 2.18, each $g_{\delta_i}$ is non-degenerate and ordinary. From Theorem 2.9, $g$ is ordinary, which means that the Newton polygon of $L^*(g,T)$ coincides with its Hodge polygon. In order to determine the slopes of the reciprocal roots of $L^*(g,T)$, we compute the Hodge numbers of $\Delta$ (Theorem 3.2).

Proof. For any integer $i$ ($1 \le i \le 9$), let $\Delta_i$ be the polytope generated by $\delta_i$ and the origin. The facial decomposition of $\Delta$ [Wan93] is $\Delta = \bigcup_{i=1}^{9} \Delta_i$, and it follows that the volumes satisfy
$$\mathrm{Vol}(\Delta) = \sum_{i=1}^{9} \mathrm{Vol}(\Delta_i).$$
Theorem 2.2 and the first result of Theorem 3.2 show that the degree of $L^*(g,T)$ is 9. For $1 \le i \le 8$, let $\alpha_i$ denote the reciprocal roots of $L(\vec a, T)$ and let $\beta_i$ denote the nontrivial reciprocal roots of $L^*(g,T)$. We now factor $L(\vec a, T)$ and $L^*(g,T)$ as follows.
Proposition 3.3. Notations as above. We have
$$L^*(g,T) = (1-T)\prod_{i=1}^{8}(1-\beta_i T), \qquad L(\vec a, T) = \prod_{i=1}^{8}(1-\alpha_i T),$$
where $|\beta_i| \le q^{5/2}$ and $\alpha_i = \beta_i/q$.

Proof. From Theorem 2.2, $L^*(g,T)$ is a polynomial of degree 9, and by Corollary 2.15 it has the trivial unit root $\beta_0 = 1$, which accounts for the factor $1-T$; this is the factorization (3.1). Substituting formula (3.1) into $L(\vec a, T) = \frac{1}{1-T/q}\,L^*(g, T/q)$, the factor $1-T/q$ coming from the trivial root cancels, and we obtain $L(\vec a, T) = \prod_{i=1}^{8}\big(1 - (\beta_i/q)T\big)$. The next corollary gives the slopes of the $\alpha_i$.
Corollary 3.4. Notations as above. Let $h_k$ be the number of $\alpha_i$ with slope $k$, i.e., $h_k = \#\{i \mid \mathrm{ord}_q\,\alpha_i = k\}$; the values of the $h_k$ are determined by the Hodge numbers of $\Delta$.
Proof. Since $g$ is ordinary, the Newton polygon of $L^*(g,T)$ coincides with its Hodge polygon. In this case, the slope $k$ segment of the Newton polygon has horizontal length $H_\Delta(k)$. By Lemma 2.4, $H_\Delta(k)$ equals the number of $\beta_i$ such that $\mathrm{ord}_q\,\beta_i = k$. The values of $h_k$ follow from $H_\Delta(k)$, since $\mathrm{ord}_q(\alpha_i) = \mathrm{ord}_q(\beta_i) - 1$.
3.2. Weights of $L(\vec a, T)$. In this subsection, we describe three methods to determine the weights of the $\beta_i$ and demonstrate the first one in detail; the weights of the $\alpha_i$ can be deduced directly from those of the $\beta_i$. For a Laurent polynomial $f$, the generating $L$-function $L^*(f,T)$ can be expressed as a product of powers of the Fredholm determinant using Dwork's trace formula (see Subsection 2.3). With the help of Wan's boundary decomposition theorem and formula (2.4), we identify that $L^*(g,T)$ has three real reciprocal roots, $1$, $q$, $q^2$. Restricted by the Hodge numbers, the weights of the $\beta_i$ can then be obtained.
Lemma 3.5. Let $f$ be a non-degenerate Laurent polynomial with an $n$-dimensional Newton polyhedron $\Delta$, and suppose the associated $L$-function has the form
$$L^*(f,T)^{(-1)^{n-1}} = \prod_i (1 - \alpha_i T),$$
where $|\alpha_i| = q^{\omega_i/2}$ and $\omega_i \in \mathbb{Z} \cap [0, n]$. If $\alpha_i$ is real, then $\alpha_i = \pm q^{\omega_i/2}$ and $\mathrm{ord}_q\,\alpha_i = \omega_i/2$. If $\alpha_i$ is non-real, then the conjugate $\bar\alpha_i$ is also a reciprocal root of $L^*(f,T)^{(-1)^{n-1}}$ and $\mathrm{ord}_q\,\alpha_i + \mathrm{ord}_q\,\bar\alpha_i = \omega_i \le n$. The lemma then follows.
Proof. Denote $D(q^iT) = \det(I - q^iT A_a(g))$. First we compute $D(T)$ using the boundary decomposition theorem. Recall the boundary decomposition $B(\Delta)$ of Definition 2.13. Let $N(i)$ denote the number of $i$-dimensional faces $\Sigma_{i,j}$ of $C(\Delta)$, where $0 \le i \le \dim \Delta$ and $1 \le j \le N(i)$. For the Newton polyhedron $\Delta = \Delta(g)$ of our example, $N(i)$ equals the number of $i$-dimensional faces of $\Delta$ containing the origin, which yields $N(0) = 1$, $N(1) = 6$, $N(2) = 15$ (see Table 2 in the appendix). The following argument for a face $\Sigma_{i,j}$ is independent of the choice of $j$; for simplicity, we abbreviate $\Sigma_{i,j}$ to $\Sigma_i$. Note that $\Sigma_i$ is an open cone and $\Sigma_i \in B(\Delta)$; let $\bar\Sigma_i$ denote its closure, and let $D_i(T)$ and $D_i'(T)$ be the Fredholm determinants attached to $\bar\Sigma_i$ and $\Sigma_i$, respectively. It is obvious that the unique 0-dimensional cone $\Sigma_0$ is the origin and $D_0(T) = D_0'(T) = 1 - T$. When $i = 1$, each $g_{\bar\Sigma_1}$ can be normalized to $x$ by a variable substitution, and $D_1(T)$ follows from formula (2.5); since the only boundary of $\Sigma_1$ is $\Sigma_0$, we get $D_1'(T)$ after eliminating $D_0'(T)$. Similarly, each $g_{\bar\Sigma_2}$ transforms to $x_1 + x_2$; then $L^*(g_{\bar\Sigma_2}, T) = \frac{1}{1-T}$, and since the boundary of $\Sigma_2$ consists of the origin and two sides of type $\Sigma_1$, we obtain $D_2'(T)$. For $3 \le i \le 5$, $\Sigma_i$ is not necessarily a simplicial cone. By Proposition 2.12 and Theorem 3.2, the reciprocal roots $\gamma_i$ of $D(T)$ satisfy $\#\{\gamma_i \mid \mathrm{ord}_q\,\gamma_i = k\} = W_\Delta(k)$ for $k \in \mathbb{Z}_{\ge 0}$. Using Wan's boundary decomposition theorem, $D(T)$ has a reciprocal root $\gamma_1$ with $|\gamma_1|_q = q^{-1}$, the reciprocal roots of the complementary factor $h_1(T)$ having slopes greater than 1; from formula (2.4), we obtain a corresponding factorization of $L^*(g,T)$ in which the reciprocal roots of the factor $h_2(T)$ have slopes greater than 1. We claim that $\gamma_1 = q$ and prove it by contradiction. Suppose $\gamma_1 \ne q$; then $L^*(g,T)$ has no complex reciprocal roots with slopes $\le 1$. Let $\beta_6$ be the unique slope 4 reciprocal root of $L^*(g,T)$. By formulas (3.3) and (3.4), $\beta_6$ would be non-real with weight $\ge 6$, while its weight is at most 5, which leads to a contradiction. Combining with Theorem 2.2, we obtain the factorization of $L^*(g,T)$. It can be deduced from the Hodge numbers $H_\Delta(k)$ and Lemma 3.5 that $\beta_i$ is non-real for $1 \le i \le 6$. By $H_\Delta(k)$ and formula (3.6), the weights satisfy $\omega_i' = 5$ for $1 \le i \le 6$. Our main theorem then follows from Proposition 3.3 and Corollary 3.4.
Theorem 3.7. Notations as above. We have
$$L(\vec a, T) = (1-T)(1-qT)\prod_{i=1}^{6}(1-\alpha_i T),$$
where $|\alpha_i| = q^{3/2}$ for $1 \le i \le 6$. The weights and slopes of the $\alpha_i$ are given in Table 1.

Let $e_\omega$ ($0 \le \omega \le 5$) be the number of $\beta_i$ with weight $\omega$. A second way is to calculate the $e_\omega$ by Denef-Loeser's weight formula, obtained using intersection cohomology [DL91]. This formula works for non-degenerate Laurent polynomials $f$ with respect to the Newton polyhedron $\Delta(f)$. When $\dim(\Delta(f)) \ge 5$ and $\Delta(f)$ is not simple at the origin, the formula becomes combinatorially complicated to use, which is precisely the case for our example $g$. After a cumbersome computation, we obtain $e_0 = 1$, $e_1 = 0$, $e_2 = 1$, $e_3 = 0$, $e_4 = 1$, $e_5 = 6$. This method also leads to the weights of all the $\beta_i$, but it cannot determine the exact values of the real roots.
Katz's calculation of the local monodromy of the Kloosterman sheaf provides the third strategy [Kat88]. The rank 3 Kloosterman sum is defined by
$$\mathrm{Kl}(a, b, c) = \sum_{x_1, x_2 \in \mathbb{F}_{q^k}^*} \psi\Big(\mathrm{Tr}_k\Big(ax_1 + bx_2 + \frac{c}{x_1x_2}\Big)\Big),$$
where $a, b, c \in \mathbb{F}_q^*$ and $\psi$, $\mathrm{Tr}_k$ are as defined in this paper. If the variable $x_5$ in $S_k^*(g)$ is fixed, the exponential sum symmetrically splits into a product of two rank 3 Kloosterman sums, i.e.,
$$S_k^*(g) = \sum_{x_5 \in \mathbb{F}_{q^k}^*} \mathrm{Kl}(a_1, a_2, a_5x_5)\,\mathrm{Kl}(a_3, a_4, a_6x_5)\,\psi\big(\mathrm{Tr}_k(-x_5)\big).$$
One then needs to calculate the weights of the cohomology on $\mathbb{G}_m$ of the tensor product of two Kloosterman sheaves with an Artin-Schreier sheaf. This reduces to understanding the local monodromy at 0 and infinity of this tensor product. Katz's strategy arrives at the same result as Wan's and is more advanced. As a corollary of Theorem 3.7, we obtain an estimate for the exponential sums $S_k(\vec a)$.
Theorem 3.8. Notations as above. We have $|S_k(\vec a)| \le 6q^{3k/2} + q^k + 1$.

Denef and Loeser showed in [DL91] that the conjecture of Adolphson and Sperber [AS90] about the number of weight $k$ reciprocal roots for an $n$-dimensional polytope ($k \le n \in \mathbb{Z}$) is false for some 5-dimensional simplicial Newton polyhedron. They proved the existence of a counterexample for $n = 5$ but did not propose an explicit one. Our example, in fact, provides the first explicit counterexample to Adolphson-Sperber's conjecture.

Conjecture 3.9 (Adolphson-Sperber [AS90]). Let $f \in \mathbb{F}_q[x_1, \dots, x_n, (x_1 \cdots x_n)^{-1}]$ be non-degenerate and suppose $\dim \Delta(f) = n$. Let $w_k$ be the number of roots of $L^*(f,T)$ with weight $k$, where $0 \le k \le n$. Then $w_k$ is given by an explicit combinatorial formula in terms of the quantities $V(\sigma)$ and $F_\sigma(i)$, where $V(\sigma)$ is the volume of $\sigma$, normalized by the assumption that a fundamental domain of the lattice $\mathbb{Z}^n \cap (\text{affine span of } \sigma)$ has unit volume, and $F_\sigma(i)$ is the number of $i$-dimensional faces of $\Delta(f)$ that contain $\sigma$.
Taking $k = 5$, we compute $w_5$ for our example using Adolphson-Sperber's conjecture. The result contradicts the one we obtained above; namely, Conjecture 3.9 fails for our example.
Physico-chemical properties of physiologically active polysaccharides from wheat tissue culture
Polysaccharides (PS) from wheat cell culture were isolated by liquid-liquid extraction. The molecular mass distribution was determined by gel-permeation chromatography (GPC) using dual detectors for simultaneous detection. It was supposed that the PS sample from wheat cell culture has a molecular weight of 1632 Da. The physico-chemical properties of the PS, such as solubility in different solvents, surface activity, ξ-potential, pH value, and polydispersity (PDI), were determined. The PS sample was soluble in water and insoluble in ethanol, acetone and chloroform. The ξ-potential of the PS was evaluated in order to determine its charge at different pH values from 3 to 9. As a result, the ξ values for the PS solution were negative throughout the pH range studied, varying from -2.85 mV (pH 3.0) to -21.1 mV (pH 9). Using the tensiometry method, the surface tension of the PS at the liquid/air interface was investigated. At 0.05% concentration the interfacial tension decreases slowly and reaches an equilibrium value after ~8-8.5 hours. The pH was equal to 5.6±0.05. For a PS solution of 0.001% at pH 5.5, the PDI was equal to 0.595.
Introduction
All cell wall polysaccharides except cellulose are water-soluble compounds; they are anchored in the cell wall by different types of bonds [1]. The main monosaccharides that are part of the cell wall are glucose, galactose, mannose, rhamnose and fucose, which contain six carbon atoms, as well as arabinose and xylose, containing five carbon atoms. A common component of plant cell wall polysaccharides are the uronic acids, modified sugars in which the -CH2OH group not involved in ring closure is replaced by a carboxyl group (-COOH). The most frequent uronic acid is galacturonic acid, which is a derivative of galactose [2].
One of the main physico-chemical parameters characterizing a macromolecule, whether it is naturally occurring or synthetically produced, is its «molecular weight» [3].
Determination of the molecular weight of water-soluble polysaccharides is usually carried out by gel-permeation chromatography (GPC), which effectively separates molecules based on their hydrodynamic volume. GPC is used in carbohydrate research to determine the molecular weight distributions of polysaccharides. The columns are calibrated using commercially available standards, which are commonly dextrans or pullulans [5].
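In its simplest form, the calibration step is a linear fit of log(MW) against elution volume for the standards, from which the molecular weight of an unknown peak is read off. A sketch with hypothetical numbers (the volumes and the intermediate standards below are illustrative, not a measured calibration):

```python
import numpy as np

# Hypothetical pullulan calibration: elution volume (ml) vs. molecular weight (Da)
elution_ml = np.array([12.1, 13.4, 14.8, 16.3, 17.9])
mw_da      = np.array([710000.0, 48000.0, 5600.0, 1460.0, 342.0])

slope, intercept = np.polyfit(elution_ml, np.log10(mw_da), 1)

def estimate_mw(volume_ml):
    """Molecular weight of an unknown peak from its elution volume."""
    return 10 ** (slope * volume_ml + intercept)

print(round(estimate_mw(16.2)))   # a peak eluting near the 1460 Da standard
```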
GPC of polysaccharides is often based on a size exclusion mechanism: the physical exclusion of molecules that are unable to penetrate the pore structure of the resin. Sample molecules with a size greater than the pore diameter of the support matrix cannot enter the pores; they are excluded and elute rapidly from the column in the void volume. Molecules with a size smaller than the pore diameter enter the pores and elute differentially in volumes that lie between the void volume and the void volume plus the pore volume [6].
Alternative approaches to determining the molecular weight are light scattering analysis and viscometry [7].
Polysaccharides consisting of one type of sugar unit uniformly linked in linear chains are usually water-insoluble, even when the molecules have a low molecular weight with degrees of polymerization (DP) of 20-30. Insolubility results from the close fit of the molecules and their preference for partial crystallization. An exception to the rule are the (1→6)-linked homoglycans, which, because of the extra degrees of freedom provided by rotation about the C-5 to C-6 bonds, give higher solution entropy values. Homoglycans with two types of sugar linkages, or heteroglycans composed of two types of sugars, are more soluble than purely homogeneous polymers. Ionized linear homoglycans are soluble but, like all soluble linear polymers, easily form gels because of segmental association, which sometimes may take the form of a double helix. As these junction zones develop a stronger tertiary structure, gel hardness increases [8].
Zeta potential analysis is a technique for determining the surface charge of particles in solution (colloids) [9]. The ξ-potential has values that typically range from +100 mV to -100 mV. The magnitude of the zeta potential is predictive of colloidal stability: particles with zeta potential values greater than +25 mV or less than -25 mV typically have high degrees of stability, whereas dispersions with a low zeta potential value will eventually aggregate due to Van der Waals inter-particle attractions [10].
Substances capable of decreasing the surface tension of a system (dγ/dc < 0) are referred to as surface-active substances (SAS), or surfactants [11]. It follows from the Gibbs equation that the adsorption of such compounds is positive, i.e., their concentration within the surface layer is higher than that in the bulk. For example, at air-water and water-hydrocarbon interfaces the surface-active compounds are those containing a hydrocarbon (non-polar) chain and a polar group (-OH, -COOH, -NH2, etc.) in their structure. Such an asymmetric (diphilic) structure of surfactant molecules accounts for their affinity to the nature of both contacting phases: a well-hydrated polar group has a strong affinity towards the aqueous phase, while the hydrocarbon chain has an affinity towards the non-polar phase [12].
The aim of the present work was to reveal the physico-chemical properties of extracellular polysaccharides (PS) isolated from wheat cell suspension culture, including determination of the molecular weight, surface activity and zeta potential of the total PS fraction.
Materials and methods
The molecular mass distribution was determined using a size exclusion system at the Max Planck Institute of Colloids and Interfaces (Potsdam, Germany). The chromatograph was equipped with a refractive index (RI) detector (GE, Sweden) operating at 265 nm and two columns in series, packed with Suprema 30 and Suprema 3000 (both from GE, Sweden). The range of the Suprema 30 column is from 100 to 30,000 Da, whereas the Suprema 3000 column is suitable for the range from 1000 to 3,000,000 Da. The analysis of molecular mass was carried out by this method [13]. In order to determine the molecular weight, the polysaccharide samples were analyzed by gel-permeation chromatography (GPC) using dual detectors for simultaneous detection. The system was calibrated with pullulan standards: a mixture of standard pullulans with different molecular weights (342 and 1460 Da, and 5600 Da up to 710 kDa) was dissolved in 0.1 N NaNO3 and applied to the same size exclusion Suprema 30 and Suprema 3000 columns under the same chromatography conditions as the samples. After that, 7 mg of the polysaccharide fraction was dissolved in 35 ml of 0.1 N NaNO3 solution, run at a flow rate of 1.000 ml/min, and filtered through a 0.22 μm membrane. On this basis we can suppose that our PS sample has a molecular weight of 1632 Da.
Determination of PS solubility and pH of PS solutions.
The polysaccharide was soluble in water and insoluble in ethanol, acetone and chloroform. The PS sample was dissolved in distilled water at a concentration of 1% (w/v), and pH measurements were conducted in three replications. As a result, it was found that the pH of the PS solutions was equal to 5.6±0.05.
According to the results obtained by the tensiometry method, it was established that the investigated cell culture polysaccharides have the features of surface-active substances (SAS), which are characterized by the capability to decrease the surface tension of solutions.
ξ-potential and dynamic light scattering. Electrostatic forces are usually the major driving force for the interaction of charged biopolymers in aqueous solutions, and so it was important to determine the electrical characteristics of the biopolymers used in this work. The ξ-potential of the PS was evaluated in order to determine the charge of the molecules at different pH values, from 3 to 9. The electronegativity of the solution increased with increasing pH, from -2.85 mV (pH 3.0) to -21.1 mV (pH 9). At pH 3 the ξ-potential value of the PS was equal to -2.85±0.27 mV.

Determination of surface activity. The ability of surfactant molecules to adsorb at interfaces is well known and is due to their amphiphilic structure. As a result of this process, adsorption layers are formed, which in the equilibrium condition can be characterized by surface tension isotherms.
To clarify the characteristics of the formation of interfacial adsorption layers of PS solutions, the kinetic dependencies of the decrease of interfacial tension with time have been studied (Figures 4, 5). Figure 4 shows that the interfacial tension of the PS at 0.05% concentration decreases slowly and reaches an equilibrium value after ~8-8.5 hours. The slow decrease of the interfacial tension is due to the slow diffusion of the macromolecular coils to the surface, as well as the conformational rearrangement of the active segments of the macromolecular statistical coils at the surface, which is reflected in the values of the relaxation time.
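The relaxation time mentioned above can be extracted by fitting a single-exponential decay toward equilibrium to the γ(t) record. A sketch using scipy's curve_fit (the data points are hypothetical, not the measured values of Figure 4):

```python
import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, g_eq, dg, tau):
    """Single-exponential decay toward the equilibrium surface tension g_eq."""
    return g_eq + dg * np.exp(-t / tau)

# Hypothetical gamma(t) readings (mN/m) over ~9 hours for a 0.05% solution
t_h = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 9.0])
g   = np.array([72.0, 68.5, 66.0, 62.8, 60.9, 59.8, 58.7, 58.2, 58.1])

(g_eq, dg, tau), _ = curve_fit(relaxation, t_h, g, p0=(58.0, 14.0, 2.0))
print(f"gamma_eq = {g_eq:.1f} mN/m, relaxation time tau = {tau:.1f} h")
```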
For the 1% solution the surface tension should be lower than for the 0.05% solution. We observe the opposite, which can be explained only by a certain multi-component composition of the substance and, consequently, a completely different composition of the interfacial layer (Figure 5). The data in Figure 6 indicate that the polysaccharide isolated from wheat is a practically neutral polysaccharide. Polysaccharides may be constituted either by polycations or by polyanions, depending on their functional groups, and may also be neutral, which is the case for different types of polysaccharides with a higher content of mannose and galactose units.
In neutral medium, at pH 5.5, it was established that the ξ-potential has a higher negative value of -18.8±2.05 mV (Figure 7). At pH 9 the value of the ξ-potential corresponded to -21.7±0.55 mV (Figure 8).
The pH dependence of the zeta potential (ξ) for the PS solution is shown in Figure 9. The ξ values for the PS solution were negative throughout the pH range studied, varying from -2.85 mV (pH 3.0) to -21.1 mV (pH 9). The increase of the negative ξ values with increasing pH can be attributed to the ionization of the carboxylic moieties (-COOH), giving rise to carboxylate groups (-COO-), while the decrease of the negative ξ values at pH 3 is due to the protonation of the amino moieties (-NH2), giving rise to ammonium groups (-NH3+). ξ-potential values are also related to the stability of solutions. As a general rule, absolute values of the ξ-potential above 60 mV indicate excellent stability; values from 60 to 30 mV indicate physical stability; values from 30 mV to 5 mV are at the limit of stability; and below 5 mV solutions are not stable and aggregates can be formed. According to this general rule, the results obtained at different pH show that the PS solutions are stable.
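The quoted rule of thumb is easy to encode. The small sketch below classifies the measured ξ values by that rule (thresholds taken verbatim from the rule stated above, not from an instrument standard):

```python
def zeta_stability(zeta_mv):
    """Colloidal stability category from |zeta|, per the general rule above."""
    z = abs(zeta_mv)
    if z > 60:
        return "excellent stability"
    if z >= 30:
        return "physically stable"
    if z >= 5:
        return "at the limit of stability"
    return "not stable (aggregates may form)"

for ph, zeta in [(3.0, -2.85), (5.5, -18.8), (9.0, -21.1)]:
    print(f"pH {ph}: zeta = {zeta} mV -> {zeta_stability(zeta)}")
```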
The information obtained by ξ-potential and dynamic light scattering (DLS) measurements is crucial to indicate the occurrence of stable functional nanostructures. The polydispersity index (PDI), obtained by DLS, is a measure of the width of the size distribution (Figure 10). When the polydispersity equals zero, the sample is monodisperse. Values of PDI close to or above 0.5 represent solutions that are heterogeneous in particle size and are characteristic of samples outside the standards. The term «particle» here represents the molecule of polysaccharide dispersed in dilute solution. For a PS solution of 0.001% at pH 5.5, the Z-average and PDI were 926.8 and 0.595, respectively. It was established that the PS sample is polydisperse.
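One common first approximation relates the PDI to the relative variance of the size distribution, PDI ≈ (σ/mean)²; commercial DLS software instead derives it from a cumulant fit of the correlation function. A sketch with a hypothetical distribution:

```python
import numpy as np

def pdi_from_distribution(diameters_nm, weights):
    """Relative-variance estimate of the polydispersity index, (sigma/mean)^2."""
    d = np.asarray(diameters_nm, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    mean = np.sum(w * d)
    var = np.sum(w * (d - mean) ** 2)
    return var / mean ** 2

# Hypothetical broad (polydisperse) intensity-weighted size distribution
print(round(pdi_from_distribution([150, 400, 900, 1500], [1, 2, 4, 3]), 3))
```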
In conclusion, the physicochemical properties of extracellular polysaccharides from wheat cell culture have been determined for the first time: molecular weight, solubility in different solvents, pH value, surface activity, ξ-potential, and polydispersity (PDI).
It was revealed that the total PS fraction consists of a high-molecular-weight proteoglycan (430,400 Da) and a low-molecular-weight glycan (1,632 Da).
The investigated PS are soluble in water and insoluble in ethanol, acetone, and chloroform; the pH of 1% solutions is 5.6±0.05.
The surface tension parameters of PS, as well as their dependence on the concentration of the PS solution, have been determined. The results suggest that the PS are high-molecular-weight non-ionogenic compounds. It was shown that the investigated PS have the characteristics of surface-active substances.
The established dependence of the ξ-potential on pH suggests the neutral nature of the investigated polymers. The dynamic light scattering (DLS) results show the polydispersity of the investigated PS.
Reflection matrices for the $U_{q}[sl(m|n)^{(1)}] $ vertex model
We investigate the possible regular solutions of the boundary Yang-Baxter equation for the vertex models associated with the graded version of the $A_{n-1}^{(1)}$ affine Lie algebra, the $U_{q}[sl(m|n)^{(1)}]$ vertex model, also known as the Perk-Schultz model.
Introduction
The R-matrix associated with the $U_q[sl(m|n)^{(1)}]$ superalgebra [4,5,6], whose matrix elements are the statistical weights of the Perk-Schultz vertex model [7], has the form (2.1), where $N = n + m$ is the dimension of the graded space with $n$ fermionic and $m$ bosonic degrees of freedom, and $E_{ij}$ refers to the $N \times N$ Weyl matrix with only one non-null entry, with value 1, in row $i$ and column $j$.
In what follows we shall adopt this grading structure; the corresponding Boltzmann weights, with functional dependence on the spectral parameter $u = \ln x$, are given below. Here $q$ denotes an arbitrary parameter. The R-matrix (2.1) satisfies symmetry relations, besides the standard properties of regularity and unitarity, namely the weaker property [8,9], where $\zeta(x) = a_1(x)a_1(x^{-1})$ and $M$ is a symmetry of the R-matrix. The matrix $K^-(u)$ satisfies the left boundary Yang-Baxter equation [10], also known as the reflection equation [11],
$$R_{12}(x/y)\,K_1^-(x)\,R_{21}(xy)\,K_2^-(y) = K_2^-(y)\,R_{12}(xy)\,K_1^-(x)\,R_{21}(x/y), \qquad (2.7)$$
which governs the integrability at the boundary for a given bulk theory. A similar equation should also hold for the matrix $K^+(u)$ at the opposite boundary. However, one can see from [12] that the corresponding quantity satisfies the right boundary Yang-Baxter equation (2.8), where $st = st_1 st_2$ and $st_i$ stands for the transposition taken in the $i$-th superspace. Therefore, we can start by searching for the matrices $K^-(x)$. In this paper only regular solutions will be considered. Regular solutions mean that the matrix $K^-(x)$ has the form (2.9) and satisfies the condition $K^-(1) = \mathrm{Id}$. Substituting (2.1) and (2.9) into (2.7), we get $N^4$ functional equations for the $k_{ij}$ matrix elements, many of which are dependent. In order to solve them, we shall proceed in the following way. First we consider the $(i, j)$ component of the matrix equation (2.7). By differentiating it with respect to $y$ and taking $y = 1$, we get algebraic equations involving the single variable $x$ and the $N^2$ parameters $\beta_{i,j}$. Second, these algebraic equations are denoted by $E[i, j] = 0$ and collected into blocks (2.13), and the equation $E[j, i] = 0$ is obtained from the equation $E[i, j] = 0$ by interchanging $x \leftrightarrow x^{-1}$. In this way, we can control all equations, and a particular solution is simultaneously connected with at least four equations.
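Once explicit matrices are in hand, equation (2.7) is straightforward to verify numerically. The following is a minimal sketch and not the paper's method: it assumes the ungraded convention $R_{21}(x) = P R_{12}(x) P$ with $P$ the permutation operator (the graded case would use the graded permutation) and user-supplied callables for $R_{12}(x)$ and $K^-(x)$; for a valid pair the residual should vanish up to rounding.

```python
# A minimal numerical check of the reflection equation (2.7). Sketch only:
# R12(x) (an N^2 x N^2 array) and K(x) (an N x N array) must be supplied, and
# we assume the ungraded convention R21(u) = P R12(u) P.
import numpy as np

def permutation_operator(N):
    """P on C^N (x) C^N with P(v (x) w) = w (x) v."""
    P = np.zeros((N * N, N * N))
    for i in range(N):
        for j in range(N):
            P[i * N + j, j * N + i] = 1.0
    return P

def reflection_residual(R12, K, x, y, N):
    """Frobenius norm of LHS - RHS of eq. (2.7) at spectral points x, y."""
    P = permutation_operator(N)
    Id = np.eye(N)
    R21 = lambda u: P @ R12(u) @ P
    K1 = lambda u: np.kron(K(u), Id)   # K acting in the first space
    K2 = lambda u: np.kron(Id, K(u))   # K acting in the second space
    lhs = R12(x / y) @ K1(x) @ R21(x * y) @ K2(y)
    rhs = K2(y) @ R12(x * y) @ K1(x) @ R21(x / y)
    return np.linalg.norm(lhs - rhs)
```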
3 The $U_q[sl(m|n)^{(1)}]$ K-matrix solutions
Analyzing the $U_q[sl(m|n)^{(1)}]$ reflection equations one can see that they possess a very special structure. The simplest equations involve the weights $a_i(x)$, and $a_i(x) \neq a_j(x)$ when the labels $i$ and $j$ correspond to different types of degree of freedom. It means that all $U_q[sl(m|n)^{(1)}]$ reflection matrices have the block-diagonal structure $K = \mathrm{diag}(K_b, K_f)$, where $K_b$ is an $m \times m$ matrix with entries $k_{i,j}$ for $i, j \in \{1, 2, ..., m\}$ and $K_f$ is an $n \times n$ matrix with entries $k_{r,s}$ for $r, s \in \{m+1, m+2, ..., N\}$. Now, by direct inspection of the equations (2.12), one can see that the diagonal equations $B[i, i]$ are uniquely solved by the relations (3.3). It means that we only need to find the $m(m-1)/2$ and $n(n-1)/2$ elements $k_{i,j}$ with $i < j$. Now we choose a particular $k_{i,j}$ ($i < j$) to be different from zero, with $\beta_{i,j} \neq 0$, and try to express all remaining non-diagonal matrix elements in terms of this particular element. We have verified that this is possible provided the conditions (3.4) hold. Combining (3.3) with (3.4) we obtain a very strong constraint (3.5) on the off-diagonal elements: for a given $k_{i,j}(x) \neq 0$, the only elements different from zero in the $i$-th row and in the $j$-th column are $k_{i,i}(x)$, $k_{j,i}(x)$ and $k_{j,j}(x)$. Analyzing these equations more carefully with the conditions (3.3) and (3.5), we have found, from the $m(m-1)/2$ elements $k_{i,j}(x)$ ($i < j$) $\in K_b$ and the $n(n-1)/2$ elements $k_{i,j}(x)$ ($i < j$) $\in K_f$, that there are three possibilities for choosing a particular $k_{i,j}(x) \neq 0$:
• Only one non-diagonal element and its symmetric partner are allowed to be different from zero. Thus, we have $m(m-1)/2$ reflection K-matrices with $N+2$ non-zero elements and $n(n-1)/2$ reflection K-matrices with $N+2$ non-zero elements. These solutions will be denoted by $K^{(0)}_{[ij]}$ and named Type-I solutions.
• For each $k_{i,j}(x) \neq 0$, additional non-diagonal elements and their symmetric partners are allowed to be different from zero provided they satisfy the equations (3.6). It means that we get a K-matrix with the entries of the principal diagonal and the entries of one secondary diagonal with the element $k_{i,j}(x)$ at the top. These solutions will be denoted by $K^{(\alpha)}_{[ij]}$ and named Type-II solutions.
• For each $k_{i,j}(x) \neq 0$, additional non-diagonal elements and their symmetric partners are allowed to be different from zero provided they satisfy the corresponding equations. It means that we get a K-matrix with the principal diagonal elements and the elements of two secondary diagonals, with the top elements $k_{i,j}(x) \in K_b$ and $k_{r,s}(x) \in K_f$. These solutions will be denoted by $K^{(\alpha)(\beta)}_{[ij][rs]}$ and named Type-III solutions.
Here the symbols $\alpha$ and $\beta$ denote the number of additional pairs of non-zero entries $(k_{a,b}(x), k_{b,a}(x))$ on the secondary diagonals.
For example, the $U_q[sl(4|2)^{(1)}]$ model has a K-matrix in which we identify 7 Type-I solutions. In addition to $K^{(0)}_{[14]}$ we have the Type-II solution with the constraint equation $k_{1,4}k_{4,1} = k_{2,3}k_{3,2}$ (the pairs of entries of the same secondary diagonal), and the Type-III solution with the constraint equation $k_{1,4}k_{4,1} = k_{2,3}k_{3,2} = k_{5,6}k_{6,5}$, since $(1+4) = (5+6) \bmod 6$, $k_{1,4} \in K_b$ and $k_{5,6} \in K_f$. Although we already know how to count the K-matrices for the $U_q[sl(m|n)^{(1)}]$ models, we still have to identify which of them are similar. Indeed there is a $Z_N$ similarity transformation (3.8) which maps their matrix-element positions, where the $g_a$ are the $Z_N$ matrices. In order to do this we can choose $K_0$ as $K^{(\alpha)}_{[12]}$, and the similarity transformations (3.8) give us the $K_a$ matrices whose matrix elements are in the same positions as those of the $K^{(\alpha)}_{[1j]}$ and $K^{(\alpha)}_{[2m]}$ matrices. However, due to the fact that the relations (3.4) involve the ratio $c_2(x)/c_1(x) = x$, as well as the additional constraints (3.6), we could not find a similarity transformation among these K-matrices, even after a gauge transformation.
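The effect of such a similarity transformation on element positions can be illustrated concretely. In the sketch below we assume $g$ is the cyclic shift matrix (the explicit $g_a$ are not displayed in this extract), and we track how conjugation moves the single off-diagonal pair of a $K^{(\alpha)}_{[12]}$-type pattern along the secondary diagonal.

```python
# Sketch of a Z_N similarity transformation acting on K-matrix element
# positions. Assumption: g is the N x N cyclic shift matrix; the paper's g_a
# are not displayed in this extract.
import numpy as np

def cyclic_shift(N):
    g = np.zeros((N, N))
    for i in range(N):
        g[i, (i + 1) % N] = 1.0
    return g

N = 6
K0 = np.zeros((N, N))
K0[0, 1] = K0[1, 0] = 1.0            # the symmetric pair k_{1,2}, k_{2,1}
g = cyclic_shift(N)

K1 = g.T @ K0 @ g                    # conjugation by g
rows, cols = np.nonzero(K1)
print(list(zip(rows + 1, cols + 1)))  # [(2, 3), (3, 2)]: the pair shifts mod N
```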
Even for the Type-I solutions the similarity accounting is not simple, due to the presence of three types of scalar functions and the constraint equations for the parameters $\beta_{i,j}$. Nevertheless, as we have found a way to write all solutions, we can leave the similarity accounting to the reader.
Having identified these possibilities we may proceed to find the $N$ diagonal elements $k_{i,i}(x)$ in terms of the non-diagonal elements $k_{i,j}(x)$ for each $K^{(\alpha)}_{[ij]}$ matrix. This procedure is now standard [13]. For instance, if we are looking for $K^{(1)(0)}_{[14][56]}$, the non-diagonal elements $k_{i,j}(x)$ ($i + j = 5 \bmod 6$) in terms of $k_{1,4}(x) \neq 0$ are given by (3.10). Substituting (3.10) into the reflection equations we can now easily find the $k_{i,i}(x)$ elements up to an arbitrary function, in this example identified as $k_{1,4}(x)$. Moreover, their consistency relations will yield some constraint equations for the parameters $\beta_{i,j}$.
After we have found all diagonal elements in terms of $k_{i,j}(x)$, we can, without loss of generality, choose the arbitrary functions conveniently. This choice allows us to work out the solutions in terms of the functions $f_{i,i}(x)$ and $h_{i,j}(x)$ defined by (3.12), for $i, j = 1, 2, \dots, N$. Now we will simply present the general solutions and write them explicitly for the first values of $N$ in the appendices.
The quasi-diagonal K-matrices
For Type-I and Type-II solutions we have the same general K-matrix form (3.13). For the Type-III solutions we have matrices (3.14) with non-diagonal entries on two secondary diagonals with different degrees of freedom but related by the $Z_N$ symmetry. Note that for $\alpha, \beta \neq 0$ we can use $\alpha = [\frac{j-i-1}{2}]$ and $\beta = [\frac{r-s-1}{2}]$. Moreover, we have defined three more types of scalar functions, $X_{j+1}(x)$, $Y^{(i)}_l(x)$ and $Z_i(x)$. The number of free parameters is fixed by the constraint equations, which depend on the presence of these scalar functions: when $Y^{(i)}_l(x)$ is present in the K-matrix we have constraint equations of the type (3.16); when $Z_i(x)$ is present the corresponding constraints are of the type (3.17); and the presence of at least one $X_{j+1}(x)$ yields a third type of constraints (3.18). Here we recall again that $i + j = r + s \bmod N$. From (3.13) and (3.14) we can see that in each solution we have at most two scalar functions in addition to the $f_{ii}(x)$, and $\alpha + \beta$ pairs of $h(x)$ functions in addition to the $h_{ij}(x)$ and $h_{rs}(x)$ functions. It means that our Type-I matrices are 3-parameter solutions. The Type-II matrices have $3 + \alpha$ free parameters and the Type-III matrices have $4 + \alpha + \beta$ free parameters.
The diagonal K-matrices
For the diagonal solutions we have $\beta_{i,j} = 0$. It means that all scalar functions $h_{i,j}(x)$ are equal to zero and we have to solve the constraint equations (3.16)-(3.18). Now, we can recall (3.13) and (3.14) and replace the scalar function $X_{j+1}(x)$ by $x^2 f_{11}(x^{-1})$ or by $x^2 f_{ii}(x^{-1})$, and the scalar function $Z_i(x)$ by $f_{ii}(x^{-1})$ or by $f_{ii}(x)$, in order to get the diagonal solutions. This follows from the substitution of the solutions of (3.16)-(3.18) into (3.15), through limits of the type $\lim_{\beta_{j,j} \to \pm\beta_{1,1}+2}$. This reduction procedure gives us the diagonal solutions. From these results we can see that the $U_q[sl(m|n)^{(1)}]$ model has many diagonal solutions. In particular, the substitution (3.20) yields the diagonal solutions already derived in [9] and used in the study of the nested Bethe ansatz for the Perk-Schultz model with open boundary conditions [14]. Moreover, these diagonal solutions have been used recently in [15] for the study of the nested Bethe ansatz for 'all' open chains with diagonal boundary conditions.
Conclusion
After a systematic study of the functional equations we find that there are three types of solutions for the $U_q[sl(m|n)^{(1)}]$ model. We call Type-I the K-matrices with three free parameters and $n+m+2$ non-zero matrix elements. These solutions were denoted by $K^{(0)}_{[ij]}$ to emphasize the non-zero element out of the diagonal and its symmetric partner, which results in $n(n-1)/2$ and $m(m-1)/2$ reflection K-matrices.
The Type-II and Type-III solutions are more interesting because they have many free parameters. We also used a reduction procedure to obtain the diagonal solutions. However, we could not derive a similar procedure to obtain the Type-I solutions from the Type-II solutions, or the Type-II solutions from the Type-III solutions. Thus, we believe that they are independent.
The corresponding $K^+(x)$ are obtained from the isomorphism (2.8). Outside this classification we have the trivial solution ($K^- = 1$, $K^+ = M$) for these models.
Before ending our discussion of the $U_q[sl(m|n)^{(1)}]$ reflection matrices, we will make (at a referee's suggestion) the comparison with the $sl(m+n)$ reflection matrices [13]. The diagonal solutions and the Type-I solutions are the same for both models. The Type-III solutions of the $U_q[sl(m|n)^{(1)}]$ model are identified with the Type-II solutions of the $sl(m+n)$ model. However, the Type-II solutions of the $U_q[sl(m|n)^{(1)}]$ model are different because in the graded case the $Z_N$ symmetry (3.7) is lost when the labels have the same degree of freedom (3.6).
Acknowledgment: This work was supported in part by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), Brasil, and by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Brasil.
A Some examples
In this appendix some K-matrices are written explicitly, only for the cases with $m \geq n$. The cases $m < n$ are easily deduced from the $U_q[sl(m|n)^{(1)}]$ solutions with $m > n$ using (3.8).
Comparisons of Flow Dynamics of Dual-Blade to Single-Blade Beveled-Tip Vitreous Cutters
Introduction: The aim of this study was to compare the flow dynamics of the dual-blade to the single-blade beveled-tip vitreous cutters. Methods: The aspiration rates of balanced salt solution (BSS) and swine vitreous were measured for the 25-gauge and 27-gauge dual- and single-blade beveled-tip vitreous cutters. The flow dynamics of BSS and diluted vitreous mixed with fluorescent polymer at the maximal cutting rates and the reflux of BSS were measured in images obtained by a high-speed camera. The distal end of the cutter was defined as the head end. Results: The aspiration rates of BSS and vitreous by the 25- and 27-gauge dual-blade cutters were significantly higher than those of both single-blade cutters at the maximal cutting rate (all p ≤ 0.01). The mean aspiration flow of BSS in front of the port from a lateral view was significantly faster for both dual-blade cutters than for both single-blade cutters (p = 0.003, p = 0.019). The angle of the mean flow of BSS of both dual-blade cutters was from the distal end (p < 0.001, p < 0.001) but that of the single-blade cutters was from the proximal end. The velocity and angle of the mean reflux flow of both types of cutters were not significantly different. The mean aspiration flow of diluted vitreous was significantly faster for 25-gauge dual-blade cutters with the angle more from the proximal end and 27-gauge dual-blade cutters more from the distal end than both single-blade cutters (p = 0.018, p = 0.048). Conclusion: The dual-blade beveled-tip vitreous cutters improve the efficiency of the vitrectomy procedures and maintain the distal aspirating flow by the beveled tip.
Introduction
Small-gauge or microincision vitrectomy was introduced in 2002, and it was reported to improve patient comfort with less conjunctival scarring, less postoperative inflammation, and earlier visual recovery than after 20-gauge vitrectomy [1][2][3][4][5][6][7]. The closer distance of the opening port from the tip of the small-gauge vitreous cutters enabled dissection closer to the retina. These cutters served as multifunctional instruments because there was no need for forceps and scissors [8][9][10]. In addition, the higher cutting rates of the small-gauge vitreous cutters improved the efficiency of vitreous cutting, which then reduced tissue traction during the cutting [11][12][13][14][15][16].
The conventional vitreous cutter has a port on one side and an inner tube moves back and forth to open and close the port which creates guillotine cutting [5,6]. Changes in the design of the inner tube have been made to increase the flow and cutting rates [17]. These newly designed vitreous cutters with an additional inner opening have been commercially available as a dual-blade cutter which enabled a constant aspiration rate with a higher duty cycle [5,18,19]. These changes increased the opening time of the port, and the cutting was from both the proximal (incoming) and distal (outgoing) blades of the cutter [5,[19][20][21][22][23].
Another newly designed beveled-tip cutter has been developed with an oblique-angled head of 30°, while the conventional cutters have flat-head tips [24,25]. The aspirating flow stream of the conventional flat-tip cutters goes through the opening port, then changes direction through the inner tube by 90° during the cutting and aspiration. The oblique-angle design of the cutter tip allows the surgeon to place the head parallel and closer to the surface of the retina and preretinal tissue [10,24]. In addition, the beveled cutter led to a straighter aspiration from the opening port to the inner tube to improve the effectiveness of removing vitreous with less inner-flow turbulence [24]. We have reported that the aspiration rate of the beveled-tip cutters was higher than the conventional flat-tip cutter due to the enhancement of the proximal flow [25]. In addition, the beveled-tip cutters also had a faster reflux flow rate. However, the flow dynamics of dual-blade, beveled-tip vitreous cutters have not been examined in detail. It has also not been determined whether these two alterations of the cutter affected the cutting and aspirating properties. Thus, the purpose of this study was to compare the aspiration rates and flow dynamics of the dual-blade to that of the single-blade beveled-tip vitreous cutters during the aspiration of balanced salt solution (BSS) and swine vitreous.
Aspiration Rates of Vitreous Cutters while Aspirating BSS and Swine Vitreous
The 25-gauge and the 27-gauge beveled dual-blade (HYPERVIT® dual-blade cutter; Alcon Laboratories, Fort Worth, TX, USA) and single-blade vitreous cutters (Advanced ULTRAVIT® beveled high-speed cutter; Alcon Laboratories) driven by the CONSTELLATION® Vision System (Alcon Laboratories) were studied. To measure the aspiration rates, a 100-mL beaker was filled with approximately 70 mL of BSS or swine vitreous and placed on a weight scale. The vitreous cutter was held vertically by the arms of a support stand, and the tip of the cutter was placed deep enough below the surface of the BSS or vitreous. The weight loss of BSS or swine vitreous in the beaker was measured every 15 s after an initial 10 s, as described [25]. The volume of BSS or vitreous aspirated by the 25-gauge and 27-gauge cutters was calculated from the reduction of the weight (n = 9) at cutting rates of 5,000, 10,000, 15,000, and 20,000 cuts/min (cpm) for the dual-blade cutters and 2,500, 5,000, 7,500, and 10,000 cpm for the single-blade cutters. Because the dual-blade cutters have an additional port in the inner tube, there are two cuts per stroke, and the cpm is double that of the single-blade cutter. The aspiration pressure was set at 250, 450, and 650 mm Hg. To calculate the aspiration rates from the weight loss, the relative density of BSS was set at 1.06 g/mL and that of swine vitreous at 1.00 g/mL.
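A short sketch of this aspiration-rate computation makes the bookkeeping explicit; the readings below are hypothetical, while the densities and the 15-s interval are those stated above.

```python
# Sketch of the aspiration-rate computation described above: weight readings
# taken every 15 s are converted to mL/min using the stated relative densities.
import numpy as np

DENSITY = {"BSS": 1.06, "vitreous": 1.00}   # g/mL, as set in the text

def aspiration_rate_ml_per_min(weights_g, fluid="BSS", dt_s=15.0):
    """Mean flow rate from successive scale readings (grams)."""
    w = np.asarray(weights_g, dtype=float)
    loss_per_interval = -np.diff(w)                 # g removed per dt_s
    ml_per_interval = loss_per_interval / DENSITY[fluid]
    return float(np.mean(ml_per_interval) * 60.0 / dt_s)

# Hypothetical readings (g) every 15 s during BSS aspiration:
print(aspiration_rate_ml_per_min([70.0, 66.3, 62.5, 58.8, 55.1], "BSS"))
# -> about 14.1 mL/min
```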
Flow Dynamics during Aspiration of BSS or Swine Vitreous
To determine the flow dynamics of BSS and swine vitreous, fluorescent polymer microspheres (FLUOstar®; EBM Corp., Tokyo, Japan) were mixed with the BSS or swine vitreous. For the flow analysis of the vitreous, the vitreous was diluted with an equal amount of BSS (1:1) to mimic core vitrectomy after the infusion is turned on and irrigating fluid is infused into the vitreous cavity. The tip of the 25- or 27-gauge dual- or single-blade vitreous cutter was aligned vertically in the BSS or the diluted vitreous in a transparent acrylic cube container with 3-cm sides (Fig. 1). A slit beam from a green laser (Nd:YAG, wavelength = 532 nm) was projected through a cylindrical lens to create a cross-sectional image of the movement of the fluorescent microspheres. For the lateral view, the slit-beam laser was projected vertically into the container toward the opening port of the cutter, and the cross-sectional image was viewed from the side opposite the opening port. For the bottom view, the slit-beam laser was projected horizontally into the container at the level of the opening port of the cutter, and the cross-sectional image was obtained through a mirror located beneath the container. The aspiration pressure of the vitrectomy device was set at 650 mm Hg, and the cutting rate was set at 0 and 20,000 cpm for the dual-blade cutters and 0 and 10,000 cpm for the single-blade cutters.
Cross-sectional video images of the flow, seen as the movements of the fluorescent microspheres, were recorded with a high-speed video camera, HAS-D71 (DITECT, Tokyo, Japan). The velocity and direction of movement of the fluorescent microspheres were analyzed with the Flownizer 2D software (DITECT). The velocity and direction of the mean aspiration flow of BSS or diluted vitreous, as parameters of the flow dynamics, were represented by the direction and length of arrows and by color maps (n = 5). The mean aspiration flow was calculated as the velocity and angle of the mean flow at the measured points within an area in front of the port, from video images of at least 50 cutting cycles (n = 5). The proximal end of the cutter was defined as the body end and the distal end as the head end of the cutter (Fig. 1b). The angle of the flow direction in the lateral view was defined as the horizontal angle, with the opening port set at zero, positive toward the distal end and negative toward the proximal end.
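The mean-flow summary described above reduces to averaging the per-particle velocity vectors within the analysis window. The sketch below is illustrative and is not the Flownizer 2D API; the coordinate convention (one component along the cutter axis, one directed into the port) is our assumption based on the angle definition just given.

```python
# Sketch of the mean-aspiration-flow summary: average the particle velocity
# vectors, then report speed and the signed angle (0 = straight into the port,
# positive toward the distal/head end, negative toward the proximal/body end).
# Illustrative only; not the Flownizer 2D API.
import numpy as np

def mean_flow(v_axial, v_port):
    """v_axial: velocity along the cutter axis (distal = +, mm/s);
    v_port: velocity component directed into the opening port (mm/s)."""
    ma, mp = float(np.mean(v_axial)), float(np.mean(v_port))
    speed = float(np.hypot(ma, mp))
    angle_deg = float(np.degrees(np.arctan2(ma, mp)))
    return speed, angle_deg

# Hypothetical tracked vectors:
speed, angle = mean_flow([1.2, 0.8, 1.0], [7.0, 7.4, 7.2])
print(f"mean flow {speed:.2f} mm/s at {angle:+.1f} deg")
```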
In the lateral view, the maximum aspiration flow of BSS in an area of 3 × 3 mm around the tip of the vitreous cutter was evaluated when the fastest aspiration flow was seen in the video images during the aspiration of BSS without cutting (0 cpm, aspiration only) and at the maximal cutting rate of 20,000 cpm for the dual-blade cutters and 10,000 cpm for the single-blade cutters. The mean aspiration flow during the aspiration of BSS at a cutting rate of 0 cpm and at the maximal cutting rate was calculated within an area of 1 × 2 mm in front of the port (n = 5). The maximum aspiration flow during the removal of diluted vitreous in an area of 5.4 × 3.2 mm around the tip of the vitreous cutter was evaluated at the maximal cutting rate of the dual-blade and the single-blade cutters. The mean aspiration flow while removing diluted vitreous at the maximal cutting rate of the dual-blade and the single-blade cutters was calculated as the average of the velocity and angle of the aspiration flow at the measured points within an area of 2.5 × 2.5 mm in front of the port (n = 5).
In the bottom view, the mean aspiration flow during the aspiration of BSS at a cutting rate of 20,000 cpm for the dual-blade cutters and 10,000 cpm for the single-blade cutters was calculated as the average velocity. The angle of the flow was measured within an area of 2.5 × 2.5 mm around the cutter port (n = 5).
Flow Dynamics of Expelled BSS in Reflux Mode
A glass beaker was filled with 100 mL of BSS and set on a weight scale. The change in the weight of the beaker with the BSS was measured during the reflux of fluid every 15 s after an initial backflushing of at least 10 s. The volume of BSS expelled by the 25-gauge and 27-gauge single- or dual-blade vitreous cutters at expelling pressures of 40, 80, and 120 mm Hg was calculated from the changes in weight (n = 5).
To determine the direction and rate of the expelled flow, the tip of the 25-gauge or 27-gauge dual- or single-blade vitreous cutter was placed into a 3-cm transparent acrylic cube filled with BSS mixed with fluorescent polymer microspheres. The expelling pressure of the BSS was set at 40 mm Hg in the reflux mode. Cross-sectional images of the movement of the fluorescent microspheres were recorded with a high-speed camera (HAS-D71; n = 5) during the backflushing of the BSS. The direction of the movement and the velocity were analyzed with the Flownizer 2D software, and the mean expelling flow in a 4.2 × 3.5 mm area in front of the opening port of the vitreous cutters in the lateral view was calculated.
Statistical Analyses
The significance of the differences between the two types of dual-blade and single-blade vitreous cutters was determined by the Mann-Whitney test. Statistical analyses were performed with the software on the VassarStats website (http://vassarstats.net). A p value of <0.05 was taken to be statistically significant.
Aspiration Rates of Dual-Blade Vitreous Cutters for BSS
The aspiration rate for BSS by the 25-gauge dual-blade cutter did not decrease significantly from that at 0 cpm to that at 5,000 cpm (p = 0.117, Mann-Whitney test, Fig. 2), from that at 5,000 cpm to that at 10,000 cpm (p = 0.413), from that at 10,000 cpm to that at 15,000 cpm (p = 0.298), or from that at 15,000 cpm to that at 20,000 cpm (p = 0.117) at an aspiration pressure of 650 mm Hg. In contrast, the aspiration rate of BSS by the 25-gauge single-blade cutter at an aspiration pressure of 650 mm Hg decreased significantly from that at 0 cpm to that at 2,500 cpm (p = 0.0002, Mann-Whitney test) and from that at 2,500 cpm to that at 5,000 cpm (p = 0.0002). There was an increase in the aspiration rate of BSS from that at 5,000 cpm to that at 7,500 cpm (p = 0.189) and from that at 7,500 cpm to that at 10,000 cpm (p = 0.3446), but these increases were not significant.
Fig. 2. Aspiration rates of BSS and swine vitreous by dual-blade and single-blade vitreous cutters. a Aspiration rates of BSS by the 25-gauge and 27-gauge dual-blade cutters show higher aspiration rates at both lower and higher cutting rates. In contrast, the aspiration rates of BSS by the 25-gauge and 27-gauge single-blade cutters decrease at higher cutting rates. b Aspiration rates of swine vitreous by the 25-gauge and 27-gauge dual-blade and single-blade cutters increase with increasing cutting rates, more so for the dual-blade cutters. Vit, swine vitreous; dual, dual-blade beveled vitreous cutter; single, single-blade beveled vitreous cutter. * 5,000 cpm for the single-blade cutter = 10,000 cpm for the dual-blade cutter; 10,000 cpm for the single-blade cutter = 20,000 cpm for the dual-blade cutter.
The aspiration rate of BSS by the 27-gauge dual-blade cutter decreased significantly from that at 0 cpm to that at 5,000 cpm (p = 0.017, Mann-Whitney test, Fig. 2) but did not decrease significantly from that at 5,000 cpm to that at 10,000 cpm (p = 0.50), from that at 10,000 cpm to that at 15,000 cpm (p = 0.189), or from that at 15,000 cpm to that at 20,000 cpm (p = 0.268) at an aspiration pressure of 650 mm Hg. The aspiration rate of BSS by the 27-gauge single-blade cutter at an aspiration pressure of 650 mm Hg decreased significantly from that at 0 cpm to that at 2,500 cpm (p = 0.0002, Mann-Whitney test) and from that at 2,500 cpm to that at 5,000 cpm (p = 0.0013). On the other hand, there was an increase from that at 5,000 cpm to that at 7,500 cpm (p = 0.363) and from that at 7,500 cpm to that at 10,000 cpm (p = 0.363), but the increases were not significant.
The aspiration rate of BSS by the 25-gauge dual-blade cutter at an aspiration pressure of 650 mm Hg was 14.23 ± 0.42 mL/min at 20,000 cpm and 14.08 ± 0.46 mL/min at 10,000 cpm. These were increases of 56% (p < 0.0001, Mann-Whitney test, Table 1) and 64% (p < 0.0001) compared to the single-blade cutter values of 9.12 ± 0.71 mL/min at 10,000 cpm and 8.56 ± 0.76 mL/min at 5,000 cpm. The aspiration rate of BSS by the 27-gauge dual-blade cutter at an aspiration pressure of 650 mm Hg was 7.50 ± 0.53 mL/min at 20,000 cpm and 7.73 ± 0.34 mL/min at 10,000 cpm, which were greater than the single-blade cutter values of 4.44 ± 0.45 mL/min at 10,000 cpm and 4.12 ± 0.41 mL/min at 5,000 cpm. These were increases of 69% (p < 0.001) and 88% (p < 0.0001), respectively.
Aspiration Rates of the Vitreous Cutter for Swine Vitreous
The aspiration rate for swine vitreous by the 25-gauge dual-blade cutter increased with increases of the cutting rate (Fig. 2), and it increased significantly at 20,000 cpm from that at 5,000 cpm (p = 0.019, Mann-Whitney test) at an aspiration pressure of 650 mm Hg. However, the increase was not significant from that at 5,000 cpm to that at 10,000 cpm (p = 0.123), from that at 10,000 cpm to that at 15,000 cpm (p = 0.181), or from that at 15,000 cpm to that at 20,000 cpm (p = 0.468). In contrast, the aspiration rate for swine vitreous by the 25-gauge single-blade cutter did not increase significantly at 10,000 cpm from that at 2,500 cpm (p = 0.111), from that at 2,500 cpm to that at 5,000 cpm (p = 0.50), from that at 5,000 cpm to that at 7,500 cpm (p = 0.312), or from that at 7,500 cpm to that at 10,000 cpm (p = 0.111). The aspiration rate for swine vitreous by the 27-gauge dual-blade cutter increased with an increase of the cutting rate (Fig. 2). There was a significant increase at 20,000 cpm and 10,000 cpm from that at 5,000 cpm (p = 0.0001, p = 0.0274) at an aspiration pressure of 650 mm Hg. However, the aspiration rates did not change significantly from that at 10,000 cpm to that at 15,000 cpm (p = 0.164) or from that at 15,000 cpm to that at 20,000 cpm (p = 0.071). In contrast, the flow rate for removing swine vitreous by the 27-gauge single-blade cutter did not increase significantly at 10,000 cpm from that at 2,500 cpm (p = 0.097), from that at 2,500 cpm to that at 5,000 cpm (p = 0.397), from that at 5,000 cpm to that at 7,500 cpm (p = 0.113), or from that at 7,500 cpm to that at 10,000 cpm (p = 0.334).
The aspiration rate for swine vitreous by the 25-gauge dual-blade cutter at an aspiration pressure of 650 mm Hg was significantly higher at 10,000 cpm (p = 0.009, Table 1) and at 20,000 cpm (p = 0.001) compared to that by the single-blade cutters at 5,000 and 10,000 cpm. These were increases of 42% (p < 0.0001, Mann-Whitney test) and 46% (p < 0.0001), respectively. The aspiration rate for swine vitreous by the 27-gauge dual-blade cutter at an aspiration pressure of 650 mm Hg was significantly higher at 10,000 cpm (p = 0.010) and at 20,000 cpm (p < 0.0001) than that by the single-blade cutters at 5,000 and 10,000 cpm. These were increases of 47% (p = 0.0104, Mann-Whitney test) and 55% (p < 0.0001), respectively.
Lateral View of Flow Dynamics while Aspirating BSS
The color map analysis of the flow dynamics showed that the direction and velocity of the aspirating flow changed according to the movement of the inner blade of the vitreous cutter. The maximal aspiration flow of the 25-gauge and 27-gauge dual-blade cutters indicated a faster oblique flow from the distal end of the cutter at 0 cpm (aspiration only) and at the maximal cutting rate of 20,000 cpm at an aspiration pressure of 650 mm Hg (Fig. 3). The oblique flow appeared to be parallel to the angle of the bevel of the cutter tip. The maximal aspiration flow of the 25-gauge and 27-gauge single-blade cutters indicated a faster oblique flow from the distal end at 0 cpm but a faster oblique flow from the proximal end at 10,000 cpm.
Because the direction and velocity of the aspirating flow changed with the cutting movement of the cutter, the velocity and angle of the mean aspiration flow were analyzed. The velocity of the mean aspiration flow in the 1 × 2 mm area in front of the opening port of the cutter at 0 cpm (aspiration only) was slightly but not significantly greater with the 25-gauge and 27-gauge dual-blade cutters (7.60 ± 0.50 mm/s and 3.66 ± 1.08 mm/s, respectively; Fig. 4) than with the 25-gauge and 27-gauge single-blade cutters (7.1 ± 2.1 mm/s, p = 0.215 and 3.07 ± 0.30 mm/s, p = 0.136, respectively). The angle of the mean aspiration flow in the same area at 0 cpm was slightly but not significantly greater with the 25-gauge and 27-gauge dual-blade cutters (7.01 ± 1.63° and 6.61 ± 3.43°, respectively, Fig. 4) than with the 25-gauge and 27-gauge single-blade cutters.
Bottom View of Flow Dynamics while Aspirating BSS
From the bottom view, the direction and velocity of the aspirating flow changed in a wavy pattern around the vitreous cutter in association with the cutting movement of the cutter. However, the velocity and angle of the mean aspirating flow were similar among the vitreous cutters, with a faster aspirating flow from the anterior side of the port of the cutter (Fig. 6). The mean aspirating flow of the 25-gauge and 27-gauge dual-blade cutters appeared to be faster than that of the 25-gauge and 27-gauge single-blade cutters according to the scale bar in Figure 6.
Flow Dynamics of Expelled BSS in Reflux Mode
The volume of the expelled fluid in the proportional reflux mode increased as the expelling pressure increased with both types of cutters (Fig. 7). The volume was not significantly different between the dual-blade and single-blade cutters of either gauge.
Flow Dynamics during Aspiration of Diluted Swine Vitreous
The movements of the diluted vitreous gel during cutting and aspiration by the vitreous cutters were a combination of rapid and slow movements, depending on the density and structure of the vitreous fibers. Even though we used diluted vitreous, the movement of the vitreous was not uniform during cutting, and periods of smoother movement during the vitreous cutting were used for the flow analysis (online suppl. Videos 1, 2; see www.karger.com/doi/10.1159/000521468 for all online suppl. material). This was detected in the static images taken from the video of the high-speed camera at the maximal cutting rate with an aspiration pressure of 650 mm Hg (Fig. 8). The mean velocity in the 2.5 × 2.5 mm area in front of the cutter in the lateral view indicated a significantly greater aspiration velocity with the 25-gauge and 27-gauge dual-blade cutters than with the 25-gauge and 27-gauge single-blade cutters (p = 0.018 and p = 0.048, respectively). The angle of the aspirating flow with the 25-gauge dual-blade cutter was significantly smaller (more toward the proximal end) than that of the 25-gauge single-blade cutter. In contrast, the angle of the aspirating flow with the 27-gauge dual-blade cutter was significantly greater (more toward the distal end) than that of the 27-gauge single-blade cutter.
Discussion
The results showed that the flow dynamics during vitreous aspiration were dependent on the properties of the aspirated fluid. In addition, the aspirating flow of the vitreous had whip-like movements because of the mixture of collagen fibers, hyaluronic acid, and vitreal fluids [26,27]. These components make the flow difficult to predict. Fluids can be aspirated without a cutting procedure, unlike the vitreous, which requires cutting before it can be aspirated into the opening port of the cutter. The higher cutting rates of the vitreous cutters, even with smaller instruments, can cut the vitreous into smaller pieces to reduce its viscosity toward that of fluids and increase the efficiency of vitreous removal [14][15][16].
The flow dynamics of vitreous cutters have been evaluated with BSS or saline because of the repeatability of the analyses [12,13,23]. It has been reported that the aspiration rate of BSS with the conventional single-blade cutter decreased as the cutting rate increased, due to a decrease in the duty cycle [14][15][16]. In contrast, a flat-tip dual-blade cutter, which cuts the vitreous during both the forward and backward movements of each cycle, increased the duty cycle and doubled the cutting rates [20][21][22]. Another innovation was the oblique angle of the beveled-tip single-blade cutter, which led to a more linear aspiration flow than that of the flat-tip cutters; this improved the aspiration rate of BSS at higher cutting rates and produced a faster aspiration flow by increasing the proximal aspiration flow [25].
We compared the beveled dual-blade and single-blade pneumatic guillotine cutters. The beveled single-blade cutters had significantly lower aspiration rates of BSS at the higher cutting rates, as described [4][5][6]. In contrast, the beveled dual-blade cutters had a constant aspiration rate of BSS from no cutting (aspiration only) to the higher cutting rates (Fig. 2), because the dual-blade cutters have an increased duty cycle of approximately 100%, with either port always open. The aspiration rate of swine vitreous with the dual-blade cutters increased more than that with the single-blade cutters. These results on the flow dynamics of BSS and vitreous with the beveled dual-blade cutters are comparable to those with the flat-tip dual-blade cutters [20,21].
The lateral view showed a faster aspiration flow of BSS with the 25-gauge and 27-gauge beveled dual-blade cutters at the maximal cutting rate of 20,000 cpm than with both single-blade cutters (Fig. 5). These findings for the distal flow were similar to the distal aspirating flow of both cutters without cutting (Fig. 4). We assume that the dual blades, with either port always open, produce a constant flow and enhance the distal aspirating flow produced by the oblique angle of the cutter tip. The aspiration flow of BSS with the 25-gauge and 27-gauge beveled single-blade cutters showed a faster oblique flow from the distal end of the cutter without cutting (Fig. 3c, g) but a faster oblique flow from the proximal end at the maximal cutting rates (Fig. 3d, h). The direction of the flow at the maximal cutting rates was more toward the proximal end with the 27-gauge beveled single-blade cutter than with the 25-gauge beveled single-blade cutter (Fig. 3d vs. h). The proximal aspirating flow may depend on the rapid inward flow when the port of the cutter is fully open, especially in an inner tube with a smaller diameter [25].
The lateral view evaluated the aspiration flow in front of the open port of the vitreous cutter. We also recorded the bottom view to evaluate the aspirating flow from the side and the backside toward the opening port. However, no significant differences were seen between the beveled dual-blade and single-blade cutters (Fig. 6). These findings indicated that the differences in aspirating flow arose mainly from the front side of the port in the vertical direction. The aspiration flow rate and reflux flow without cutting did not differ between the beveled dual-blade and single-blade cutters (Fig. 7), because the inner tube of the cutter did not move, and these flow dynamics depended on the shape of the outer tube, which did not differ apart from possible minor differences in the size of the port of the dual-blade cutters. The flow dynamics during vitreous aspiration included various complicated movements of the vitreous, which were not uniform and involved thick vitreous fibers and uneven hyaluronic acid [27]. We used a slit-beam laser to obtain cross-sectional images of the movement of the fluorescent microspheres. However, the slit-laser beam was scattered by the thick vitreous, and movements in front of and behind the cross-sectional plane were also detected. This is why we used diluted vitreous, to reduce the effect of uneven movements of the vitreous and also to mimic core vitrectomy with infusion of BSS. A faster aspiration flow of diluted vitreous was observed with the 25-gauge and 27-gauge beveled dual-blade cutters than with both single-blade cutters (Fig. 8e). However, the direction of the aspirating flow of vitreous with the 25-gauge beveled dual-blade cutter deviated more to the proximal end than that with the 25-gauge single-blade cutter (Fig. 8f). In contrast, the direction of the aspiration flow of vitreous with the 27-gauge beveled dual-blade cutter deviated more to the distal end than that with the 27-gauge single-blade cutter. The difference in direction between the 25-gauge and 27-gauge beveled single-blade cutters arose because the aspirating flow of BSS of the 25-gauge cutter deviated more to the distal end than that of the 27-gauge cutter (Fig. 4b). The direction of the aspirating flow of BSS with both the 25-gauge and 27-gauge beveled dual-blade cutters deviated to the distal end at the maximal cutting rate (Fig. 5b) because of the oblique angle of the cutter tip. The direction of the aspirating flow of vitreous with the 27-gauge beveled dual-blade cutter deviated more to the distal end than that with the 25-gauge beveled dual-blade cutter (Fig. 8f, a vs. c, g vs. i), and the oblique angle of the cutter tip appeared to be more effective for vitreous cutting with the 27-gauge beveled dual-blade cutter, because the aspirating direction was more stable for BSS (Fig. 4e vs. 5e) than for vitreous (Fig. 8c vs. i). The flat-tip dual-blade cutter has been reported to be more efficient, with a lower hydraulic resistance of human vitreous than the single-blade cutter at higher cutting rates without a reduction of the viscosity [26]. The hydraulic resistance was reduced more with smaller gauge cutters and behaved more like fluid dynamics [26]. The dual-blade cutters have two blades that cut the vitreous during both the forward and backward movements of the inner tube [21][22][23].
The backward movement of the blade of the 25-gauge beveled dual-blade cutter may be dominant for the diluted vitreous at the maximal cutting rate, and the dominant inner blade may differ depending on the thickness of vitreous, cutting rates, aspiration pressures, and diameter of the cutter (Fig. 9). The aspirating flow of the vitreous with a dual-blade vitreous cutter of a smaller gauge may enhance the effect of the oblique angle of the cutter tip to produce stable flow similar to the aspirating flow of BSS.
There are limitations in this study. We used one concentration of diluted vitreous to examine the flow dynamics during the cutting of the vitreous because of the difficulty of flow analysis due to the unpredictable whip-like movements of the vitreous during cutting. Thus, the mechanism of the faster aspirating flow of vitreous needs to be analyzed further. However, we did find differences in the direction of the aspirating flow between the two gauges of vitreous cutter and between dual- and single-blade cutters. We used a vitrectomy machine that drives the vitreous cutters with a Venturi pump and not with a peristaltic pump, which may influence the flow rate [20].
Conclusions
In conclusion, the 25-gauge and 27-gauge beveled-tip dual-blade cutters enable a constant aspirating flow of BSS independent of the cutting rate. This increases the effectiveness of vitreous cutting, especially at higher cutting rates and with smaller gauge cutters. The beveled dual-blade cutters produce a greater volume and velocity of aspirating flow by increasing the distal aspirating flow enhanced by the beveled head of the cutter, especially in the smaller gauge instruments.
Cervical and ocular vestibular evoked myogenic potentials in multiple sclerosis participants
Background: Multiple sclerosis (MS) is a chronic neurological disease that affects the brain and spinal cord. The infratentorial region contains the cerebellum and brainstem. Vestibular evoked myogenic potentials (VEMPs) are short-latency myogenic responses. The cervical vestibular evoked myogenic potential (cVEMP) is a manifestation of the vestibulocolic reflex, and the ocular vestibular evoked myogenic potential (oVEMP) contributes to the linear vestibular–ocular reflex. The aim of this study was to evaluate cVEMP and oVEMP in MS patients with and without infratentorial plaques and compare the findings with normal controls. Methods: In this cross-sectional study, the latency and amplitude of cVEMP and oVEMP were recorded in 15 healthy females with a mean age of 31.13±9.27 years, 17 female MS patients with infratentorial plaque(s) and a mean age of 29.88±8.93 years, and 17 female MS patients without infratentorial plaque(s) and a mean age of 30.58±8.02 years. All patients underwent a complete clinical neurological evaluation and brain MRI scanning. Simple random sampling was used in this study, and data were analyzed using one-way ANOVA in SPSS v22. Results: The latencies of N1, P1, and P13 in MS participants with and without infratentorial plaques were significantly prolonged compared to normal controls (p<0.001). Additionally, the latencies of P13, N23, N1, and P1 in MS patients with infratentorial plaques were significantly prolonged compared to patients without infratentorial plaques (p<0.001). Conclusion: Abnormalities of both cVEMP and oVEMP are more frequent in MS patients with infratentorial plaques than in MS patients without infratentorial plaques. Recording both ocular and cervical VEMPs is an appropriate electrophysiologic method for assessing the function of both the ascending and descending central vestibular pathways.
Introduction
Multiple sclerosis (MS) is a chronic neurological disease that affects the brain and spinal cord. Most people are diagnosed between the ages of 20 and 40. It is estimated that women are affected three times more than men (1).
MS damages the myelin sheath in the white matter of the central nervous system. The tentorial incisure (also known as the tentorial notch or incisura tentorii) refers to an extension of one of the membranes covering the cerebrum which, with the transverse fissure, separates the cerebrum from the cerebellum. The infratentorial region of the brain is the area located below the tentorium cerebelli. The infratentorial region contains the cerebellum and brainstem (2,3).
Evoked potentials play an important role in the diagnosis and follow-up of MS. They can detect subclinical lesions and provide information on the function of different parts of the nervous system (4).
Vestibular evoked myogenic potentials (VEMPs) are short-latency myogenic responses which are evoked by brief pulses of air-conducted (AC) acoustic signals, bone-conducted (BC) vibration, or electrical stimulation. VEMPs are recorded via surface electrodes over muscles such as the sternocleidomastoid (SCM) (5). The cervical vestibular evoked myogenic potential (cVEMP) was first described by Colebatch et al (1992) and has now become an accepted test of vestibular function. The cVEMPs are recorded from the SCM or trapezius muscles (5). It is a manifestation of the vestibulocolic reflex. Auditory tone-burst stimulation leads to activation of the saccular vestibular neurons, which in turn causes a modulation of the tonic muscle activity of the SCM muscle. The measurement of the VEMPs requires tonic contraction of the ipsilateral SCM muscle (6). The cVEMP response has two main peaks, labeled P13 and N23 (7).
In the recent decade, VEMPs have been recorded from the extraocular muscles (ocular VEMP or oVEMP) (8,9). In this test, loud acoustic stimuli produce vestibular-dependent responses in the extraocular muscles. The responses are believed to originate from the otoliths, and they contribute to the linear vestibular-ocular reflex (10). The primary oVEMP projection appears to be crossed, and the initial negative peak occurs at about 10 ms.
Studies have shown that the infratentorial region plays an important role in the transmission, perception, and interpretation of vestibular information, and both the vestibulo-ocular and vestibulocolic pathways pass through this area. Therefore, lesions in this region can significantly impact balance as well as the cVEMP and oVEMP. Since the impact of plaque(s) in the infratentorial region on the cVEMP and oVEMP has not been investigated, the aim of this study was to compare cervical and ocular vestibular evoked myogenic potentials in multiple sclerosis patients with and without infratentorial plaque(s) and to compare the findings with normal controls.
Methods
In this cross-sectional study, cVEMP and oVEMP were performed on 15 healthy female subjects with a mean age of 31.13±9.27 years, 17 female patients with definite MS according to the McDonald criteria 2005 with infratentorial plaque(s) and a mean age of 29.88±8.93 years, and 17 female patients with definite MS without infratentorial plaque(s) and a mean age of 30.58±8.02 years. The McDonald criteria 2005 incorporated magnetic resonance imaging (MRI) into the well-established diagnostic workup that focuses on a detailed neurological history and examination and a variety of paraclinical laboratory examinations such as cerebrospinal fluid analysis (11,12).
All tests of the present study were performed from October to December 2013 at Tehran University of Medical Sciences.
All participants were evaluated with otoscopy and audiometric testing before the study. Subjects with abnormal audiometric results (hearing thresholds worse than 20 dB HL) (13), with a history of seizures, depression, or head trauma, or with neck or eye movement limitation such as strabismus or cervical arthritis were excluded. All patients underwent a complete clinical neurological evaluation and brain MRI scanning within three weeks. The patients were then divided into two groups consisting of patients with and without infratentorial plaque(s).
To record cVEMPs, subjects sat on a comfortable chair and turned their head opposite to the side of the stimulated ear in order to flex the SCM muscle. Contraction of the SCM muscle was monitored with a manometer: subjects pushed with their jaw against a hand-held inflated cuff to generate a cuff pressure of 40 millimeters of mercury (14). After cleaning the skin, an active surface electrode was placed on the upper half of each SCM muscle, with a reference electrode over the upper sternum and a ground electrode over the central forehead (15) (Figure 1). Cervical VEMPs were recorded from the ipsilateral SCM muscle (on the stimulated side) in response to AC 500 Hz short tone bursts delivered via insert earphones at 95 dB nHL.
For recording of oVEMP, subjects were in a sitting position. For each eye, the active recording electrode was placed on the infraorbital ridge 1 cm below the center of the lower eyelid, the reference electrode was positioned about 1-2 cm below the active one, and the ground electrode was placed on the sternum (16) (Fig. 1). During recording, the subjects were instructed to look upward at a small fixed target >2 m from the eyes, with a vertical visual angle of approximately 30-35 degrees above the horizontal plane (17). Contraction of the contralateral extraocular muscle was monitored by the degree of the visual angle (18).
Ocular VEMPs were recorded over the contralateral extraocular muscle (relative to the stimulated side) in response to AC 500 Hz short tone bursts delivered via insert earphones at 95 dB nHL. Both cVEMP and oVEMP were performed using the GN Otometric-ICS CHARTER EP.
Data were analyzed using one-way ANOVA and the paired t-test, because the distribution of the data was normal according to the Kolmogorov-Smirnov test, and p<0.05 was considered statistically significant. SPSS software version 22 was used in this study. The Tehran University of Medical Sciences Ethics Committee approved this study, and informed consent was obtained from each subject.
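As a sketch of the analysis pipeline just described, the normality check and the group comparison can be reproduced with standard tools; the data below are illustrative, not the study's measurements.

```python
# Illustrative sketch of the stated pipeline: Kolmogorov-Smirnov normality
# check per group, then a one-way ANOVA across the three groups.
import numpy as np
from scipy import stats

# Hypothetical P13 latencies (ms) per group:
controls = np.array([13.1, 13.4, 12.9, 13.2, 13.0])
ms_no_it = np.array([14.0, 14.3, 13.8, 14.1, 14.2])  # MS without infratentorial plaques
ms_it    = np.array([15.2, 15.6, 15.1, 15.4, 15.3])  # MS with infratentorial plaques

for name, g in [("controls", controls), ("MS no IT", ms_no_it), ("MS IT", ms_it)]:
    z = (g - g.mean()) / g.std(ddof=1)          # standardize before KS vs N(0,1)
    p = stats.kstest(z, "norm").pvalue
    print(f"{name}: KS p = {round(p, 3)}")

f, p = stats.f_oneway(controls, ms_no_it, ms_it)
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4f}")
```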
Results
The mean latencies and amplitudes of P13, N23, N1 and P1 in all participants are shown in Table 1. Samples of cVEMPs and oVEMPs in the three groups are shown in Fig. 2.
In the control group, cVEMPs and oVEMPs were recorded from both sides of all participants. Analysis showed no significant differences between the mean latency and amplitude of P13, N23, N1 and P1 on the right and left sides (p>0.05). In multiple sclerosis patients without infratentorial plaques, cVEMPs were recorded from both sides of all patients, but oVEMPs were recorded from both sides in 64.7% (n=11) of patients, from one side in 11% (n=2) of patients, and not recorded bilaterally in 23.5% (n=4) of patients. Statistical analysis showed no significant differences between the mean latency and amplitude of P13, N23, N1 and P1 on the right and left sides (p>0.05).
In multiple sclerosis patients with infratentorial plaques, cVEMPs were recorded from both sides in 59.0% (n=10) of patients, from one side in 17% (n=3) of patients, and not recorded in 23.5% (n=4) of patients. Ocular VEMPs were recorded from both sides in 23.5% (n=4) of patients and from one side in 23.5% (n=4) of patients. Statistical analysis showed that the latencies of N1, P1 and P13 in multiple sclerosis patients with and without infratentorial plaques were significantly prolonged compared to normal controls (p<0.001). Also, the latencies of P13, N23, N1 and P1 in MS patients with infratentorial plaques were significantly prolonged compared to patients without infratentorial plaques (p<0.001). There was no significant difference in the mean amplitude of either oVEMP or cVEMP between MS patients with infratentorial plaque(s) and MS patients without infratentorial plaque(s) (p>0.05).
Discussion
The aim of the present study was to evaluate cVEMP & oVEMP in MS patients with and without infratentorial plaques and compare the findings with normal controls.
The latencies of N1, P1 and P13 were significantly longer in MS patients with and without infratentorial plaque(s) compared to control subjects, and the latencies of P13, N23, N1 and P1 in MS patients with infratentorial plaques were significantly prolonged compared to patients without infratentorial plaques. Prolonged latency of VEMPs in MS patients has been reported in previous studies (18)(19)(20)(21). Only the latency of the N23 peak of MS patients without infratentorial plaque(s) was not significantly different compared to normal controls, probably because of the larger normal limits which have been reported in previous studies (21). Prolonged VEMP latencies are not specific for MS and cannot help distinguish MS from other etiologies (18,22,23). Latency delays in VEMPs have been seen in other neurological diseases affecting the brainstem, such as stroke and tumors (24).
Overall, in the present study, 64.25% of MS patients with infratentorial plaque(s) had some form of cVEMP abnormality and 85.66% had some form of oVEMP abnormality. In MS patients without infratentorial plaque(s), abnormalities of cVEMP and oVEMP were seen in 18.26% and 45.28% of patients, respectively. The data showed that ocular VEMPs are often abnormal in patients with infratentorial plaques, because the ocular VEMP pathway passes through the brainstem, which is in the infratentorial zone. The percentage of oVEMP abnormalities was higher than that of cVEMP abnormalities in both MS patient groups in our study. Our results showed that oVEMPs are more sensitive than cVEMPs in MS (18,20).
VEMPs are capable of demonstrating subclinical dysfunction or silent lesions. In MS patients without infratentorial plaque(s), 43.23% of cVEMPs and 61.48% of oVEMPs were abnormal. Clinically silent lesions can explain physiologic changes that are not accompanied by physical signs or symptoms. Perhaps small demyelinating lesions not detected by MRI can produce slow nerve conduction velocity (18,19,21,23). VEMPs can also demonstrate clinical dysfunction. In this study, cVEMP abnormalities were seen in 73.43% and oVEMP abnormalities in 94.11% of MS patients with infratentorial plaque(s). The most common abnormality found in MS patients with infratentorial plaque(s) was an absent response, and the other abnormality was latency prolongation. Demyelination can cause a reduction in conduction speed, resulting in slow nerve conduction velocity; severe demyelinating lesions can cause conduction block or desynchronized conduction (19,21).
There were no statistically significant differences between the mean amplitudes of cVEMP and oVEMP of both MS groups and normal subjects. This VEMP parameter is not a proper diagnostic criterion in MS patients, because other variables such as muscle contraction and stimulus intensity can affect it (18,23,25). Amplitudes of VEMP responses should not be used alone and should be interpreted together with latency values in MS patients (23). The findings of the current study showed that abnormalities of both cVEMP and oVEMP are more frequent in MS patients with infratentorial plaques than in MS patients without them. Recording both ocular and cervical VEMPs is an appropriate electrophysiological method for assessing the function of both the ascending and descending central vestibular pathways in the diagnosis, monitoring of disease progression, and rehabilitation of MS patients.
Developing paediatric eye care teams in India
Authors' reply
Dear Editor, We are thankful to Dr. Gopal [1] for keenly perusing our article. [2] We agree that there exists evidence suggesting that recovery of binocularity is a likely outcome after successful squint surgery. [3] Our study only serves to substantiate that.
Although not stated separately, a simple arithmetic subtraction of the duration of strabismus from the age at presentation would have yielded the age of onset of strabismus. Remarkably, the average was 6.6 ± 10 years, with a median of one year.
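As a minimal sketch of the subtraction described above (the patient values below are hypothetical and serve only to illustrate the calculation):

```python
def age_at_onset(age_at_presentation_years: float, duration_years: float) -> float:
    """Age of strabismus onset = age at presentation - duration of strabismus."""
    return age_at_presentation_years - duration_years

# Hypothetical example: a patient presenting at age 25 with a 24-year
# history of strabismus has an onset age of 1 year (the reported median).
print(age_at_onset(25, 24))  # -> 1
```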
We confess that the word 'adult', in the strictest sense, does not deserve to be part of the title. Although not deliberate, we have used it in the sense of 'visually mature' patients being enrolled in this study. Even though it escaped the review process, the fault, if any, is entirely ours. Interestingly, there exist other articles using the word 'adult' which have included patients with a minimum age of 15 years! [4,5] Notably, 12 of the 15 patients had strabismus onset at ≤6 years, and eight at one year of age. This only justifies questioning whether disruption in the 'critical period' really is all that critical, since many of these patients with early-onset strabismus gave favourable responses.
We agree that our study has largely exotropes and thus is more representative of them, and we have clearly said so. Still, one study in adults shows that 74.2% (23 of 31) of esotropes achieved postoperative fusion compared to 65% (52 of 80) of exotropes. [5] In fact, Kushner's research showed that 81% (60 of 74) of congenital esotropes demonstrated binocularity on Bagolini striated glasses (BSG). [3] Significantly, he reveals that among patients with best corrected visual acuity ≤ 20/200 in the deviated eye, 68% (45 of 66) demonstrated binocularity on BSG. Once again, our preexisting notions, such as congenital esotropes or those with dense amblyopia being less likely to recover binocularity, need to be questioned.
Patients with cataracts and exotropia who recover binocularity postoperatively have little in common with our study.
Our basic message that visually mature patients, the majority with onset of strabismus in the 'critical period', are likely to recover binocularity following surgical realignment, still holds.
Developing paediatric eye care teams in India
Dear Editor, Tackling childhood blindness is a priority for many reasons: the number of blind years a child has to face is much greater than for an adult and, secondly, many causes of childhood blindness are preventable or treatable. [1] Of the approximately 1.4 million children who are blind, about three-quarters live in the poorest countries of Asia and Africa. [2] It is estimated that the prevalence of childhood blindness in India is about 1 per 1000 children. [3] The VISION 2020 - Right to Sight initiative considers control of childhood blindness one of the priority areas.
Examining children needs special skills, and treatment requires specific training, knowledge and equipment. There are few trained paediatric-oriented ophthalmic personnel in India. [4] In a survey of paediatric eye care facilities in India, opportunities to train a paediatric eye care team were noted to be grossly lacking. [4] The paediatric ophthalmology learning and training center (POLTC) project is an initiative by ORBIS International, an international non-governmental organization, in collaboration with tertiary eye care institutes in India toward the development of comprehensive paediatric eye care teams.
The POLTC project aims at the development of a paediatric eye care team comprising six personnel: an ophthalmologist, anaesthetist, optometrist, nurse, counsellor and outreach coordinator, by building their capacity through specific training in the management of eye diseases in children. Hospitals throughout the country designate teams through ORBIS to the respective tertiary eye care institutes. The ophthalmologist gets 12-18 months of training with a focus on the management of squints, paediatric cataract, amblyopia, corneal problems, retinopathy of prematurity, retinoblastoma management and congenital glaucoma. In addition, it also includes the component of community eye care, especially vision screening and awareness initiatives. Targets and indicators are set for the number of children examined, the number of children treated, the number of children undergoing surgery and the number of children receiving spectacles. The project also encompasses capacity building of partner hospitals by providing infrastructure and technical support. Providing various research opportunities and conducting research on topics that have a profound impact on the community is again an integral part of this project. The project also supports regular hospital-based training programs and continuing medical education programs that are important to build the capacity of the POLTC staff. Training of all the personnel is performed in established, internationally acclaimed centers of excellence in India. The teams,
The utility of endobronchial ultrasound‐guided transbronchial mediastinal cryobiopsy (EBUS‐TBMC) for the diagnosis of mediastinal lymphoma
Abstract Endobronchial ultrasound‐guided transbronchial needle aspiration (EBUS‐TBNA) is a revolutionary tool for the diagnosis and staging of mediastinal disorders. Nevertheless, its diagnostic capability is reduced in certain disorders such as lymphoproliferative diseases. EBUS‐guided transbronchial mediastinal cryobiopsy (EBUS‐TBMC) is a novel technique that can provide larger samples with preserved tissue architecture, with an acceptable safety profile. In this case report, we present a middle‐aged gentleman with a huge anterior mediastinal mass and bilateral mediastinal and hilar lymphadenopathy. He underwent EBUS‐TBNA with rapid on‐site evaluation (ROSE) followed by EBUS‐TBMC, all under general anaesthesia. Histopathological analysis showed discordance between EBUS‐TBNA and EBUS‐TBMC in which only TBMC samples provided adequate tissue to attain a diagnosis of primary mediastinal large B‐cell lymphoma. This case report reinforced the diagnostic role of EBUS‐TBMC in the diagnosis of lymphoproliferative diseases.
INTRODUCTION
Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) is a well-established tool for the diagnosis and staging of mediastinal disorders. Nevertheless, the utility of EBUS-TBNA in lymphoproliferative disorders remains restricted due to low diagnostic sensitivity. 1 Ongoing efforts to obtain larger mediastinal samples with preserved tissue architecture led to the idea of performing EBUS-guided transbronchial mediastinal cryobiopsy (EBUS-TBMC). Recent studies have confirmed the safety and feasibility of EBUS-TBMC, highlighting its superiority over EBUS-TBNA in certain pathologies, including lymphoproliferative diseases. 2 Herein, we present a middle-aged gentleman who underwent EBUS-TBNA with rapid on-site evaluation (ROSE) followed by EBUS-TBMC for a huge anterior mediastinal mass. TBMC samples provided adequate tissue to attain a final diagnosis of primary mediastinal large B-cell lymphoma.
CASE REPORT
A 44-year-old gentleman, a non-smoker with no prior medical illness, presented with 2 weeks of worsening shortness of breath, orthopnoea, and paroxysmal nocturnal dyspnoea. He reported no constitutional symptoms or fever. Physical examination on arrival revealed bilateral lower zone lung crepitations with reduced air entry over the right lower zone. Cardiovascular examination was unremarkable, but his liver was palpable at two finger breadths below the costal margin. There was bilateral pitting oedema up to the mid-shin, but no lymph nodes were palpable. Chest radiograph showed a right-sided pleural effusion, while echocardiogram demonstrated a global pericardial effusion with a widest fluid margin of 1.8 cm, leading to right ventricle collapse. Urgent diagnostic and therapeutic thoracentesis was performed, which drained cloudy milky fluid. The fluid was exudative in nature and was biochemically confirmed to be a chylothorax. Other pleural fluid investigations, including bacterial cultures, acid-fast bacilli, cytology, and fungal cultures, were all unremarkable. Concurrently, pericardial fluid obtained from emergency pericardiocentesis revealed multiple lymphocytes but no malignant cells. Computed tomography (CT) of the thorax showed a right pleural effusion with a huge anterior mediastinal mass measuring up to 11.6 cm at its widest margin, with invasion into adjacent vascular structures including the brachiocephalic, subclavian, and internal jugular veins, together with bilateral mediastinal and hilar lymphadenopathies (Figure 1).
Under general anaesthesia, and via laryngeal mask airway, he underwent EBUS-TBNA of the enlarged mediastinal mass at the right lower paratracheal (4R) region, which measured more than 30 mm on EBUS (Pentax EB-1970UK Linear-Array Ultrasound Bronchoscope, Japan). The lesion showed ill-defined margins with areas of central hypodensity. A total of four passes were made using a 22-gauge EBUS needle (EchoTip ProCore Needle, Cook Medical, USA), with ROSE showing only lymphoid cells. A decision was then made by the bronchoscopist to proceed with EBUS-TBMC, as lymphoproliferative disease was likely in this case. A 1.1 mm cryoprobe (ERBE, Medizintechnik, Germany) inserted through the working channel of the EBUS scope was advanced directly into the mediastinal lesion via a tract already created by the earlier EBUS-TBNA attempts (Figure 2A,B). A total of three samples were obtained, with activation times ranging from 3 to 7 s. The procedure was uneventful.
Smears and cell block from EBUS-TBNA showed only mixed inflammatory cells composed of lymphocytes, plasma cells and occasional neutrophils (Figure 3A), while samples obtained from EBUS-TBMC demonstrated tumour tissue containing a sheet of atypical lymphoid cells (Figure 3B). These cells were medium in size and displayed irregular hyperchromatic nuclei, inconspicuous nucleoli, and scanty cytoplasm, with frequent mitoses and apoptotic bodies. Immunohistochemistry was positive for CD20 (diffuse), BCL6 (>30%), MUM1 (>30%), C-Myc (>40%), and BCL2 (>50%) and negative for CKAE1/AE3, CD3, and CD10. The Ki-67 proliferative index was >80%. The overall morphological and immunohistochemical features were diagnostic of high-grade B-cell lymphoma, favouring primary mediastinal large B-cell lymphoma (Figure 4A,D). He was subsequently referred to the haematology team, where chemotherapy (R-EPOCH regimen: rituximab, etoposide, prednisolone, oncovin, cyclophosphamide and hydroxydaunorubicin) was commenced. Repeat CT thorax 6 weeks post treatment showed resolution of the pleural effusion together with a reduction in size of the anterior mediastinal mass (Figure 5).
DISCUSSION
Whilst diagnostic yields are high, at beyond 90%, for lung malignancies, 3 the utility of EBUS-TBNA in lymphoma remains restricted due to a low diagnostic sensitivity of only 65%. 1 This is unsurprising, as cytology or cell block samples provided by TBNA may be insufficient for pathologies with complex morphologies and/or those requiring further immunohistochemical or molecular analyses. The key advantage of EBUS-TBMC lies in its ability to obtain larger mediastinal samples with preserved tissue architecture. 2,4,7-9 A randomized controlled trial conducted in 2021, which enrolled 197 patients, demonstrated that EBUS-TBMC has a better diagnostic yield than EBUS-TBNA (91.8% vs. 79.9%). 10 The superiority of EBUS-TBMC is particularly noticeable in benign mediastinal disorders (80.9% vs. 53.2%) and in rare tumours (91.7% vs. 25%). 10 These study outcomes were subsequently reiterated in another follow-up randomized trial, which involved 271 patients. 11 In this trial, the authors reported that incorporating TBMC into standard TBNA sampling can significantly increase the overall diagnostic yield for mediastinal lesions (93% vs. 81%, risk ratio [RR] 1.15 [95% CI 1.04-1.26]; p = 0.0039). 11 In our case, a final diagnosis of B-cell lymphoma was only possible using samples obtained from TBMC.
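As a check on the reported effect size, the sketch below recomputes the risk ratio and a log-normal (Katz) 95% confidence interval. The per-arm counts are assumptions, not given above: the 271 patients are split roughly evenly, with event counts chosen to match the reported 93% and 81% yields.

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio with a log-normal (Katz) confidence interval."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Assumed counts: 126/135 (93.3%) for TBNA+TBMC vs. 110/136 (80.9%) for TBNA.
print(risk_ratio_ci(126, 135, 110, 136))  # ~ (1.15, 1.05, 1.27)
```

Under these assumed counts the result closely reproduces the published figure of 1.15 [1.04-1.26].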
Existing literature reiterates that EBUS-TBMC is a safe procedure, with reported complication rates, such as self-limiting pneumothorax and pneumomediastinum, of between 0.5% and 1.0%. 2,11 Our patient did not experience any adverse events directly related to TBMC. Nevertheless, some fundamental technical aspects of EBUS-TBMC remain unanswered, ranging from the optimal cryoprobe freezing time to the number of cryo-passes and their impact on specimen size. 2,4 We performed a total of 3 cryo-passes for our patient, with a gradually increasing freezing time from 3 to 5 s and finally 7 s. The selected freezing times follow recommendations for diagnosing interstitial lung disease using transbronchial cryobiopsy (ILD-TBLC). 12 Unlike in EBUS-TBMC, various technical aspects of ILD-TBLC, from freezing time to the number of passes and their correlation with specimen size and quality, have been extensively studied. 13 Moreover, different methods have been described to create a tract for cryoprobe insertion during EBUS-TBMC. 9 We managed to insert the cryoprobe through a tract created by repeated TBNA attempts in our case. Other tools such as a high-frequency needle knife 10,11 and an Nd:YAG laser 14 have been used for tract creation as well.
In conclusion, the existing literature suggests that EBUS-TBMC is an effective and safe technique to acquire sufficient diagnostic tissue samples for mediastinal diseases when conventional EBUS-TBNA is not conclusive. Future studies focusing on patient selection and technical aspects of EBUS-TBMC, such as optimal passes, freezing times and cryoprobe tract creation methods, are needed to further solidify its position in the diagnosis and staging of mediastinal pathologies.
Figure 1. Computed tomography of the thorax showing right pleural effusion (yellow arrow) and a huge anterior mediastinal mass (red dot) with invasion into adjacent vascular structures, including the brachiocephalic, subclavian, and internal jugular veins, as well as bilateral mediastinal and hilar lymphadenopathies.

Figure 2. (A) The position of the cryoprobe (yellow arrow) within the mediastinal lesion was confirmed under EBUS before activation for a duration of 3-7 s. (B) The frozen tissue specimen (yellow arrow) can be seen at the tip of the cryoprobe immediately after the EBUS scope was removed en bloc along with the cryoprobe.

Figure 3. (A) Cell block from EBUS-TBNA showed polymorphous lymphocytes with a background of mixed inflammatory cells. No atypical lymphocytes or malignant cells were seen (200× magnification). (B) In comparison, samples obtained from EBUS-TBMC demonstrated tumour tissue containing atypical lymphoid cells, consistent with primary mediastinal large B-cell lymphoma (200× magnification).
Figure 4. EBUS-TBMC confirmed a diagnosis of primary mediastinal large B-cell lymphoma. (A) Sheets of medium-sized atypical lymphoid cells separated by fibrosis (200× magnification). (B) Further magnification (400×) showed that the nuclei were round with inconspicuous nucleoli and scanty cytoplasm. Mitoses (yellow circle) and apoptotic bodies were frequently seen. (C) CD20 was diffusely positive (200× magnification). (D) The Ki-67 proliferative index was approximately 80% (200× magnification).
Figure 5. Surveillance computed tomography of the thorax 6 weeks post chemotherapy showing resolution of the pleural effusion together with a size reduction of the anterior mediastinal mass.
Minimally-Invasive Surgery of the Mitral Valve: State of the Art
Minimally-invasive mitral valve surgery has been in development over the last thirty years and now allows surgeons to perform mitral and tricuspid interventions, coronary bypass surgery, repair of congenital heart defects and more. Current state-of-the-art technology and clinical knowledge make it possible to offer this approach in expert centers to a growing number of patients, who benefit from its advantages. Minimally-invasive mitral surgery is becoming the best option to repair the mitral valve, and patients are able to recover better and faster than after conventional surgery without compromising the quality of the repair. With the aid of high-definition 3D visualization and specifically designed instruments, including robotic telemanipulation, thoracoscopic and robotic surgery performed this way requires only small incisions in the right chest. In the present chapter we will present the current state of this field, going into detail regarding patient selection and operative techniques, and also reviewing the requirements for building a successful program.
Introduction
Modern minimally-invasive mitral surgery started in the mid-1990s with the arrival of technical improvements in peripheral cannulation for cardiopulmonary bypass, vacuum-assisted drainage and specifically-designed long-shafted instruments, which allowed surgeons to safely perform mitral interventions through small anterolateral thoracotomies using thoracoscopic visualization [1].
Another major step forward in this field was the development of endoaortic balloon occlusion devices that, introduced through the femoral artery and positioned in the ascending aorta, made it possible to occlude the aorta and to administer antegrade cardioplegia to arrest the heart. The safe and effective use of this device requires close coordination between the anesthesiologist, the perfusionist and the surgeon, together with continuous monitoring of its correct position by transesophageal echocardiography to avoid its migration during the case, with the risks of injuring the aortic valve (proximal migration) or occluding the head vessels in the aortic arch (distal migration). There were also risks of embolization of atherosclerotic debris and even of aortic dissection, making the selection of adequate candidates for the procedure of paramount importance. Another strategy that evolved to simplify the operation consists of cross-clamping the aorta using specifically designed transthoracic vascular clamps. These can be rigid or articulated and are introduced either through the working port or through a separate incision in the chest, allowing external occlusion of the aorta as in conventional surgery. This strategy reduces the complexity and cost of the procedure and avoids the risks involved with the use of the endoaortic balloon.
Two more percutaneous tools have been developed to facilitate minimally-invasive operations, both designed to be inserted by the anesthesiologist through the right jugular vein: the "endovent", a suction catheter introduced into the pulmonary artery to reduce blood return from the pulmonary veins into the left atrium during surgery, and the "endoplegia", a catheter placed inside the coronary sinus to administer retrograde cardioplegia throughout the case.
Further refinements and technological developments included specifically-designed instruments, the introduction of high-definition videothoracoscopy and, more recently, 3D visualization. All these advancements facilitate the performance of these complex procedures and help mitigate the learning curve involved. Despite these refinements, thoracoscopic mitral repair has not been widely adopted by the cardiac surgery community, and difficult procedures such as complex repairs, reoperations, combined ablation and multivalvular procedures are mostly performed in a few high-volume centers.
Robotic surgery was developed in the late 1990s, and the first mitral repair and mitral replacement with the da Vinci system (Intuitive Surgical, CA, USA) were performed in Europe by Carpentier [2] and Mohr [3] in 1998 and in the USA by Chitwood [4] in 2000, respectively.
Robotic mitral surgery has slowly expanded worldwide, and it is currently the preferred surgical approach for mitral repair in many programs. The only robotic platform in clinical use for cardiac surgery is the da Vinci system (Intuitive Surgical, CA, USA), which is currently in its 4th generation with the X and Xi systems. These two systems are capable of performing cardiac surgery and provide some advantages over previous models. However, many groups still rely on the previous generation, the Si, which is particularly useful for coronary revascularization since it has dedicated instruments, such as the coronary endo-stabilizer, that are not provided for the newest generation.
Robotic technology allows the surgeon to regain much of the dexterity lost with long-shafted instruments and provides superb high-definition 3D visualization with up to 10x magnification. This technology provides unparalleled precision inside the chest and a range of movement even wider than that of the human hand [5]. Another great advantage of robotic telemanipulation in the field of mitral surgery comes with the use of a third robotic arm equipped with the dynamic atrial retractor. This instrument provides outstanding exposure of the valve that adapts to every patient's particular anatomy and is controlled with the same precision and dexterity as the rest of the surgical instruments. As exposure and visualization of the valve and subvalvular apparatus are better, all repair techniques can be performed robotically, including the most complex ones such as papillary muscle repositioning or the sliding leaflet plasty. Reported results in experienced centers are excellent, both in repair rate and in postoperative complications. Robotic mitral surgery has demonstrated shorter intensive care unit (ICU) and hospital lengths of stay, better postoperative quality of life and better cosmesis compared with conventional surgery [6,7].
In this chapter, we will review in detail all the key points of current state-of-the-art minimally invasive mitral surgery, both thoracoscopic and robotic, from the indications and the evaluation of candidates to a step-by-step walkthrough of both techniques.
Preoperative evaluation
All patients should undergo a high-quality, comprehensive transthoracic or transesophageal echocardiographic evaluation to provide a good understanding of their particular valve disease before the operation. Echocardiography also provides valuable clues to avoid technique-related complications such as systolic anterior motion (SAM) and helps in the selection of the most adequate type and size of annuloplasty prosthesis.
Proper patient selection is crucial to avoid postoperative complications. Minimally-invasive surgery is more difficult in morbidly obese patients and in those with small thoracic cavities. Other anatomical issues that affect port placement or may limit the movements of the robotic arms include scoliosis, pectus excavatum, phrenic nerve palsy and intrathoracic herniation of abdominal viscera. A history of prior thoracic surgery, radiation or thoracic trauma should be ruled out preoperatively and may contraindicate thoracoscopic or robotic surgery [8-10].
The need for peripheral cannulation for cardiopulmonary bypass mandates a thorough evaluation of the vascular system and an adequate size of the femoral vessels. A detailed physical examination and preoperative thoracic, abdominal and pelvic computed tomography angiograms are essential to ensure a safe operation. Evaluation of the venous system for signs of thrombosis or occlusion (interrupted inferior vena cava, presence of caval filters or extrinsic compression) is also achieved with these examinations. Patients with cardiovascular risk factors should undergo a preoperative coronary angiogram (with coronary catheterization or computed tomography angiography) to rule out ischemic heart disease.
Minimally-invasive surgery requires single-lung ventilation at different moments during the operation. Single-lung ventilation causes hypoxia, hypercarbia and increased pulmonary artery pressure. Patients with chronic pulmonary disease or pulmonary hypertension should undergo further testing to determine whether they can tolerate this safely. Single-lung ventilation can be reduced or even avoided at the expense of a longer cardiopulmonary bypass duration, so a tailored risk-benefit evaluation should be performed for these patients [11].
Indications and contraindications
Minimally-invasive mitral surgery follows the same indications that apply to conventional surgery, as described in the current guidelines published by the major European and American scientific societies of cardiac surgery and cardiology [12,13]. Contraindications to a minimally-invasive approach include:
• Severe peripheral vascular disease or aneurysms of the descending thoracic or abdominal aorta.
• Prior right chest surgery.
• Moderate to severe aortic stenosis or regurgitation.
• Severe calcification of the mitral annulus.
• Severe pulmonary dysfunction or pulmonary hypertension.
All the situations described above would make basic steps of a minimally-invasive operation difficult and render it unsafe. However, certain aspects of the technique, such as peripheral cannulation or cardioplegia administration, can be modified to adapt to some of these situations and still allow a minimally-invasive approach to be offered.
Fundamentals of minimally-invasive mitral surgery
Both thoracoscopic and robotic approaches to repairing the mitral valve share a common set of techniques, skills, tools, anesthetic management and mitral repair techniques. These include the use of retrograde peripheral perfusion, similar options for myocardial protection, working with thoracic ports, thoracoscopic visualization instead of direct vision, and the use of specifically designed surgical instruments.
In addition to these intraoperative and more technical aspects, both approaches also share a similar philosophy in terms of patient selection and postoperative management to assure a successful outcome and an improved postoperative recovery, with a faster return to normal activities after surgery.
Mitral repair techniques used in minimally-invasive surgery
We follow the same basic principles of mitral repair regardless of the approach used, conventional or minimally-invasive, as established by Carpentier over 30 years ago [14]. These can be summarized as follows:

1. Preservation or restoration of normal leaflet mobility.

2. Creation of good coaptation between the anterior and posterior leaflets along the entire coaptation line.

3. Remodeling and stabilization of the mitral annulus using an annuloplasty band or ring.
To achieve a successful mitral repair program, it is mandatory to clearly understand the vast array of mitral pathology and to master a large armamentarium of techniques to address every specific case. A comprehensive understanding of mitral pathology is imperative to follow a structured analysis, as proposed by Carpentier, clearly distinguishing between etiology, lesions and dysfunctions. The etiology is the cause of the valvular problem (e.g., fibroelastic deficiency, infective endocarditis, ischemic heart disease), the lesions are the results of the disease (e.g., chordal rupture, leaflet perforation, annular calcification) and the dysfunction is the result of the lesions (e.g., leaflet prolapse, restricted leaflet closure) (Figure 1). This may appear simple, but it is not uncommon to see these three concepts mixed up when evaluating echocardiography reports, discussing with other surgeons and cardiology colleagues, and reading the medical literature. Following this approach to mitral disease, we are able to apply a systematic methodology to describe the problem and solve it.
Most of the techniques described and used in open surgery to repair the mitral valve can also be used successfully in minimally-invasive surgery, both thoracoscopic and robotic, but some adaptations must be implemented to facilitate their use in these approaches.
Triangular and quadrangular resections
Triangular resections are ideal to treat a localized prolapse of the posterior leaflet, when the prolapsing segment involves only one scallop without affecting the complete length of its free edge. This technique is very well suited for thoracoscopic and robotic surgery. The resection is triangular in shape, with its base at the free edge, and extends towards the annulus up to 2/3 to 3/4 of the leaflet height. Ideally, the edges of the resection should be convex towards each other, and care must be taken not to resect too much of the free edge, to avoid producing a "curtain effect" upon re-approximation. Reconstruction of the leaflet can be accomplished with interrupted or running sutures. If the latter is preferred, the surgeon must avoid a purse-string effect that could impair leaflet mobility. We prefer to use a double running technique with 4/0 polypropylene sutures. Some surgeons prefer 5/0 sutures, but we only use them when the leaflets are very thin (Figure 2).
Quadrangular resections are preferred when large portions of the posterior leaflet are prolapsing with a great excess of tissue. In this technique, the resection is extended all the way to the posterior annulus in a rectangular or trapezoidal shape. In order to avoid excessive tension upon reapproximation, the posterior annulus is plicated using interrupted, pledgeted 2/0 braided sutures. This maneuver excludes a portion of the dilated annulus from the final repaired valve, increasing the amount of annular remodeling. In complex cases, where a very large portion of the posterior leaflet must be resected, the "sliding plasty" technique allows reconstruction without the need for excessive annular plication. In this technique, the base of the posterior leaflet on both ends of the resection is detached from the annulus and advanced centrally during reattachment.
Chordal replacement
Neochordal implantation using polytetrafluoroethylene (PTFE) sutures has gained great popularity in the treatment of mitral prolapse, particularly of the anterior leaflet. There are several reasons for the enthusiastic adoption of this technique, particularly in minimally-invasive approaches: • Simplicity: it is a simple way to treat complex problems.
• Availability: neochords can be easily constructed in any number and length to accommodate any particular anatomy.

• Reversibility: in case of a suboptimal result, it is easy to remove the neochord and replace it with another at any moment during the operation. In contrast, once a resection is performed, it cannot be undone.
• Precision: adjusting the length of the chords allows optimization of coaptation; this can be performed during the water test to simulate systolic closure.
In cases of very redundant leaflets, applying more restriction on the posterior leaflet by shortening the neochords can prevent SAM.
• Reproducibility and good long-term durability: using a standardized technique, mitral repair with this approach has proven extremely effective and durable.
Neochords are constructed using 4/0 PTFE sutures that are anchored in the tip of the papillary muscle and the free edge of the prolapsing leaflet (Figure 2). Our technique of choice consists of passing both ends of the suture, with a PTFE pledget, through the tip of the target papillary muscle head. We do not tie the suture down on the papillary muscle, to allow both ends of the suture to automatically adjust their length so that tension is equally distributed between them. Then, both chords are passed twice around the free edge of the prolapsing segment in a "figure-of-eight" fashion, and chordal length is adjusted visually, taking into account the distance to the annular plane, the length of the normal chords on both sides of the prolapsing segment and the excess of leaflet tissue. The valve is then tested using saline to fill the ventricle, and the final length adjustments are made to optimize the depth of coaptation and the position of the closure line. Once the desired result is achieved, the chords are tied on the atrial side of the leaflet to secure the resulting length.
Mitral annuloplasty
As discussed earlier, we implant an annuloplasty ring in nearly all repairs. The choice of ring depends on the etiology and the dysfunction of the patient. We favor flexible bands for cases with mitral prolapse and complete rigid rings for functional regurgitation and whenever significant leaflet tethering is present. Sizing of the ring follows a standardized method, taking into account the intertrigonal distance and, most importantly, the surface area of the anterior leaflet. A small under- or oversizing from this initial measurement can be applied to accommodate the repair according to particular features of the case, such as the perceived risk of SAM, a very significant excess of tissue on the leaflets, or leaflet tethering due to ventricular dilatation.
Annuloplasty rings can be implanted using interrupted or running sutures. For complete rings and bands implanted thoracoscopically or through a median sternotomy, we favor the interrupted suturing technique using 2-0 braided sutures. In robotic cases, we prefer the running suturing technique with monofilament sutures (barbed polybutester or PTFE) instead, due to the significant gain in time and the simplicity it provides while working with the bedside surgeon during the implant. This running suturing technique is facilitated by the greater dexterity of the robotic system; its main drawback is that it may be more difficult to judge the distribution of sutures along the annulus and the band, particularly in cases with a very dilated annulus requiring significant annular remodeling (Figure 2).
Operative technique

Common steps for robotic and thoracoscopic surgery: patient preparation and cardiopulmonary bypass
After induction of anesthesia, the patient is intubated and a transesophageal echocardiography (TEE) probe is positioned. Intubation can be performed either with a double-lumen endotracheal tube or with a single-lumen tube and a bronchial occluder with a balloon. The patient is positioned supine, as close as possible to the right side of the surgical table; the right chest is slightly elevated by placing a blanket roll along the right hemithorax, and the right arm is allowed to fall below the right chest to expose the lateral chest and the axilla. Care must be taken to protect the arm to avoid neural injuries from compression against the structure of the table.
Cardiopulmonary bypass (CPB) is commonly instituted with cannulation of the right common femoral vessels, using a small 3 cm incision to expose both vessels (Figure 3). Dissection is kept to a minimum to avoid damage to the surrounding neural structures and to decrease the risk of seroma formation after surgery. Only the anterior wall of the vessels is exposed, and two 5/0 polypropylene purse-strings are placed in a rectangular fashion following the long axis of both vessels. After heparinization, the femoral vein is cannulated using the Seldinger technique and a guidewire is advanced under TEE guidance into the superior vena cava. The puncture site is then sequentially enlarged using dilators and, finally, a 25F multiperforated venous cannula is introduced under TEE guidance and placed with its tip 3 to 5 cm inside the superior vena cava (SVC). After securing the venous cannula to the skin with stay sutures, arterial cannulation is performed following the same technique, using TEE to verify the position of the guidewire in the descending aorta. The diameter of the vessel is the sole determinant of the size chosen for the arterial cannula, which typically ranges from 15F to 19F. We never oversize the arterial cannula and prefer to use a 6-8 mm Dacron side-graft if in doubt. In rare cases, when we have to use a cannula that completely occludes the lumen (as in obese patients with poor femoral arteries) and the planned procedure is long, we place a distal perfusion line to prevent limb ischemia and a postoperative compartment syndrome. For this purpose, we use a 6F introducer connected to the arterial line through a side port.
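The text gives no numeric sizing rule beyond matching the cannula to the vessel diameter; the sketch below is only an illustrative selection heuristic, using the standard French-gauge conversion (1 Fr = 1/3 mm of outer diameter) and a hypothetical 80% fill margin.

```python
def pick_arterial_cannula(vessel_id_mm, sizes_fr=(15, 17, 19), margin=0.8):
    """Largest cannula whose outer diameter (Fr / 3 mm) stays below a
    chosen fraction of the vessel's inner diameter; None if nothing fits."""
    fitting = [fr for fr in sizes_fr if fr / 3.0 <= margin * vessel_id_mm]
    return max(fitting) if fitting else None

# Hypothetical 7.5 mm femoral artery: a 19F cannula (6.3 mm) exceeds the
# margin, so a 17F cannula (5.7 mm) is selected instead.
print(pick_arterial_cannula(7.5))  # -> 17
```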
Using this technique, we do not need routine cannulation of the jugular vein for mitral surgery, but it is crucial to always use vacuum-assisted drainage in the venous line and to make sure the venous cannula is correctly positioned inside the superior vena cava to avoid its occlusion by the Chitwood clamp when the atrial retractor is placed in the left atrium and pulled anteriorly to expose the valve.
Port placement and initial steps
Under left-lung ventilation, the working port is created in the 4th intercostal space via a 3-4 cm incision around the level of the anterior axillary line, and a soft-tissue retractor (Alexis XS, Applied Medical, CA) is placed to prevent inadvertently introducing fatty tissue or debris into the cardiac chambers during the introduction of surgical instruments, sutures or valvular prostheses. Then, a 5 mm trocar is placed in the third intercostal space along the anterior axillary line and CO2 is continuously insufflated (4 L/min) throughout the entire operation, to create an intrathoracic CO2 environment that reduces the risk of air embolism after releasing the aortic clamp. This trocar is used to introduce a 5 mm 30-degree high-definition videothoracoscopic camera to guide the procedure; the camera is held by an articulated arm placed on the right side of the head piece of the surgical table (Figure 4). An 11 mm trocar is placed in the 6th intercostal space at the mid-axillary line. We use this second trocar to exteriorize retraction sutures (placed in the diaphragm and in the lower portion of the pericardium) and to introduce the left atrial vent line. After the procedure is completed, a 28F chest tube will be passed through this trocar and left in the pleural space. If diaphragmatic retraction is needed (>75% of cases), a pledgeted 2/0 PTFE suture is placed and tied in the central tendon, taking great care not to damage the liver during this maneuver. The traction suture is
exteriorized through the 11 mm trocar and tension is applied to improve exposure. Then, the pericardium is completely opened along its lateral aspect with a longitudinal incision performed at least 3 cm anterior to the phrenic nerve. Two PTFE stay sutures are placed near the cranial and caudal ends of the posterior edge of the pericardium and exteriorized using the 11 mm trocar for the caudal suture and, for the cranial one, a transthoracic puncture just cranial and posterior to the working port, in the mid-axillary line.
After opening the pericardium and placing the stay sutures in the diaphragm and the pericardium, the aortic cross-clamp is inserted through a 5 mm incision in the 3rd intercostal space at the mid-axillary line (Figure 4). A curved Chitwood clamp is advanced inside the pericardium and placed across the ascending aorta, with its lower jaw placed inside the transverse sinus under thoracoscopic vision, using blunt instruments (typically a thoracoscopic suction cannula) to push the aorta anteriorly. Once the aortic clamp is in place, a pledgeted purse-string is placed in the ascending aorta and a long cardioplegic needle is inserted through the working port under thoracoscopic vision. At this point, cardiopulmonary bypass is initiated and, upon reaching full systemic flow, the ventilator is disconnected and the aorta is cross-clamped.
Our technique for myocardial protection consists of antegrade administration of cold blood cardioplegia (Buckberg's technique) after cross-clamping the aorta, repeated every 20 minutes thereafter. During cardioplegia infusion, the left atrium is opened just below the interatrial septum and the left atrial vent is introduced and placed in the left pulmonary veins to maintain a bloodless surgical field. The size of the left atrial retractor blade is selected at this moment, and the stem of the retractor is placed under thoracoscopic control, typically through the 5th intercostal space along the midclavicular line. The retractor is then assembled inside the thorax and placed inside the left atrium. Retraction is then applied to achieve adequate exposure of the mitral valve. The atrial retractor is held in place using a second articulated arm placed on the left side of the table.
Completion of the operation
After completing the mitral repair or replacement, the left atriotomy is closed. Our preference is to use monofilament sutures (barbed polybutester or PTFE), with two sutures secured at both ends of the atriotomy and sutured towards the center. While finishing this step, the left atrial vent line is removed so blood is allowed to start filling the left cardiac chambers to begin de-airing. After closing the atriotomy, suction is applied to the root vent line and both lungs are manually inflated to remove as much air as possible from the heart and pulmonary veins before releasing the aortic cross-clamp. To further assist this process, the venous line can be intermittently clamped to allow blood to fill the right heart and the lungs and push air out of the heart. After completing de-airing, the aortic clamp is released and the heart is reperfused. We do not place pacing wires routinely after isolated mitral repair in patients without previous rhythm abnormalities; if they are required, they can be placed on the right atrial wall and/or the right ventricular wall. After normal rhythm is restored, and preliminary echocardiographic evaluation of valvular and ventricular function is satisfactory, the patient is weaned from cardiopulmonary bypass in the usual fashion. Our preference is to remove the aortic root vent after echocardiographic evaluation of the result is completed, under a short period of full-flow cardiopulmonary bypass, to facilitate the removal of the cannula and the repair of the entry site with low aortic pressure and reduced pulsatility. After this is completed satisfactorily, the patient is weaned from cardiopulmonary bypass and decannulated, and protamine is given. All ports are removed under thoracoscopic control to avoid any port-site bleeding, and pericardial (19F Blake drain) and pleural (28F curved chest tube) drainage tubes are implanted using the port incisions. The pericardium is loosely approximated with two or three interrupted sutures to lower the risk of cardiac herniation and to facilitate reoperation, should future procedures be needed.
After the cannulas are removed from the femoral vessels, the purse-strings are tied and both cannulation sites are reinforced with 5/0 polypropylene sutures. After hemostasis is accomplished, all incisions are closed in the usual fashion and the patient is usually extubated in the operating room immediately after the operation is completed.
Robotic surgery
Most of the preparations and initial steps of the operation are the same as described in detail for the endoscopic procedure. In short, the patient is positioned supine with mild right chest elevation, on-demand single-lung mechanical ventilation is prepared, and cardiopulmonary bypass is established using the right femoral vessels.
The placement of the trocars for the robotic arms starts with the introduction of an 8 mm trocar in the 4th intercostal space between the midclavicular and the anterior axillary lines using blunt dissection; this will serve as the camera port. After insertion, the camera port is connected to a CO2 line to create a controlled pneumothorax using a pressure limit of 7-10 mmHg. The scope is inserted in the trocar and the right pleural space inspected for adhesions. Under thoracoscopic visualization, three angiocatheters are inserted at the 4th, 6th and 7th intercostal spaces in the posterior axillary line; these will be used to exteriorize the traction sutures placed on the diaphragm and the pericardium. It is important to keep these catheters occluded to avoid losing CO2 during this phase. Then, the right arm trocar is inserted in the 6th intercostal space, approximately 2 cm posterior to the camera port. Thereafter, the left arm trocar is inserted in the 2nd or 3rd intercostal space at the same level as the camera port. Finally, the trocar for the atrial retractor is inserted in the 6th intercostal space just medial to the midclavicular line, avoiding injury to the internal thoracic artery. All trocars are inserted under direct vision using the scope. Once all trocars are in place, a 2-3 cm thoracotomy is performed around 3 cm lateral to the camera port, in the same intercostal space, to serve as the working port. After achieving proper hemostasis, a soft-tissue retractor is applied, leaving a suction catheter outside that will be used as a left atrial vent during the operation (Figure 5).
Once all thoracic steps are completed, heparin can be administered and the femoral vessels are cannulated as described earlier in detail. Once both cannulas are secured and connected to the CPB circuit, the da Vinci system can be docked to the patient. It is extremely important to connect the trocars in the direction they will move, to avoid potential conflicts between the robotic arms. The camera port is connected to arm number 2 and, under endoscopic visualization, instruments are placed in arms 1 and 4 and introduced inside the thorax under the console surgeon's control (Figure 6). At this stage, CPB can be started, although in some patients with excellent anatomy the pericardium can be opened and the diaphragm retracted before starting CPB. For diaphragmatic and pericardial retraction, we normally use 2/0 PTFE sutures, as described for the thoracoscopic procedure. These sutures exit the chest through the angiocatheters placed at the beginning of the operation.
The atrial retractor is inserted under direct vision on robotic arm number 3 and used to separate the aorta from the SVC to expose the transverse sinus. Placement of the transthoracic aortic clamp and cardioplegia administration are performed as described for the endoscopic procedure. The left atrium is incised to expose the mitral valve using the dynamic left atrial retractor (Figure 2).
Once the mitral repair or replacement is completed, the left atrium is closed using two running sutures from both ends of the incision, as described for the endoscopic procedure. After proper de-airing, the cross-clamp is removed and the patient is weaned from CPB as described earlier. At this point, the 3rd arm is removed and its trocar is used to insert a 19F Blake drain inside the pericardium, which is approximated with two sutures. The remaining trocars are removed and checked for bleeding using the camera, and a 24F curved chest tube is inserted in the pleural space using the right arm trocar incision.
Results
The primary goal of minimally-invasive mitral repair is, as in open surgery, to achieve a successful and durable valve repair. Despite its perceived technical complexity, experienced centers have reported excellent repair results, with 95% freedom from mitral regurgitation at 5 years, even in complex mitral pathology [15-18].
In addition to a durable result, minimally-invasive mitral surgery aims for a faster recovery and higher patient satisfaction, as incisions are smaller. Recent series have reported higher quality of life and earlier return to work with minimally-invasive surgery [16]. Furthermore, several publications have shown that, despite longer CPB and aortic cross-clamp times, minimally-invasive surgery is associated with a very low complication rate, reduced blood loss and shorter ICU and in-hospital lengths of stay [17,19].
Starting a new program
The most successful centers in minimally-invasive mitral surgery are usually high-volume referral centers with vast experience in mitral repair before starting their programs. Minimally-invasive surgery, and even more so robotic surgery, requires developing new skills such as peripheral cannulation, intra-aortic occlusion and the use of thoracoscopic vision and long-shafted instruments. It is not only the surgeon, but the whole operating team (anesthesiologists, perfusionists, operating room nurses, etc.) that has to learn to use new tools and adapt to new surgical techniques [20].
Based on our own experience, there is an increase in complexity for the whole team when transitioning from open to thoracoscopic surgery, which is even steeper when going further from there to robotic surgery. For this reason, we consider it mandatory for the whole surgical team to undergo specific, in-depth training in thoracoscopic and robotic cardiac surgery as a basic first step before starting a successful program, either endoscopic or robotic. Training must include the acquisition of the required theoretical knowledge and the progressive achievement of the technical skills required. For the latter, the first step is practicing on simulators and dry-lab training. After this step is accomplished, the team can move to higher-fidelity wet labs and training in live large-animal and/or human cadaveric models.
After this basic training, the members of the surgical team should undertake repeated case observation in experienced, large-volume institutions and, during the first phases of their experience, have the aid of expert proctors who can provide the support required to ensure safety for the patient and a profitable learning experience for the team [21].
The best way to establish a successful program is to avoid overlapping learning curves. Results are better during the initial phases if a robotic program is started by teams with wide experience in mitral repair and there is a strict patient selection protocol. As programs grow in experience, there is a tendency to extend the indication to more complex patients. Surgeons should pay close attention to their results in order to detect any changes in the appearance of complications while extending their inclusion criteria, and adjust them accordingly [22,23].
Latest developments and percutaneous devices
Following the expansion of transcatheter devices to treat aortic stenosis, several devices have appeared to repair or replace the mitral valve. The most widespread one in the repair group is the MitraClip (Abbott Laboratories, IL, USA), which resembles an edge-to-edge repair. It requires venous access and a transseptal puncture. To date, more than 100,000 implants have been performed, mostly in selected patients with secondary mitral regurgitation [24]. More recently, another device using the same concept has been introduced, the Pascal repair system (Edwards Lifesciences, CA, USA) [25].
New transapical devices aim to reproduce mitral repair using neochords, such as the Harpoon (Edwards Lifesciences, CA, USA) [26] and the NeoChord (NeoChord Inc., MN, USA) [27] devices, which allow the transapical insertion of neochords on the beating heart under TEE guidance. Both of them require a left mini-thoracotomy and can be used without CPB.
The Cardioband (Edwards Lifesciences, CA, USA) percutaneously delivers an annuloplasty band using transseptal access. It is anchored with several screws into the mitral annulus and then cinches the valve under TEE guidance.
In the field of percutaneous mitral replacement, the Tendyne bioprosthetic mitral valve system (Abbott Medical Inc., IL, USA) has accumulated the largest clinical experience. It is a fully retrievable and repositionable device that is implanted using an off-pump transapical approach.
These and other developments [28,29] will undoubtedly continue in the future and will reshape the landscape of options to treat the mitral valve for the benefit of patients.
Figure 1. A shows a two-chamber view of the preoperative transesophageal echocardiography where a flail anterior leaflet due to chordal rupture can be seen. B shows an intraoperative view of the anterior mitral leaflet of the same patient during robotic mitral repair, confirming the preoperative echocardiographic findings.
Figure 2. Different steps of a minimally-invasive mitral repair. A shows the placement of the transthoracic aortic clamp in the ascending aorta through the transverse sinus with the aid of three robotic arms. B represents the initial surgical valve analysis, showing a posterior leaflet prolapse of its central scallop. C shows the surgical exposure of the posteromedial papillary muscle during the implantation of a PTFE neochord. D shows the final aspect of a repaired valve after the annuloplasty has been performed using a flexible band implanted with continuous sutures.
Figure 3. The right common femoral artery and vein cannulated for cardiopulmonary bypass.
Figure 4. The final set-up of an endoscopic surgery. A shows the surgical field with all the ports during a mitral repair through a periareolar incision; the surgeon is using two long-shafted instruments through the working port while the assistant moves the left atrial retractor to improve exposure. B shows the thoracoscopic camera held in position by an articulated arm. C shows how the transthoracic clamp is inserted below the camera trocar.
Figure 5. Patient set-up for a robotic mitral repair. Markings on the chest reveal the surgical landmarks used for port placement and the anticipated location of the four arms of the robot. The location of the working port (WP) is also shown, and the mid-axillary line (MAL) is drawn to help in the placement of the retraction sutures for the diaphragm and the pericardium.
Figure 6. Final view of the operating room during a robotic mitral repair. A shows the operative field with all four robotic arms connected to the trocars. B shows the working port and the bedside surgeon preparing to place the transthoracic clamp. C is a general view of the robotic operating room, showing the location of the surgical assistants, the perfusion team, the anesthesia team and the three components of the robotic system: the patient cart (the robot itself), the vision cart and the surgeon console at the back of the room.
Z-score neurofeedback, heart rate variability biofeedback, and brain coaching for older adults with memory concerns
Background: The three-month, multi-domain Memory Boot Camp program incorporates z-score neurofeedback (NFB), heart rate variability (HRV) biofeedback, and one-on-one coaching to teach memory skills and encourage behavior change in diet, sleep, physical fitness, and stress reduction. Objective: This prospective trial evaluates the Memory Boot Camp program for adults ages 55 to 85 with symptoms of Mild Cognitive Impairment (MCI) and subjective memory complaints. Methods: Participants were evaluated via the Montreal Cognitive Assessment (MoCA), NeuroTrax Global Cognitive Score, measures of anxiety, depression, sleep, quality of life, quantitative electroencephalography (QEEG), and HRV parameters at four timepoints: baseline, pre-program, post-program, and follow-up. The trial included a three-month waiting period between baseline and pre-program, such that each participant acted as their own control, and follow-up took place six months after completion of the program. Results: Participants’ MoCA scores and self-reported measures of anxiety, depression, sleep quality, and quality of life improved after treatment, and these changes were maintained at follow-up. Physiological changes in HRV parameters after treatment were not significant, however, breathing rate and QEEG parameters were improved at post-program and maintained at follow-up. Finally, participants’ improvement in MoCA score over the treatment period was correlated with their improvement in two brain oscillation parameters targeted by the z-score NFB protocol: relative power of delta and relative power of theta. Conclusions: Trial results suggest that the Memory Boot Camp program is a promising treatment strategy for older adults with symptoms of MCI and subjective memory complaints.
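The abstract's key physiological correlate is relative band power from QEEG. As a minimal sketch of that quantity (relative power = band power divided by broadband power, computed here from a Welch periodogram; the band limits, broadband range, and sampling rate are conventional assumptions, not taken from the study):

```python
import numpy as np
from scipy.signal import welch

def relative_band_power(eeg, fs, band, total=(1.0, 30.0)):
    """Relative power of a band = band power / broadband power."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    def band_power(lo, hi):
        mask = (freqs >= lo) & (freqs < hi)
        return np.trapz(psd[mask], freqs[mask])  # integrate the PSD
    return band_power(*band) / band_power(*total)

fs = 256                              # assumed sampling rate (Hz)
eeg = np.random.randn(60 * fs)        # stand-in for a 1-minute recording
print(relative_band_power(eeg, fs, band=(1, 4)))  # delta (1-4 Hz)
print(relative_band_power(eeg, fs, band=(4, 8)))  # theta (4-8 Hz)
```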
Introduction
Most individuals experience a decline in cognition and memory during older adulthood (Petersen, 2011). For the majority, this decline is relatively minor and does not prevent them from carrying out normal functions. However, approximately 10-20% of adults over age 65 meet criteria for Mild Cognitive Impairment (MCI). A diagnosis of MCI generally requires objective evidence of cognitive decline, concern about that decline (expressed by the individual or a knowledgeable informant), and the ability to maintain functional independence (often by performing compensatory strategies to offset the decline) (American Psychiatric Association, 2013; Petersen et al., 2014). The amnestic MCI subtype involves memory decline, which might or might not be accompanied by decline in other cognitive domains (Petersen, 2011; Petersen et al., 2014). MCI is a heterogeneous disorder; individuals who have received this diagnosis may have neuropathology consistent with many different diseases, including Alzheimer's disease, other dementia subtypes (or a combination of subtypes), cerebrovascular disease, or others (Abner et al., 2017).
Subjective Cognitive Decline (SCD), an earlier stage of cognitive impairment, has recently been recognized (Buckley et al., 2015). SCD is characterized by concern about memory loss and other cognitive concerns that may or may not be associated with objective changes in cognition. Older adults with SCD have twice the risk of developing dementia than do those without SCD (Mitchell et al., 2014), and they have reduced quality of life (Pusswald et al., 2015).
Promising non-pharmaceutical treatment strategies: Addressing modifiable risk factors
There are no medications currently approved by the United States Food and Drug Administration to treat MCI or to prevent dementia. However, several health and lifestyle risk factors associated with age-related cognitive decline, MCI, or dementia are considered modifiable, including lack of physical activity, lack of cognitive stimulation, and a diet high in refined sugars and saturated fats and low in whole grains, fiber, fruits, and vegetables (Licher et al., 2019;Livingston et al., 2017;Lourida et al., 2019;Wahl et al., 2019).
Further, it is known that exposure to chronic stress and elevated levels of the stress hormone, cortisol, lead to cognitive impairment and increased risk for dementia (Lupien et al., 1999; Ouanes & Popp, 2019). Patients with MCI (Nicolini et al., 2014) and dementia (da Silva et al., 2018) are more likely to show autonomic dysfunction and alterations in the stress response. A number of chronic stress-related mental health conditions have been linked to hippocampal atrophy, cognitive decline, and/or increased risk of dementia, including post-traumatic stress (Jatzko et al., 2006), major depressive disorder (Sheline et al., 2003), anxiety (Gimson et al., 2018), and insomnia (Neylan et al., 2010; Noh et al., 2012). Sleep deprivation disrupts emotions and cognition in multiple ways, including causing memory problems (Krause et al., 2017). Insufficient sleep may also increase the risk of Alzheimer's disease. It has been theorized that sleep deprivation leads to increased levels of amyloid-β in the brain and accelerates amyloid-β deposition into the amyloid plaques characteristic of Alzheimer's disease (Ju et al., 2014). Depression is likewise tightly linked with memory and cognitive deficits, and depressed individuals are likely to experience impairments in working memory, executive function, and processing speed domains (LeMoult & Gotlib, 2019). Older adults with MCI, compared to those in the general population, are at increased risk for depression and anxiety (Mirza et al., 2017), and a recent meta-analysis found a depression prevalence of 34% among MCI patients with the amnestic subtype (Ismail et al., 2017).
There is evidence that intervention to reduce some of these risk factors can delay or decrease the risk of age-related cognitive impairment or MCI (McEvoy et al., 2019; Ngandu et al., 2015; Petersen et al., 2014; Williamson et al., 2019). Physical activity is associated with protection from cognitive decline and dementia, and high-intensity exercise is associated with even greater protection (Livingston et al., 2017). Six months of aerobic exercise training has been shown to increase brain volume in older adults (Colcombe et al., 2006), and exercise in combination with cognitive training has been shown to improve cognitive function in older adults (with or without dementia) (Karssemeijer et al., 2017; Law et al., 2014). The Mediterranean diet is associated with longer life (Trichopoulou et al., 2003), reduced cardiovascular disease risk (Grosso et al., 2017), reduced Alzheimer's disease risk, and a lower rate of conversion from MCI to Alzheimer's disease (Scarmeas et al., 2009; Scarmeas, Stern, Tang, et al., 2006). Even diet changes made late in life may provide benefit. Older adults at high risk for cardiovascular disease who were placed on the Mediterranean Diet for four years experienced improvements in cognitive function, while those in the control group experienced cognitive decline (Valls-Pedret et al., 2015). Further, epidemiological studies suggest that the status of specific vitamins and other nutrients is associated with reduced risk of cognitive impairment, MCI, and/or dementia (Mohajeri et al., 2015; Smith & Refsum, 2016). These include (but are not limited to) vitamin D (Annweiler et al., 2013; Annweiler et al., 2012; Buell & Dawson-Hughes, 2008), the vitamin B family (Cooper et al., 2015; Douaud et al., 2013; Mohajeri et al., 2015; Smith & Refsum, 2016), and omega-3 fatty acids (Bourre, 2006; Cole et al., 2009; Su, 2010; Yurko-Mauro et al., 2010).
Cognitive stimulation and learning new skills are promising strategies as well. Completing high school during early life is associated with a reduction in dementia risk in old age; for older adults, continuing education, learning complex new skills, and engaging in cognitively stimulating activities are likely to produce cognitive improvements, even if begun late in life (Livingston et al., 2017). Older adults who learned how to use an iPad or digital photography software experienced improvements in memory (Chan et al., 2014;Park et al., 2014). Certain computer brain games have been shown to enhance cognitive function in older adults (Anguera et al., 2013), and a recent Cochrane review of computerized cognitive training in healthy older adults suggests small, but measurable, improvements in cognitive function and memory at the end of training (Gates et al., 2020). Further, there is evidence that older adults with working memory concerns are more likely than other groups to benefit from computerized training (Diamond & Ling, 2019). Computerized cognitive training is known to improve skills that are very similar to those trained, although the extent to which it improves performance on untrained tasks or extends to other domains of life is currently under debate in the field (Simons et al., 2016;Smid et al., 2020). Improvements in cognitive skills after noncomputerized cognitive training have been shown to transfer to long-lasting improvements in daily life (Tennstedt & Unverzagt, 2014) and may be especially effective when training is performed with an enthusiastic coach (Diamond & Ling, 2019).
In addition to the more extensively researched modifiable risk factors described above, there is emerging evidence to suggest that managing stress and improving mental health may likewise contribute to improved cognitive health in older adults (Fotuhi et al., 2016;Young et al., 2017). The severity of memory complaints is associated with symptoms of depression (Ponds et al., 1997), anxiety (Rabin et al., 2017), and reduced quality of life (Mol et al., 2007). The current study focuses on three techniques in particular to improve stress management and mental health: neurofeedback (NFB), heart rate variability (HRV) biofeedback, and mindfulness meditation. These techniques are also associated with improvements in cognition (discussed below).
Z-score NFB is a more recent protocol in which multiple QEEG parameters are typically trained simultaneously; only those parameters for which the participant deviates from the average values in an age-matched normative database are trained (Collura et al., 2010; Thatcher & Lubar, 2009). The z-score for a particular QEEG parameter represents the number of standard deviations from the age-normed mean, and participants are trained to approach z = 0. Although few high-quality studies currently exist for the use of z-score NFB (R. Coben et al., 2019), a number of published studies show promising results of z-score NFB for a variety of conditions (Thatcher et al., 2020), including ADHD (Groeneveld et al., 2019), pain perception (Prinsloo et al., 2018), and cognitive dysfunction (Koberda, 2014).
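To make the z-score idea concrete, the minimal sketch below converts one measured QEEG value into a z-score against normative statistics. The numbers and the function name are hypothetical placeholders for illustration, not values from any actual normative database:

```python
import numpy as np

def qeeg_z_score(observed, norm_mean, norm_sd):
    """Number of standard deviations an observed QEEG parameter lies
    from the age-matched normative mean (z = 0 is the normative mean)."""
    return (observed - norm_mean) / norm_sd

# Hypothetical example: relative theta power (%) at one electrode site
z = qeeg_z_score(observed=22.4, norm_mean=15.0, norm_sd=4.0)
print(f"z = {z:.2f}")  # z = 1.85 -> training rewards movement toward z = 0
```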
HRV refers to the variability in timing between heart beats (De Jong & Randall, 2005), and it is governed by both the sympathetic and parasympathetic nervous systems. Low HRV is associated with reduced ability to adapt to stressful environments and increased incidence of disease and stress-related illness, whereas high HRV is associated with increased resilience, adaptability, and autonomic flexibility (Baevsky et al., 2007;Baevsky & Chernikova, 2017;Lehrer & Gevirtz, 2014;Shaffer & Ginsberg, 2017;Shaffer et al., 2014;Thayer & Friedman, 2002). Further, high HRV is associated with greater cognitive performance, especially for executive control, in both young and older adults (de Oliveira Matos et al., 2020;Forte et al., 2019). HRV biofeedback trains individuals to control their respiration by producing a smooth, diaphragmatic breath at approximately six breaths per minute, thereby maximizing the amplitude of their respiratory sinus arrhythmia (Lehrer & Gevirtz, 2014). As an adjunct treatment to traditional therapies, HRV biofeedback has been used to treat individuals with depression, stress, anxiety, and other conditions (Gevirtz, 2013). A recent study has provided evidence that HRV biofeedback can directly improve measures of attention, while also ameliorating symptoms of depression and anxiety in an older adult population (Jester et al., 2019).
Approximately 8% of Americans practice some form of meditation. Expert meditators perform better on cognitive tasks (including attention and executive function) and may also have increased cognitive processing speed (Clarke et al., 2015). Meditation may result in physical changes in the brain; MRIs of expert meditators show greater gray matter in regions involved in attention and memory (Christie et al., 2017;Tang et al., 2015). Mindfulness meditation has been shown to reduce stress (Tang et al., 2015) and improve executive function (Diamond & Ling, 2019). Mindfulness meditation training in individuals with MCI resulted in enhanced brain connectivity (Wells et al., 2013). Further, a mindfulness meditation intervention for older adults with MCI demonstrated improved cognitive function, and those who meditated the longest experienced the largest improvements (Wong et al., 2017).
Benefits of combining multiple strategies to prevent cognitive decline
Several programs that combine multiple strategies for older adults have demonstrated improvements in cognition (Barnes et al., 2013;Bredesen, 2014;Fotuhi et al., 2016). Studies have also suggested that combining multiple strategies is more effective than a single strategy for cognitive decline/MCI (Sherman et al., 2017). For instance, cognitive stimulation is likely to be more effective when combined with exercise (Karssemeijer et al., 2017;Law et al., 2014). The Finnish Geriatric Intervention Study to Prevent Cognitive Impairment and Disability (FINGER) trial, which combined cognitive stimulation, exercise, diet changes, and management of vascular problems in older adults with dementia risk factors, demonstrated an 80% improvement in executive function and a 150% increase in cognitive speed (Ngandu et al., 2015;Rosenberg et al., 2018).
Despite the promising data for interventions to modify risk factors, making long-lasting behavior change is difficult. For instance, as of 2006, only 10% of adults in the United States exercised for at least the minimum amount of time each week recommended by the US Department of Health and Human Services (Office of Disease Prevention and Health Promotion, 2017; Tucker et al., 2011). A comprehensive health program that includes an accountability partner/coach improves participants' motivation, adherence to the program, and physiological and psychological well-being (Kivelä et al., 2014;Kreitzer et al., 2008;Prince et al., 2017). Including coaching combined with additional strategies is likely to improve adherence to, and therefore any effectiveness of, the intervention program.
Intervention and objectives
Our center has developed a multi-domain Memory Boot Camp program to incorporate the factors described above that are known or believed to protect against cognitive decline, based on a similar published program (Fotuhi et al., 2016). Z-score NFB and HRV biofeedback are combined with in-person meetings with 'brain coaches', who encourage participants to improve diet and add supplements, practice memory training, exercise, get quality sleep, make time for relaxation and mindfulness meditation, learn new things that are cognitively challenging, and increase social interaction. The present study is a prospective trial to evaluate the three-month Memory Boot Camp program for older adults (age 55 to 85) with subjective and objective memory deficits. Participants were evaluated via neurocognitive assessments, self-report questionnaires, quantitative electroencephalography parameters, and HRV parameters at four timepoints: baseline, pre-program, post-program, and follow-up. The trial included a three-month waiting period between baseline and pre-program, such that each participant acted as their own control, and follow-up took place six months after completion of the program. The primary objective of the trial was to determine whether the three-month Memory Boot Camp program improved cognitive function when compared to changes that occurred over the equivalent control period. Secondary objectives included testing for changes in stress-related measures of anxiety, depression, sleep, and quality of life, as well as heart rate, breathing rate, HRV, and QEEG variables. A six-month post-program follow-up assessment was included to determine the longevity of any changes observed upon program completion.
Study design
This study (ClinicalTrials.gov ID: NCT04426162) was performed by a private center that provides EEG-NFB and HRV biofeedback therapy in Michigan and Florida, USA. This prospective, multi-site trial took place in five locations in the cities of Grandville, Grand Rapids, and Holland, Michigan, and Palm Beach Gardens and Boca Raton, Florida. Seventy-six participants were enrolled, and 44 completed the specified intervention (Fig. 1). The study design was repeated measures with a three-month waiting period between the initial baseline assessment and the start of the Memory Boot Camp program, such that each participant served as their own control. Nineteen enrolled individuals withdrew from the study before or during this waiting period. Assessments took place at baseline (t = -3 months), pre-program (t = 0 months), post-program (t = 3 months), and 6-month post-program follow-up (t = 9 months). Participants paid nothing to take part in the program; after completion of each assessment, participants were given a gift card of value between $50 and $100, depending on which assessment milestone was completed. Licensed clinical social workers oversaw the testing and program, and brain coaches met with subjects during the Memory Boot Camp intervention. This study was approved by the New England Independent Review Board.
Selection of participants: Inclusion and exclusion criteria
A flow chart diagram of study participation at each phase is shown in Fig. 1. Inclusion criteria were: age 55-85 at the start of the trial, subjective memory concerns, at least a high school education, having a current primary care doctor (or agreement to acquire a primary care doctor), ability to read and write English, time availability of 4-5 hours/week, and good general health (e.g., no active fevers, no recent heart transplants). Exclusion criteria included: major depression, known neurological illness (e.g., Alzheimer's or other dementia, Parkinson's, epilepsy, multiple sclerosis), serious psychiatric diagnosis, substance abuse, complete blindness or deafness, plans to be out of town for more than 10 days during the active phase of the trial, being a current or past client of our centers, and being an employee or an employee's family member. Participants were recruited from the community surrounding participating center locations through digital and print ads via radio, newspaper, Facebook, and paid search ads. Ads were targeted to individuals with memory concerns, offered a free memory assessment, and directed potential participants to contact our center by telephone or online for pre-screening.
Pre-screening
Pre-screening surveys (over the phone or online) were used to gather information on inclusion and exclusion criteria. Five questions were selected from a subjective cognitive decline assessment (Gifford et al., 2015) to confirm the presence of subjective memory concern. Individuals who identified with two or more memory concerns were invited to come to the center for in-person screening.
Screening
Potential participants were given the Patient Health Questionnaire (PHQ-9), a brief measure of depression severity (Kroenke et al., 2001). Those who scored in the severe range (20-27) or indicated suicidal ideation on question nine were not eligible for the study; they were given a safety evaluation before being dismissed. Individuals who completed some or all of the screening procedures but were not invited to enroll in the study were given gift cards of a nominal amount.
Remaining individuals were given the Montreal Cognitive Assessment (MoCA). A summary figure that describes the protocol from this point in screening through the final follow-up assessment is shown in Fig. 2. The MoCA is a screening instrument developed specifically to detect MCI in older adults that can be administered in 10 minutes or less (Nasreddine et al., 2005). Individuals are scored on seven different domains: visuospatial/executive, naming, attention, language, abstraction, delayed recall, and orientation, and resultant scores range from 0-30. The MoCA is considered to be the most sensitive population-based cognitive screening tool (De Roeck et al., 2019). A cut-off of 25/26 is recommended by the test manufacturer to differentiate individuals in the MCI category (Nasreddine et al., 2005). This cutoff is likely to be inappropriate for some sub-populations (Milani et al., 2018), and alternative cutoffs have been suggested, including 23/24 for a cardiovascular population (McLennan et al., 2011), and 26/27 for a population with Parkinson's disease (Hoops et al., 2009). For screening purposes in the present study, individuals who scored 18-26 on the MoCA were invited to enroll. For potential participants who were ultimately enrolled in the study, MoCA scores from this timepoint (baseline) and the other three timepoints were the primary outcome measure for this study. Individuals also took the NeuroTrax BrainCare Testing Suite (NeuroTrax) during this screening session. The NeuroTrax is described in the Assessments section below.
Assessments
Individuals who enrolled in the study returned to a center location within one week of screening to complete the baseline timepoint assessment (Fig. 2). After the baseline assessment, the protocols for the following three timepoint assessments (pre-program, post-program, and follow-up) were identical.
Neurocognitive assessments and self-report questionnaires

The MoCA, described in the Screening section above, was used for screening and as the primary outcome assessment for the study. The NeuroTrax was originally developed and validated for the diagnosis of MCI in clinical practice and in research (Dwolatzky et al., 2004); it has been validated in both demented (Dwolatzky et al., 2010) and normally aging populations (Lampit et al., 2015). The NeuroTrax covers seven cognitive domains, consisting of memory, executive function, attention, visual spatial, verbal function, problem solving, and working memory. Overall performance incorporates these domains into one age-normed Global Cognitive Score.
The Beck Depression Inventory-II (BDI-ii) (Beck et al., 1996) is a self-report inventory and screening tool for depression. If a participant scored 31 or higher on the BDI-ii at any assessment, a safety check was performed, and the subject was dismissed from the trial. The Beck Anxiety Inventory (BAI) (Beck & Steer, 1993) is a self-report inventory for measuring subjective, somatic, or panic-related symptoms of anxiety. The Insomnia Severity Index (ISI) (Bastien et al., 2001) is a self-report inventory designed to assess sleep patterns and presence/severity of insomnia during a 14-day period. The Pittsburgh Sleep Quality Index (PSQI) (Buysse et al., 1989; Mollayeva et al., 2016) is a self-report inventory to measure sleep quality and disturbances over a one-month interval. The Epworth Sleepiness Scale (ESS) (Johns, 1992) is a self-report inventory designed to assess level of daytime sleepiness. ESS scores ≥11 are considered indicative of above-normal sleepiness that requires further evaluation; any participants with a score in this range were advised to discuss possible sleep apnea with their physician. The Work and Social Adjustment Scale (WSAS) (Mundt et al., 2002) is a self-report inventory used to provide a measure of global functional impairment due to a specific issue, in this case, memory concerns.
Physiological assessments
Quantitative electroencephalography (QEEG) and cardio-respiratory measurement (to evaluate HRV and respiration rate) were performed at each assessment; QEEG was additionally performed at a mid-program timepoint to confirm progress and adjust the NFB protocol, if needed. QEEG, HRV, heart rate, and breathing rate collection protocols were performed as described in a previous publication by our group (Groeneveld et al., 2019). Succinctly, one 5-minute eyes-closed and one 5-minute eyes-open QEEG recording were collected at each assessment using 19 electrode locations: O1, O2, P3, PZ, P4, T3, T4, T5, T6, C3, CZ, C4, F3, FZ, F4, F7, F8, FP1, and FP2, with a Neuron-Spectrum-3 amplifier (Neurosoft, Ivanovo, Russia) and Neuroguide software (Applied Neuroscience, Inc., Largo, FL) (Thatcher, 2012), using a sampling rate of 500 Hz. HRV data were collected for seven minutes using a blood volume pulse finger sensor using a sampling rate of 128 Hz with a ProComp5 or ProComp Infiniti amplifier (both from Thought Technology, Montreal, Canada) with Biograph software version 6.0.4. This seven-minute interval was also used to measure breathing rate, using a strain gauge respiration belt (Resp-Flex/Pro, Thought Technology). Seven minutes of HRV data were collected to increase the likelihood of selecting a five-minute artifact-free segment; if no artifact-free five-minute segment could be identified, the participant was excluded from HRV analysis. In a departure from the method of Groeneveld et al., 2019, during the HRV/breathing rate collection, participants were shown a screen displaying a random order of blue and yellow squares and instructed to silently count the blue squares (Eddie et al., 2014; Jennings et al., 1992). Because some HRV measurements are sensitive to breathing rate, this 'plain vanilla' task was intended to distract participants from focusing on their breathing, to prevent against exaggerated breathing, and to more accurately capture a normal breath pattern.
Study intervention: Memory boot camp
Following screening and the baseline assessment, all subjects began a three-month waiting period to serve as a control, during which they were instructed not to make any major lifestyle changes (Fig. 2).
Before beginning Memory Boot Camp, participants met with a brain coach at an orientation appointment to go over the details of the program and received the following: (1) a wrist-wearable sleep and activity tracker (MisFit, Burlingame, California) used to motivate participants for physical activity and sleep behavior change; (2) Metagenics brand supplements (Aliso Viejo, CA, USA): Omega-3 fatty acid containing eicosapentaenoic acid and docosahexaenoic acid (1000 mg, two pills/day; participants taking anticoagulant medications were advised not to take this supplement), Ceralin Forte (B-complex, three pills/day), and vitamin D3 (1000 IU/day); (3) the Memory Playbook (an in-house educational resource to track progress); and (4) an instructional document on memorization techniques. Participants then started the active program phase of the trial, which consisted of sessions two to three times per week for three months. For sessions that included both brain coaching and NFB + HRV biofeedback, brain coaching took place first.
Brain coaching sessions
Participants met with their brain coach approximately two times per week, for a total of 24 meetings of 40-60 minutes each. A summary is given in the diagram in Fig. 2. Brain coaches served as instructors as well as accountability partners to encourage participants to fulfill goals set during brain coaching sessions. These goals were based on improving diet (work toward Mediterranean diet (defined in Fig. 2), taking supplements, and increasing water intake due to the susceptibility for dehydration in older adults (Wotton et al., 2008)), increasing sleep to eight hours using sleep hygiene recommendations (Irish et al., 2015;Souman et al., 2018;Stepanski & Wyatt, 2003), increasing exercise to 150 minutes per week, reducing stress (using daily mindfulness meditation and specific strategies based on individual stressors in the participant's life), cognitive training (Happy Neuron Pro: a computer-based program (www.happyneuronpro.com), memorization of word lists with various techniques), and increasing social interaction with peers via group activities and/or volunteer activities based on the participants' interests.
NFB + HRV biofeedback sessions
Each session began with HRV biofeedback training, which was performed as described in (Groeneveld et al., 2019). In brief, participants were trained using respiratory and cardiorespiratory biofeedback while wearing a blood volume pulse sensor on a finger and a respiration belt around the waist. For respiratory biofeedback, participants had to slow their breaths to approximately six breaths per minute and practice smooth, consistent breath patterns, and for cardiorespiratory biofeedback, participants were coached to move into slow, smooth, consistent breath patterns so that the peak of their inhalation matched the peak of their respiratory sinus arrhythmia (the rising and falling of the heart rate as a result of inhalation and exhalation). Increasing the amplitude of respiration and slowing it in turn increases the amplitude of the RSA and slows the oscillatory rhythm to approximately 0.1 Hz, moving it into the low frequency range when RSA is processed via spectral analysis (Lehrer & Gevirtz, 2014). The respiratory biofeedback portion of this protocol was continued throughout NFB training. Feedback for NFB training was conducted for 20 to 30 minutes using a ProComp Infiniti device (sampling rate 256 Hz), Biograph software, and Neuroguide's Dynamic Link Library (DLL; Applied Neuroscience Inc.) utilizing a joint time-frequency analysis algorithm, by trained EEG technicians and overseen by Biofeedback Certification International Alliance-certified clinicians. Each participant's pre-program QEEG assessment was analyzed and visually examined to determine the most appropriate NFB protocol, given the findings unique to each individual. Most participants received 30 sessions of multi-lead z-score NFB at four sites (based on the region of maximal dysfunction in their QEEG assessment, and subject to change at the mid-program assessment if needed), which was performed as described in detail in (Groeneveld et al., 2019). Participants with attenuated alpha amplitudes on their QEEG assessment (n = 3) were given ten sessions of alpha amplitude training intended to increase alpha band power, followed by 20 sessions of multi-lead z-score NFB (as described above).
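As an illustrative sketch only (synthetic data, not the study's recordings), the snippet below shows how one could verify that a heart-rate trace oscillates near the ~0.1 Hz resonance targeted by this kind of paced-breathing protocol, using a Welch power spectrum:

```python
import numpy as np
from scipy.signal import welch

fs = 4.0                                  # Hz; a common resampling rate for heart-rate series
t = np.arange(0, 300, 1 / fs)             # five minutes of samples
# Synthetic heart-rate trace with a strong 0.1 Hz oscillation,
# mimicking RSA during paced breathing at ~6 breaths per minute
hr = 70 + 5 * np.sin(2 * np.pi * 0.1 * t) + np.random.normal(0, 0.5, t.size)

freqs, psd = welch(hr - hr.mean(), fs=fs, nperseg=256)
print(f"dominant oscillation near {freqs[np.argmax(psd)]:.2f} Hz")  # ~0.10 Hz
```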
Blinding
The clinical trial coordinator applied a code, using state names, to each record to blind the data processing researcher and the statistician. Blinding was used for the first three timepoints but was not used for the follow-up records, as these records were identifiable due to change in sample size.
Data analysis

HRV data analysis

HRV data files were exported to CardioPro software (version 1.2.1, Thought Technologies) for normalization of missed heartbeats and visually inspected for validity. Data were then exported and processed in Kubios (version 3.2.0; Tarvainen et al., 2014). Using Kubios, an artifact-free five-minute segment was selected for analysis, as described in (Williams et al., 2015). This study utilized short-term HRV metrics, for which five-minute recordings are considered appropriate (Shaffer & Ginsberg, 2017; The Task Force Report, 1996). Respiration rate was retrieved from the original Thought Technology HRV files. HRV parameters analyzed included mean heart rate; the power of the following frequency-domain measures: high-frequency, 0.15-0.4 Hz (HF), low-frequency, 0.04-0.15 Hz (LF), and very-low-frequency, 0.016-0.04 Hz (VLF); and the two most common time-domain measures used to calculate HRV: the standard deviation of normal-to-normal beat intervals (SDNN) and the root mean square of successive normal-to-normal beat interval differences (RMSSD) (Baevsky & Chernikova, 2017; Laborde et al., 2017; Shaffer & Ginsberg, 2017; The Task Force Report, 1996; van den Berg et al., 2018).
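The two time-domain measures have simple definitions; a minimal sketch, using hypothetical interval data rather than study recordings, is:

```python
import numpy as np

def sdnn(nn_ms):
    """SDNN: standard deviation of normal-to-normal (NN) intervals, in ms."""
    return np.std(nn_ms, ddof=1)

def rmssd(nn_ms):
    """RMSSD: root mean square of successive NN-interval differences, in ms."""
    return np.sqrt(np.mean(np.diff(nn_ms) ** 2))

# Hypothetical five-minute NN-interval segment (~70 bpm), for illustration only
rng = np.random.default_rng(0)
nn = 850 + rng.normal(0, 30, 350)

print(f"SDNN  = {sdnn(nn):.1f} ms")
print(f"RMSSD = {rmssd(nn):.1f} ms")
```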
QEEG data analysis
All records used in analysis were in the eyes open condition with an average reference montage. Artifact was manually removed from these records by a blinded researcher (blinding was applied for the first three timepoints only; it was necessary to unblind for the fourth timepoint). A minimum of 30 seconds of artifact-free data was required for inclusion of a record in the final analysis; the overall average length of included records was 69 seconds. Assessed across the four timepoints were eight z-score measurements, including absolute power and relative power for the frequency bands delta (1-4 Hz), theta (4-8 Hz), beta (12-25 Hz), and high beta (25-30 Hz). The alpha frequency band was not assessed, due to small sample size (fewer than five records per timepoint that met inclusion criteria, outlined below). The data analysis protocol developed by Wigton and Krigbaum (Krigbaum & Wigton, 2015) was used as the model for our method to determine whether QEEG parameters had changed after treatment in accordance with the specific z-score protocol. Similar to the deviations from Wigton and Krigbaum used in (Groeneveld et al., 2019), the method utilized in the present study separately calculated average |z-score| values for each of the eight listed measurements and used 1.5 as the absolute value threshold for transformed z-scores to identify sites of interest (SOIs). For each site trained for each individual, absolute values of pre-program timepoint z-scores for each of these eight measurements were used to define SOIs.
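A minimal sketch of this SOI selection rule, with hypothetical z-scores:

```python
import numpy as np

def sites_of_interest(pre_z, threshold=1.5):
    """Indices of trained sites whose pre-program |z-score| meets or
    exceeds the SOI threshold described above."""
    return np.flatnonzero(np.abs(np.asarray(pre_z)) >= threshold)

# Hypothetical pre-program z-scores for one metric at four trained sites
pre_z = [0.4, 1.8, -2.1, 1.2]
soi = sites_of_interest(pre_z)
print(soi)                                    # [1 2]
print(np.abs(np.asarray(pre_z))[soi].mean())  # average |z| over SOIs: 1.95
```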
Statistical analysis
All statistical analyses were performed with SAS® Enterprise Guide, Version 7.1 (SAS Institute Inc.). The graphs were made using GraphPad Prism (Version 8), except for scatter plot graphs, which were made using SAS. Proc Mixed was used to perform a repeated measures analysis for all statistical tests. To account for the variation among the time intervals, the spatial power covariance structure was utilized. The models for the neurocognitive and self-report assessments controlled for covariates gender, age, and the interaction between gender and age. The models for the HRV measurements all controlled for respiration rate, age, gender, and the interaction between age and gender. Backward selection was used to drop or keep covariates. Although respiration rate was a significant predictor for only heart rate, respiration rate was kept in the model due to its inherent relation to HRV. Respiration rate was not used in the model predicting respiration rate. To satisfy repeated measures assumptions including normality, the natural log transformation was used on SDNN, RMSSD, VLF power, LF power, and HF power. The models for QEEG did not control for covariates, as sample sizes were not large enough to support a model with multiple parameters. For absolute power of delta, within the QEEG analysis, empty cells within the R correlation matrix were present; in this instance, the ANTE(1) covariance structure was used to correct this. Post-hoc analyses were performed to compare the least squares mean (LS mean) changes from timepoints. The resultant p-values were compared to appropriate Bonferroni-corrected significance levels for four multiple comparisons (α_B = 0.0125). After reviewing results, an unplanned exploratory analysis was conducted to assess the relationship among pre- to post-changes in MoCA with the pre- to post-changes in the HRV parameters and QEEG parameters. A scatter plot with jitter was created for each association. To aid in the visualization of the relationship, a 90% prediction ellipse was applied to each scatter plot.
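The Bonferroni arithmetic is straightforward; a small sketch, reusing two p-values reported in the Results below, is:

```python
# Bonferroni-corrected significance level for four pairwise comparisons
alpha = 0.05
alpha_b = alpha / 4
print(alpha_b)  # 0.0125, the alpha_B used throughout this paper

# A comparison is declared significant only when p < alpha_b
for name, p in {"MoCA pre-to-post": 0.0001, "NeuroTrax pre-to-post": 0.0493}.items():
    print(name, "significant" if p < alpha_b else "not significant")
```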
Improvement in score for most neurocognitive and self-report measures over treatment period
Three months before the start of the Memory Boot Camp intervention, participants took all neurocognitive and self-report measures for the first time (baseline assessment). Participants took each measure again at pre-program, post-program, and follow-up assessment timepoints. These study measures included the MoCA (the primary outcome measure for this study), NeuroTrax, BAI, BDI-ii, ESS, ISI, PSQI, and WSAS. Statistical models were significant for the effect of time on score for all eight of these measures (p ≤ 0.0013). This suggests the measurements changed across the different periods (control, treatment, and follow-up). Pairwise LS mean change values for these measures are shown in Table 2, and LS mean values for each timepoint are shown in Fig. 3 along with p-values for LS mean change between the relevant timepoints. For the MoCA, both the baseline and pre-program score LS mean values were within the "Mild Cognitive Impairment" range (Fig. 3). Baseline LS mean values for the BAI were within the "mild anxiety" range, those for the ESS were within the "higher normal daytime sleepiness" range, and those for the PSQI were within the "poor sleep quality" range. Baseline LS mean values for all other neurocognitive and self-report measures were within normal/subclinical ranges. There was no evidence to support a change in MoCA score during the control waiting period (p = 0.2761; Table 2). Further, there was no evidence to support a change in score for any other neurocognitive or self-report measure during the waiting period (p ≥ 0.276) except for the NeuroTrax, for which the LS mean increased (improved) from 97.79 to 100.69, for an LS mean change of 2.89 points (p = 0.0002). For the NeuroTrax, both the baseline and pre-program score LS mean values were within the "Probable Normal" range. For the treatment period from pre- to post-program, LS mean values on the MoCA increased (improved) from 22.68 to 24.25, for an LS mean change value of 1.57 points (p < 0.0001; d_z = 0.67). Participants also experienced improvement in score on the BAI over the treatment period (p = 0.0001; d_z = -0.60), with the pre- and post-treatment LS mean scores both within the "mild anxiety" range. On the BDI-ii depression measure, participants experienced improvement in score over the treatment period (p < 0.0001; d_z = -0.98), with the pre- and post-treatment LS mean scores both within the "normal" range. On all three measures of sleep, the ESS, ISI, and PSQI, participants experienced improvement in score (p ≤ 0.0001; d_z ≤ -0.59) over the treatment period. On the WSAS measure of degree of functional impairment due to memory concerns, participants experienced an improvement in score over the treatment period (p < 0.0001; d_z = -0.97), with pre- and post-treatment LS mean scores both within the "subclinical" range. There was not statistically significant evidence to support a change in NeuroTrax score (p = 0.0493) over the treatment period after Bonferroni correction for multiple testing. For all tested neurocognitive and self-report measures, LS mean change values during the six-month follow-up period after the conclusion of treatment were not significant. For the MoCA, BAI, BDI-ii, ESS, ISI, PSQI, and WSAS, p ≥ 0.2872 for the follow-up period. For the NeuroTrax, the LS mean change value for the follow-up period was not significant after Bonferroni correction (p = 0.0327).
Although there was no significant change in NeuroTrax score during the treatment period or follow-up period, both periods showed a trend in the positive direction (improvement). However, there was a significant increase in NeuroTrax score between pre-treatment and 6-month follow-up, indicating, perhaps, that the cumulative increase over this longer time period represented a meaningful improvement in the Global Cognitive Score.
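For reference, the paired effect size d_z reported above is the mean of the paired differences divided by their standard deviation; a minimal sketch with hypothetical scores:

```python
import numpy as np

def cohens_dz(pre, post):
    """Cohen's d_z for paired samples: mean paired difference
    divided by the standard deviation of the differences."""
    diff = np.asarray(post, dtype=float) - np.asarray(pre, dtype=float)
    return diff.mean() / diff.std(ddof=1)

# Hypothetical paired MoCA scores for five participants (illustration only)
pre = [22, 23, 21, 24, 22]
post = [24, 24, 23, 25, 24]
print(f"d_z = {cohens_dz(pre, post):.2f}")
```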
HRV parameters and respiration rate
In order to produce approximately normal distributions, SDNN, RMSSD, VLF power, LF power, and HF power were transformed to natural log values for statistical analysis. Statistical models were significant for the effect of time on the HRV parameters ln(SDNN) (p = 0.0031), ln(RMSSD) (p = 0.0018), and for respiration rate (p < 0.0001), but not for heart rate (p = 0.0140), ln(VLF power) (p = 0.1199), ln(LF power) (p = 0.0250), or ln(HF power) (p = 0.0111) after Bonferroni correction for multiple testing (α_B = 0.0071). For HF, SDNN, and RMSSD, a higher value is considered an improvement (Baevsky & Chernikova, 2017). Pairwise LS mean change comparisons were calculated for HRV parameters and respiration rate between the four assessment timepoints of this study (Table 3). LS mean values at each timepoint are graphed in Fig. 4, with significant LS mean changes indicated.
Over the control period, there was a significant LS mean increase for ln(HF power), ln(SDNN), and ln(RMSSD). After back-transforming from natural log, HF power increased from a baseline value of 147.19 ms² to 292.56 ms² at the pre-program timepoint. SDNN increased from a baseline value of 21.31 ms to 28.99 ms at the pre-program timepoint, and RMSSD increased from a baseline value of 29.15 ms to 42.63 ms at the pre-program timepoint. Nunan and colleagues (Nunan et al., 2010) have suggested that a normal range for SDNN is between 32 and 93 ms, while the normal range for RMSSD is between 19 and 75 ms. Therefore, SDNN values for participants in the present study were diminished at both the baseline and pre-treatment timepoints; RMSSD values were consistent with the normal range at those timepoints. There was no significant change over the treatment period or follow-up period for ln(HF power), ln(SDNN), or ln(RMSSD). Although there was no significant change, SDNN mean values were within the normal range at post-treatment (33.35 ms) and at 6-month follow-up (38.19 ms).
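One detail worth noting, illustrated below with hypothetical values: back-transforming the mean of ln-values yields a geometric mean, which is smaller than the arithmetic mean for right-skewed quantities like HRV power, and which is also why error bars computed on the ln scale do not carry over directly to the back-transformed plots.

```python
import numpy as np

# Hypothetical, right-skewed HF power values (ms^2), for illustration only
hf = np.array([60.0, 110.0, 150.0, 420.0, 900.0])

arithmetic_mean = hf.mean()
back_transformed = np.exp(np.log(hf).mean())  # mean on ln scale, then exp

print(f"arithmetic mean       = {arithmetic_mean:.1f} ms^2")   # 328.0
print(f"back-transformed mean = {back_transformed:.1f} ms^2")  # ~206 (geometric mean)
```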
There was no evidence to support a change in respiration rate during either the waiting period or the follow-up period; however, over the treatment period, LS mean breaths per minute decreased from 14.42 to 12.24, for an LS mean change value of -2.19 breaths per minute (p < 0.0001; d_z = -1.79). A decrease in respiration rate after the Memory Boot Camp program would be expected in accordance with the HRV biofeedback protocol utilized, which rewarded a rate of 6-8 breaths/minute. There was no evidence to support a change in mean heart rate, ln(VLF power), or ln(LF power) over any period analyzed in the study.

Fig. 4. Change in HRV Physiological Measures across Timepoints. LS mean values for HRV parameters and breathing rate at each timepoint: -3 months (baseline assessment), 0 months (pre-treatment assessment), 3 months (post-treatment assessment), and 9 months (follow-up assessment). Pairwise LS mean change values between timepoints are summarized in the top of each graph (values from Table 3). Respiration rate decreased during the active treatment period (gray shading). Ln(SDNN), ln(RMSSD), and ln(HF) LS mean values increased over the control waiting period. Although not specifically labeled, there was no change over the waiting period for ln(VLF) or ln(LF). Error bars represent the standard error. VLF: power in the very low frequency domain; LF: power in the low frequency domain; HF: power in the high frequency domain. SDNN: standard deviation of normal-to-normal beat intervals; RMSSD: root mean square of successive normal-to-normal beat interval differences. (a) These LS mean values have been back-transformed from the natural log values displayed in the top graph. Error bars are not included because they are not appropriate for back-transformed data. Significance markers are not included on back-transformed data graphs because they appear on the ln-transformed graphs. ns: p > 0.0125, change not significant at the Bonferroni-corrected significance level. * p ≤ 0.0125. **** p ≤ 0.0001.
QEEG parameters
All participants in this study received at least 20 sessions of z-score NFB, the goal of which is to simultaneously train 248 different QEEG metrics toward their mean value (based on each participant's age-matched normal distribution) (Collura et al., 2010; Thatcher & Lubar, 2009). During each z-score NFB session, the only QEEG metrics trained are those that diverge from the normative mean by a quantity greater than the z-score threshold set by the technician. In effective z-score NFB, therefore, one would expect that all of a participant's QEEG metrics that diverged beyond this threshold from the normative mean before training would be significantly closer to the mean after training (approaching z = 0). For the present study, data analysis was performed on within-subject change in QEEG parameters collected during the eyes open full-cap QEEG assessments. The parameters absolute power and relative power for the frequency bands delta, theta, beta, and high beta were compared between the four timepoints already described: baseline, pre-program, post-program, and follow-up. Because the z-score NFB protocol only trains QEEG metrics that are divergent from the mean, not all participants would be expected to receive z-score training for all eight of these QEEG metrics. Using our previously published statistical analysis protocol (Groeneveld et al., 2019), based on the approach of Wigton and Krigbaum, a participant's data for a particular metric were considered if the pre-program QEEG |z-score| was ≥1.5 at a trained site (an SOI, see Methods). For alpha frequency bands, statistical analysis was not possible because the number of participants with SOIs of |z-score| ≥ 1.5 at trained sites was fewer than five. Data for the remaining eight parameter/frequency band combinations are shown in Table 4, and LS mean values at each timepoint are graphed in Fig. 5, with significant LS mean changes indicated.
With the exception of relative power of beta (p = 0.0826), the effect of time on all QEEG parameters considered was significant at the α = 0.05 level. Post-hoc analyses were performed on all eight measures for investigational purposes. For all eight QEEG parameters examined, there was a trend toward the normative mean (decrease in |z-score|) over the treatment period. For six out of these eight parameters: absolute power of delta, absolute power of theta, relative power of theta, absolute power of beta, absolute power of high beta, and relative power of high beta, there was a significant decrease (improvement) in |z-score| after the treatment period at the Bonferroni-corrected significance level of α_B = 0.0125. Effect sizes were medium to large for all of these LS mean changes. There was no evidence to support an LS mean change over the follow-up period. These results are consistent with effective z-score NFB training of participants' brain oscillation parameters that was maintained at least six months after the conclusion of treatment.

Table 4 abbreviations: SOI: sites of interest (within each metric for each frequency band, |z-score| of baseline values that are farther than 1.5 standard deviations from zero for the sites trained, selected at pre-treatment timepoint); n: number of participants who had at least one SOI for the given frequency band/parameter; LS Mean: pairwise least squares mean change values in average distance from zero for SOIs; 95% CI LL: 95% confidence interval lower limit; 95% CI UL: 95% confidence interval upper limit; d_z: Cohen's d for effect size of paired differences; t: test statistic; p: p-value for post-hoc pairwise comparisons. * Bonferroni-corrected significance level α_B = 0.0125.

Fig. 5. Change in QEEG Metrics across Timepoints. For participants with SOIs at trained sites, LS mean |z-score| values at each timepoint: -3 months (baseline assessment), 0 months (pre-treatment assessment), 3 months (post-treatment assessment), and 9 months (follow-up assessment) are shown for the QEEG parameters absolute power and relative power percent for the frequency bands delta, theta, beta, and high beta. Pairwise LS mean change values between timepoints are summarized in the top of each graph (values from Table 4). There was no change in |z-score| over the control or follow-up period for any parameter. For absolute power of delta, absolute power of theta, relative power of theta, absolute power of beta, absolute power of high beta, and relative power of high beta, there was a significant decrease (improvement) in |z-score| over the treatment period (gray shading). Error bars represent the standard error. ns: p > 0.0125, change not significant at the Bonferroni-corrected significance level. * p ≤ 0.0125. ** p ≤ 0.008. *** p ≤ 0.001. **** p ≤ 0.0001.
Although not significant for any parameter at the Bonferroni-adjusted level, there was a trend for LS mean increase in |z-score| over the control period for all QEEG parameters examined. Despite the lack of statistical significance, this could represent an actual small increase in |z-score| from the baseline to pre-program timepoint, or it could indicate regression to the mean due to the data analysis protocol. The authors acknowledge that by selecting the sites and parameters for which absolute z-score values are larger than 1.5 at a single timepoint (the pre-program timepoint), the data analysis will inherently capture any regression to the mean at other timepoints. The fact that none of these changes are significant and that the slope gradients differ between the control and treatment time periods suggests that the changes in |z-score| over the treatment period were greater than any regression to the mean. In further support of this, all |effect sizes| for the control periods, which range from |d_z| = 0.11 to 0.76 (mean = 0.39), are smaller than (<60% of) the effect sizes for the corresponding treatment periods.
Correlation between change in MoCA score and physiological metrics
As an exploratory analysis, we determined the association between participants' change in MoCA score over the treatment period (the primary outcome measure) and the change in their physiological HRV and QEEG metrics over the same period. Due to the exploratory nature of this analysis, clinical significance was set at |r| ≥ 0.5. To determine these changes over the treatment period, each participant's value at pre-treatment was subtracted from their value at post-treatment. Data points for change on the MoCA versus each physiological metric are graphed in Fig. 6 along with 90% prediction ellipses to assist with visualization. Data points that demonstrate improvement are in black, and those with no change or decline are in gray. As an example, a representative participant in the present study had a MoCA score of 24 at pre-treatment and 26 at post-treatment, for a change from pre- to post-treatment of 2 (26 - 24 = 2), indicating an improvement in MoCA. The participant also had a pre-treatment relative power of theta |z-score| of 1.664 (1.664 standard deviations from the normative mean, z = 0) and a post-treatment relative power of theta |z-score| of 0.898, for a change in |z-score| over the treatment period of -0.766. This indicates improvement, because the participant's z-score was closer to the normative mean of z = 0 after the program. On the graph representing change in relative power of theta in Fig. 6, this participant's datapoint is represented as a black dot in the top left quadrant because they experienced improvement for change in MoCA score (Y-axis) and for change in relative power of theta |z-score| (X-axis). Correlation coefficients (r) for the change in MoCA score versus the change in individual physiological metrics are shown in Table 5. A significant correlation was found between change in MoCA score over the treatment period with change in two physiological metrics: relative power of delta (r = -0.576; p = 0.0813) and relative power of theta (r = -0.524; p = 0.0371). In both cases, the association between these variables was a negative linear correlation. This indicates that the increase (improvement) in participants' MoCA scores over the treatment period was associated with a decrease (improvement; closer to z = 0) in these QEEG |z-scores|.

Fig. 6. Correlation between Changes in MoCA Score and Physiological Metrics over the Treatment Period. Data points for participants' change from pre-treatment to post-treatment (post minus pre) on MoCA score (Y-axis) versus each physiological metric (X-axis) are graphed. Reference lines are drawn at X = 0 and Y = 0, which divide the graph into quadrants. Points that demonstrate improvement in both graphed measures (where possible) are shown in black, while those that demonstrate no change or decline are shown in gray. For 12 of the 15 graphs, points in a single quadrant are black and points in the other three quadrants are gray, because an improvement was defined for both the X-axis and Y-axis measures. For change in MoCA score, an improvement was an increase in score. For change in heart rate, respiration rate, and all eight QEEG metrics, an improvement was a decrease. For the HRV metrics ln(SDNN) and ln(RMSSD), an improvement was an increase. For the three graphs representing change in ln(VLF power), ln(LF power), and ln(HF power), two quadrants (top left and top right) contain black points and two quadrants (bottom left and bottom right) contain gray dots, because an improvement in the X-axis measure was not specifically defined during normal breathing conditions. 90% prediction ellipses are also graphed to aid in relationship visualization. The two graphs representing change in relative power of delta and relative power of theta (the bottom left panels) are indicated with an * symbol because there was a significant linear correlation found between change in MoCA score over the treatment period and change in the physiological metric (values from Table 5). In both cases, the association between these variables was a negative linear correlation.
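A minimal sketch of this kind of correlation check, using hypothetical paired change scores rather than the study's data:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical post-minus-pre changes for eight participants (illustration only)
delta_moca = np.array([2, 1, 3, 0, 2, 1, 4, 1])           # MoCA change (points)
delta_theta_z = np.array([-0.8, -0.2, -1.1, 0.1,          # change in relative theta
                          -0.6, -0.3, -1.4, -0.1])        # |z-score| at trained sites

r, p = pearsonr(delta_moca, delta_theta_z)
print(f"r = {r:.3f}, p = {p:.4f}")  # a negative r: larger MoCA gains pair with
                                    # larger reductions in theta |z-score|
```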
Discussion
This study evaluated the Memory Boot Camp program for older adults with both subjective and objective memory deficits. The baseline LS mean MoCA score for participants in this study was 22.29, 95% CI [21.26, 23.32], a value well below the standard cutoff for MCI cited by the test developer (25/26; Nasreddine et al., 2005), which suggests that many participants experienced symptoms of MCI before the program. The Memory Boot Camp program combines NFB and HRV biofeedback with brain coaching to support behavior change on modifiable risk factors known to be associated with age-related cognitive decline, MCI, and dementia. Cognitive skills and relevant symptoms for participants in this study were evaluated via neurocognitive tests and self-report questionnaires. LS mean changes in scores for these measures over the treatment period were compared to an equivalent pre-testing control waiting period and to a follow-up period six months after program completion. For the majority of neurocognitive tests and self-report questionnaires, participants experienced significant improvement in score over the treatment period, with little evidence of change in score during the waiting or follow-up time periods. On the primary outcome measure, the MoCA, participants experienced an LS mean change of 1.57 points (p < 0.0001) over the treatment period. Although not directly relevant to our study population, a recent study estimated (using an anchor-based method) a Minimal Clinically Important Difference (MCID) value of 1.22 for the MoCA in survivors of stroke who were undergoing rehabilitation (Wu et al., 2019). This suggests that the change in MoCA score during the treatment period experienced by participants in the present study could have clinical relevance in addition to statistical significance.
Participants in this study did not experience a significant score increase over the treatment period on the NeuroTrax cognitive test. There was, however, a significant NeuroTrax score increase recorded for participants during the waiting/control period, when participants were instructed not to make any major lifestyle changes (and during which we did not observe a significant score change in any other neurocognitive or self-report test). An obvious difference between MoCA and NeuroTrax is that MoCA is administered via "pencil and paper" whereas NeuroTrax is computerized. Older adults' attitudes toward technology have been shown to vary widely, based on factors such as increased age, gender, and socioeconomic status (Werner & Korczyn, 2012). It is possible that the increase in NeuroTrax score over the control period represented an increase in comfort with the technology. Unlike the MoCA, for which participants' baseline LS mean score was within the "Mild Cognitive Impairment" range (22.29), the baseline LS mean score for the NeuroTrax was within the "Probable Normal" range (97.79). Rather than reflecting a discrepancy between severity of underlying symptoms measured by the two instruments, this may instead represent differences in the sensitivity and specificity of the tests. MoCA, in particular, has demonstrated superior sensitivity to earlier/milder stages of cognitive decline, compared to tools like the Mini Mental State Examination (Freitas et al., 2013). A cross validation study of MoCA in a community-based population showed high sensitivity (97%) at the recommended MCI cutoff of 26, but only fair specificity (35%) at that point (Luis et al., 2009). When the cutoff score was lowered to 23, it showed both high sensitivity (96%) and specificity (95%). In contrast, NeuroTrax put forth a cutoff score of 96.25 (25% of one standard deviation below the normed mean) "as a best-balance normal/abnormal cutoff, with equivalent severity" of false positives and false negatives. It is therefore likely that the "Normal" NeuroTrax cutoff score of 103.75 (25% of one standard deviation above the normed mean) is similar to the MoCA cutoff score of 26. Importantly, by the time of 6-month follow-up, participants either approached or exceeded this Normal cutoff on both instruments, indicating a meaningful improvement in overall cognitive performance as a result of participating in the Memory Boot Camp program.
In addition to targeting known risk factors, such as diet and exercise, the Memory Boot Camp program aimed to improve stress management and overall mental health. Participants experienced improvement in self-reported measures of mental health and quality of life upon completion of the program. On average, participants reported lower levels of depressive symptoms (BDI-ii), anxious symptoms (BAI), and improvements in sleep quality (PSQI), insomnia severity (ISI), and excessive daytime sleepiness (ESS). These improvements were accompanied by an average increase in perceived ability to function in daily life (WSAS). It is plausible that participants' perceived improvements in memory and cognition (as measured by the MoCA) preceded their improvements in mood, or vice versa. In the dementia field, the cause-effect relationship between depression and dementia, which are tightly correlated, has been difficult to tease apart (Livingston et al., 2017). Regardless of the specific cause, participants significantly improved on average in mental health profile, and these improvements were maintained for at least six months after treatment.
The current protocol utilized a combination of NFB and HRV biofeedback; our center has previously shown that 30 sessions of similar "NFB + HRV biofeedback" protocols are associated with significant improvements in attention (Groeneveld et al., 2019) as well as anxiety and depression (White et al., 2017). The HRV biofeedback protocol utilized in this study was intended to normalize sympathetic and parasympathetic processes and facilitate stress recovery (Lehrer & Gevirtz, 2014; Lehrer & Vaschillo, 2008; Vaschillo et al., 2002). Although no changes were observed in HRV variables over the active treatment period, there was a significant decrease in breaths per minute during that time that was maintained six months after the completion of the program, which may indicate that participants learned to modulate their breathing in accordance with the Memory Boot Camp HRV biofeedback protocol. Participants began the study with reduced SDNN, a condition associated with elevated cardiovascular risk (The Task Force Report, 1996) and dysfunctional sympathetic activity that may inhibit autonomic function (Baevsky & Chernikova, 2017). It has previously been shown that older adults have lower values for many HRV parameters than do younger adults, and older adults experience smaller changes in HRV after biofeedback (Lehrer et al., 2006). Over the course of the present study, the only significant change in HRV metrics was an increase in HF power, SDNN, and RMSSD that took place over the control waiting period. Although increases in HF power, SDNN, and RMSSD are thought to be associated with improved autonomic regulation (Shaffer & Ginsberg, 2017) and increased vagal activity and resilience to stress (Carnevali et al., 2018), there is no evidence from the present study to suggest that the HRV biofeedback portion of the Memory Boot Camp program contributed to cognitive changes experienced by participants over the active treatment period.
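The time-domain HRV metrics named here, SDNN and RMSSD, have standard definitions over NN (normal-to-normal) inter-beat intervals. A minimal sketch with illustrative interval values, not study data:

```python
import numpy as np

# Standard time-domain HRV metrics from NN inter-beat intervals (ms).
# The interval values below are illustrative, not study data.
nn_ms = np.array([812.0, 798.0, 830.0, 845.0, 820.0, 806.0, 838.0])

sdnn = np.std(nn_ms, ddof=1)                   # SDNN: sample SD of NN intervals
rmssd = np.sqrt(np.mean(np.diff(nn_ms) ** 2))  # RMSSD: RMS of successive differences
mean_hr = 60000.0 / nn_ms.mean()               # mean heart rate (beats per minute)

print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms, mean HR = {mean_hr:.1f} bpm")
```

HF power, by contrast, is a frequency-domain metric requiring spectral analysis of the interval series, which is omitted here for brevity.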
The current study demonstrated significant improvements in six out of eight QEEG variables following four-channel z-score NFB. The variables changed in the direction of the Neuroguide database normative mean, which is the goal of z-score NFB (Thatcher & Lubar, 2009). Disregarding statistical significance, all QEEG LS mean changes from pre-treatment to post-treatment were in the direction of the database normative mean, as were all LS mean changes from the pre-treatment to follow-up timepoints. This indicates that the z-score NFB treatment normalized brain oscillation parameters for participants in this study, and that this normalization was maintained at least six months after training.
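Training each QEEG metric "toward the database normative mean" can be stated concretely: z-score NFB rewards movement of z = (x − μ_norm) / σ_norm toward 0. A minimal sketch with placeholder normative statistics (the Neuroguide database values are proprietary and not reproduced here):

```python
def qeeg_z(value, norm_mean, norm_sd):
    """z-score of one QEEG metric against a normative database entry."""
    return (value - norm_mean) / norm_sd

# Hypothetical example: relative theta power at one site before and after
# training, scored against placeholder normative statistics.
norm_mean, norm_sd = 0.20, 0.04
pre, post = 0.31, 0.24

print(f"pre z = {qeeg_z(pre, norm_mean, norm_sd):.2f}")    # 2.75: elevated
print(f"post z = {qeeg_z(post, norm_mean, norm_sd):.2f}")  # 1.00: toward z = 0
```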
In an exploratory post-hoc analysis, we observed a moderately strong linear association between normalized QEEG variables and improved cognition. Specifically, improvements in relative power of theta and relative power of delta were linearly correlated with improvements in the MoCA over the active treatment period. "Slowing" of the EEG has long been recognized as an indicator of progression from normal brain activity patterns to brain activity patterns associated with cognitive decline and dementia (Berger, 1933, as described in Johannesson et al., 1979; Gianotti et al., 2007; Malek et al., 2017). Increased relative power of theta, in particular, has been associated with increased incidence of conversion from MCI to dementia (Jelic et al., 2000), increased severity of dementia (L. A. Coben et al., 1985), and increased abnormalities in neuropsychological measures as well as presence of the pathological protein tau (Musaeus et al., 2018).
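Relative power, as used in these correlations, is a band's power divided by total power over the spectrum of interest. A sketch using Welch's PSD estimate with conventional band edges (delta 1–4 Hz, theta 4–8 Hz); the study's exact band definitions and montage are assumptions here:

```python
import numpy as np
from scipy.signal import welch

def relative_band_power(eeg, fs, band, total=(1.0, 30.0)):
    """Power in `band` as a fraction of power in `total`, via Welch's PSD."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    in_band = (freqs >= band[0]) & (freqs < band[1])
    in_total = (freqs >= total[0]) & (freqs < total[1])
    return psd[in_band].sum() / psd[in_total].sum()

# Illustrative input: 30 s of synthetic noise at 250 Hz standing in for EEG.
fs = 250
rng = np.random.default_rng(0)
eeg = rng.standard_normal(30 * fs)

rel_delta = relative_band_power(eeg, fs, band=(1.0, 4.0))
rel_theta = relative_band_power(eeg, fs, band=(4.0, 8.0))
print(f"relative delta = {rel_delta:.3f}, relative theta = {rel_theta:.3f}")
```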
The linear correlation of improvements in QEEG metrics with improvement in MoCA score in the present study is consistent with a role for z-score NFB in improving cognition in subjects with symptoms of MCI. The specific NFB protocol utilized in this study, z-score NFB, was selected due to our clinical experience with the protocol for treatment of ADHD (Groeneveld et al., 2019) and other clinical conditions (unpublished data). Several recent studies have demonstrated improved cognitive function in subjects with MCI following more traditional NFB training methods (Jang et al., 2019; Jirayucharoensak et al., 2019; Lavy et al., 2019). As discussed in the Introduction, there is a multitude of supportive studies of NFB in the published literature for a variety of conditions, with the strongest support existing for its use in ADHD (e.g., Van Doren et al., 2018). A recently published consensus paper authored by more than 80 researchers both supportive of, and skeptical of, NFB (Ros et al., 2020) acknowledges the promise of NFB and provides guidance for the appropriate design of future basic and clinical studies to advance the field of NFB in a more definitive direction.
It is known that health programs with a coach/accountability partner improve participants' motivation and adherence to the program, along with their physiological and psychological well-being (Kivelä et al., 2014; Kreitzer et al., 2008; Prince et al., 2017). Furthermore, the Memory Boot Camp program combines multiple strategies, which is likely to be more effective than a single strategy for cognitive decline/MCI (Sherman et al., 2017). The FINGER trial, which combined two years of intervention for some of the risk factors described above (cognitive stimulation, exercise, and diet changes) with management of vascular problems in older adults, demonstrated an 80% improvement in executive function and a 150% increase in cognitive speed (Ngandu et al., 2015; Rosenberg et al., 2018). The success of that program, too, may stem from its use of multiple strategies rather than a single one (Sherman et al., 2017). A clinical trial to test this strategy on a United States population at risk for developing Alzheimer's disease, US-POINTER, is currently ongoing, with an estimated study completion date of November 2023 (ClinicalTrials.gov ID: NCT03688126).
Limitations
The present study was a prospective trial of the Memory Boot Camp program, with a waitlist control period of three months and 6-month post-program follow-up. Statistical analysis was utilized such that participants acted as their own control. The limitation of this type of experimental design is the lack of a sham-control placebo group. Further, a single treatment condition prevented comparison of the different elements of the Memory Boot Camp program protocol. Although we found no evidence of a change in HRV metrics over the treatment period, it is possible that a change was missed due to the protocol utilized. For convenience, photoplethysmography was utilized rather than electrocardiography. Though it is more accurate, electrocardiography is more invasive as it requires increased direct contact with participants to apply adhesive sensors to the chest or wrist areas (Schäfer & Vagedes, 2013). Also for convenience, we utilized short-duration rather than long-duration (twenty-four hour) recordings of HRV, and we therefore did not evaluate the aspects of HRV that are only captured by long-duration recordings. Further, the 'plain vanilla' task given to participants during HRV recording sessions to prevent them from consciously modifying their breathing rate involved keeping track of the number of blue boxes on the screen. Although it was intended to produce a more consistent breathing environment to evaluate changes in HRV over time, this specific task might have been inappropriate for individuals with memory concerns because it may have introduced an unnecessary stressor (especially during the first baseline recording session), which can affect HRV metrics. Finally, participants in this study did not necessarily have a medical diagnosis of MCI. Their inclusion in the study was based on their baseline MoCA score and subjective memory concerns.
Despite these limitations, the study utilized multiple neurocognitive and self-report questionnaire assessment tools, and it assessed concomitant physiological change in HRV and QEEG parameters. Further, to our knowledge, this is the first study to report the use of z-score NFB with HRV biofeedback for subjective and objective memory concerns and symptoms of MCI.
Conclusions
After the Memory Boot Camp program, participants experienced improvements in mean scores on memory/cognition (MoCA) and on measures of depression (BDI-ii), anxiety (BAI), and sleep (ESS, ISI, PSQI), and these improvements were maintained at the six-month follow-up assessment. Participants also experienced mean score improvement over the treatment period on the WSAS, a measure of functional impairment due to memory concerns, which indicates that participants felt more able to deal with their memory symptoms after the Memory Boot Camp. There was no change in HRV parameters. However, participants experienced physiological improvements in breathing rate and brain oscillation parameters that were consistent with the specific protocols utilized, suggesting that the Memory Boot Camp program effected lasting physiological change. Finally, participants' change in MoCA score over the treatment period was correlated with change in two brain oscillation parameters utilized in the NFB portion of the Memory Boot Camp program. As there is currently no approved pharmaceutical treatment for MCI, our results for this multifactorial treatment strategy are particularly encouraging, including a significant change on the MoCA, which is known to be particularly sensitive to early changes in cognition. Given that each participant served as his/her own control, these trial results suggest that a fairly short three-month program incorporating z-score NFB, HRV biofeedback, memory and cognitive training, and one-on-one coaching to encourage behavior change in diet, sleep, physical fitness, and stress reduction is a promising treatment strategy for adults aged 55 to 85 with subjective and objective memory deficits and symptoms of MCI.
Human metapneumovirus infection in chimpanzees, United States.
Zoonotic disease transmission and infections are of particular concern for humans and closely related great apes. In 2009, an outbreak of human metapneumovirus infection was associated with the death of a captive chimpanzee in Chicago, Illinois, USA. Biosecurity and surveillance for this virus in captive great ape populations should be considered.
Zoological facilities in North America house endangered species of great apes with annual visitation rates of >100 million persons (1). Because humans and great apes are related genetically, interspecies transmission of infectious pathogens is a concern. Consequently, procedures are instituted to limit the potential spread of infectious pathogens (2).
Reports of outbreaks of human metapneumovirus (HMPV) infection with respiratory symptoms have been documented in wild great ape populations (3–5). All of these outbreaks have been attributed to exposure to humans because of the suspected HMPV-negative status of these populations. However, disease caused by HMPV has not been previously documented in North American zoo populations, despite the close proximity of humans and great apes. We report an outbreak of HMPV infection in 2009 in a troop of previously HMPV-negative chimpanzees (Pan troglodytes) in Chicago, Illinois, USA, that resulted in 1 death and an illness rate of 100%.
The Study
Two chimpanzee and 2 western lowland gorilla (Gorilla gorilla gorilla) troops were housed in separate areas within 1 building with shared airspace at a zoological facility in Chicago, Illinois, USA. Animals had periodic contact with keepers during daily feeding, cage cleaning, and training sessions. Biosecurity for staff in the great ape area included wearing gloves, dedicated footwear and clothing, handwashing, and use of footbaths when entering and exiting the facility and during movement between troops. Staff members were required to notify management if they had a confirmed or suspected respiratory infection so that they were removed from direct interaction with animals. No outside personnel were allowed direct contact; indirect contact was considered rare and only possible if visitors tossed objects into outdoor enclosures.
Within 1 week before the outbreak in chimpanzees, staff members at the great ape facility had respiratory disease (coughing and nasal discharge), which coincided with peak HMPV season in the United States (Figure 1). The affected chimpanzee troop consisted of 7 chimpanzees; the initial clinical sign (coughing) on March 18, 2009, was observed in 1 adult female (Table, http://wwwnc.cdc.gov/EID/article/20/12/14-0408-T1.htm).
Within 96 hours, all 7 chimpanzees had moderate-to-severe respiratory disease (≥2 characteristic clinical signs), and their intake of oral fluids was increased. One juvenile male with pectus excavatum had marked mucopurulent rhinorrhea, coughing, lethargy, tachypnea, dyspnea, and partial anorexia; he was also given an intramuscular broad-spectrum antimicrobial drug (ceftiofur, 25 mg/kg). Within 24 hours, the condition of this animal worsened, which necessitated sedation for rehydration and diagnostic assessment. Radiographs showed marked bronchointerstitial pneumonia. An additional antimicrobial drug (cefazolin, 25 mg/kg) and fluids (0.9% NaCl, 40 mL/kg/h) were administered, but the animal died the next morning.
The remaining animals in the troop were given antimicrobial drugs (cefazolin, 25 mg/kg; enrofloxacin, 5 mg/kg) and anti-inflammatory medication (flunixin meglumine, 0.25 mg/kg). Within 48 hours, all animals showed mild improvement. Antimicrobial drugs were given for 10 days. Fifteen days post-onset, clinical resolution had occurred. The other troops showed no signs of respiratory disease.
Necropsy of the animal that died showed that the lungs were firm, had not collapsed, and surrounding airways were mottled red-to-purple. Histologic analysis showed that lesions were similar to those observed in humans and indicated necrotizing bronchointerstitial pneumonia with type II pneumocyte hyperplasia, abundant fibrin, and streaming mucus in airways. Cilia were absent from many bronchial epithelial cells, and rare epithelial cells lacked any identifiable cytoplasmic membrane or nuclear structure, suggestive of the smudge cells commonly found in human HMPV infection (6) (Figure 2). Only rare gram-positive cocci were observed, and the lack of extensive suppurative inflammation suggested that these findings were not a major contributing factor. Bacterial lung tissue cultures were negative. Lung tissue was screened for viral respiratory pathogens by real-time reverse transcription PCR using published methods (7). Sections were positive for HMPV and negative for adenoviruses, coronaviruses, influenza viruses A and B, human parainfluenza viruses 1–4, bocavirus, rhinovirus, and respiratory syncytial virus.
To assess troop HMPV exposure, we conducted a serologic study by measuring IgG against HMPV in available serum samples collected at various times before and after the outbreak (Table) (8). Serum samples were usually collected opportunistically, often not during clinical illness. Unlike samples from other troops, serum samples from chimpanzee troop 1 showed that these animals were seronegative for HMPV before the outbreak; 100% seroconversion was observed 1–3 years later. Serum samples obtained before 2009 from the other troops had stable levels of IgG against HMPV, and these troops had experienced 12 episodes in which a ≥4-fold increase in titer or seroconversion was noted, indicating exposure within the testing interval. Overall, HMPV seroprevalence in the year before the outbreak was 42% for the chimpanzee troops and 75% for the gorilla troops. Seroprevalence 3 years post-outbreak was 100% and 92%, respectively.
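The exposure criterion applied above, seroconversion or a ≥4-fold rise in titer, is a simple decision rule over paired reciprocal titers. A minimal sketch with hypothetical titer values, not the study's data:

```python
def exposure_evident(pre_titer, post_titer):
    """Decision rule used above: seroconversion (seronegative -> seropositive)
    or a >=4-fold rise in reciprocal IgG titer; 0 denotes seronegative."""
    if pre_titer == 0:
        return post_titer > 0            # seroconversion
    return post_titer >= 4 * pre_titer   # >=4-fold rise in titer

# Hypothetical paired samples, not the study's data.
print(exposure_evident(0, 160))   # True: seroconversion
print(exposure_evident(40, 160))  # True: exactly 4-fold rise
print(exposure_evident(80, 160))  # False: only 2-fold rise
```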
Conclusions
We report an outbreak of HMPV infection producing illness and death in chimpanzees in a North American zoo. Although the human source of the infection remains unknown, staff members in the great ape area had respiratory disease just before the outbreak in chimpanzees. Serologic testing of staff for respiratory viruses could not be performed. Post-outbreak, in addition to biosecurity measures already in place, all staff working with primates have been required to wear facemasks during direct primate interactions.
Unlike this situation, chimpanzees experimentally infected with HMPV have shown only mild cold-like clinical signs in seronegative animals and no clinical signs in seroconverted animals (9). Seroconverted, naturally infected, captive-bred chimpanzees represented 61% of the laboratory population, which demonstrated that captive animals are readily infected with HMPV (9). Worldwide, nearly 100% of the human population has seroconverted to HMPV by 10 years of age, and most illness and death occurs in young, elderly, and immunocompromised persons, although persons in any age group can become infected (10,11). Immunity is transient, and reinfections with the same or different strains are common, but illness is reduced (10,11). Recent evidence suggests these findings also apply to captive and wild chimpanzees (9,12) and other primates (13). In this instance, the congenital thoracic defect may have affected the ability of the chimpanzee to survive. In contrast to other cases in which apes were infected with Streptococcus pneumoniae (3–5), only rare lung bacteria were observed histologically and lung bacterial cultures were negative. These findings, in combination with the absence of major suppurative inflammation, suggest that HMPV was the primary pathogen.
Given the ubiquitous nature of this human virus in North America and the frequency with which it infects humans, it is notable that all members of the affected chimpanzee troop were born in captivity and had contact with humans throughout their lives, yet still remained negative for this virus until March 2009. Because HMPV is a recently discovered virus that closely resembles other respiratory viruses in its clinical course and few animal facilities specifically test for it, the degree of illness associated with this disease in the captive great ape population is unknown. Therefore, enhanced biosecurity and disease surveillance measures for HMPV should be considered for great apes. In addition to the commonly tested viral respiratory pathogens of great apes, surveillance for HMPV by serologic analysis at quarantine or preventative medical examinations would provide additional benefits. These procedures would enable management to tailor biosecurity protocols and procedures to limit the risk for exposure of HMPV-negative animals or troops, particularly during the height of the HMPV season.